Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence: Should You Teach It To Your Employees? – Forbes


AI is becoming strategic for many companies across the world. The technology can be transformative for just about any part of a business.

But AI is not easy to implement. Even top-notch companies have challenges and failures.

So what can be done? Well, one strategy is to provide AI education to the workforce.

"If more people are AI literate and can start to participate and contribute to the process, more problems, both big and small, across the organization can be tackled," said David Sweenor, the Senior Director of Product Marketing at Alteryx. "We call this the Democratization of AI and Analytics. A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few."

Just look at Levi Strauss & Co. Last year the company implemented a full portfolio of enterprise training programs, for all employees at all levels, focused on data and AI for business applications. For example, there is the Machine Learning Bootcamp, an eight-week program for learning Python coding, neural networks and machine learning, with an emphasis on real-world scenarios.

"Our goal is to democratize this skill set and embed data scientists and machine learning practitioners throughout the organization," said Louis DeCesari, the Global Head of Data, Analytics, and AI at Levi Strauss & Co. "In order to achieve our vision of becoming the world's best digital apparel company, we need to integrate digital into all areas of the enterprise."

Granted, corporate training programs can easily become a waste. This is especially the case when there is not enough buy-in at the senior levels of management.

It is also important to have a training program that is more than just a bunch of lectures. "You need to have outcomes-based training," said Kathleen Featheringham, the Director of Artificial Intelligence Strategy at Booz Allen. "Focus on how AI can be used to push forward the mission of the organization, not just training for the sake of learning about AI." There also should be roles-based training. "There is no one-size-fits-all approach to training, and different personas within an organization will have different training needs."

AI training can definitely be daunting because of the many topics and the complex concepts. In fact, it might be better to start with basic topics.

"A statistics course can be very helpful," said Wilson Pang, the Chief Technology Officer at Appen. "This will help employees understand how to interpret data and how to make sense of data. It will equip the company to make data-driven decisions."

There also should be coverage of how AI can go off the rails. "There needs to be training on ethics," said Aswini Thota, a Principal Data Scientist at Bose Corporation. "Bad and biased data only exacerbate the issues with AI systems."

For the most part, effective AI is a team sport. So it should really involve everyone in an organization.

"The acceleration of AI adoption is inescapable; most of us experience AI on a daily basis whether we realize it or not," said Alex Spinelli, the Chief Technology Officer at LivePerson. "The more companies educate employees about AI, the more opportunities they'll provide to help them stay up-to-date as the economy increasingly depends on AI-inflected roles. At the same time, nurturing a workforce that's ahead of the curve when it comes to understanding and managing AI will be invaluable to driving the company's overall efficiency and productivity."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as one on COBOL.

Read more from the original source:
Artificial Intelligence: Should You Teach It To Your Employees? - Forbes

A.I. Can Now Write Its Own Computer Code. That's Good News for Humans. – The New York Times

As soon as Tom Smith got his hands on Codex, a new artificial intelligence technology that writes its own computer programs, he gave it a job interview.

He asked if it could tackle the coding challenges that programmers often face when interviewing for big-money jobs at Silicon Valley companies like Google and Facebook. Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid ZIP codes?

It did both instantly, before completing several other tasks. "These are problems that would be tough for a lot of humans to solve, myself included, and it would type out the response in two seconds," said Mr. Smith, a seasoned programmer who oversees an A.I. start-up called Gado Images. "It was spooky to watch."
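For readers curious what such interview questions involve, the two tasks Mr. Smith posed are simple enough to sketch by hand. A minimal Python version might look like this (the validity rule assumed here is the US format: five digits, optionally followed by a hyphen and four more):

```python
import re

def dashify(sentence):
    """Replace every space in a sentence with a dash."""
    return sentence.replace(" ", "-")

def is_valid_us_zip(code):
    """Check a ZIP code against the US 5-digit (or ZIP+4) format."""
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None

print(dashify("hello world again"))   # hello-world-again
print(is_valid_us_zip("94103"))       # True
print(is_valid_us_zip("9410"))        # False
```

A "ZIP code" can of course be validated against other rules; the regular expression above is just one common convention, not necessarily the one Codex was asked for.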

Codex seemed like a technology that would soon replace human workers. As Mr. Smith continued testing the system, he realized that its skills extended well beyond a knack for answering canned interview questions. It could even translate from one programming language to another.

Yet after several weeks working with this new technology, Mr. Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will end up boosting human productivity. It may even help a whole new generation of people learn the art of computers, by showing them how to write simple pieces of code, almost like a personal tutor.

"This is a tool that can make a coder's life a lot easier," Mr. Smith said.

About four years ago, researchers at labs like OpenAI started designing neural networks that analyzed enormous amounts of prose, including thousands of digital books, Wikipedia articles and all sorts of other text posted to the internet.

By pinpointing patterns in all that text, the networks learned to predict the next word in a sequence. When someone typed a few words into these "universal language models," they could complete the thought with entire paragraphs. In this way, one system, an OpenAI creation called GPT-3, could write its own Twitter posts, speeches, poetry and news articles.
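The idea of predicting the next word from observed patterns can be illustrated with a deliberately tiny sketch, a toy bigram model. Real systems like GPT-3 use neural networks trained on billions of words, but the basic principle of choosing the most common continuation is the same:

```python
from collections import defaultdict

# A toy "corpus"; real models train on billions of words.
text = "the cat sat on the mat and the cat slept"
words = text.split()

# Record which words follow which in the training text.
following = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    following[prev].append(nxt)

def predict_next(word):
    """Return the most frequently observed continuation, if any."""
    candidates = following.get(word)
    if not candidates:
        return None
    return max(set(candidates), key=candidates.count)

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

A neural language model replaces these raw counts with learned probabilities conditioned on the entire preceding context, which is what lets it complete whole paragraphs rather than single words.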

Much to the surprise of even the researchers who built the system, it could also write its own computer programs, though they were short and simple. Apparently, it had learned from an untold number of programs posted to the internet. So OpenAI went a step further, training a new system, Codex, on an enormous array of both prose and code.

The result is a system that understands both prose and code, up to a point. You can ask, in plain English, for "snow falling on a black background," and it will give you code that creates a virtual snowstorm. If you ask for a blue bouncing ball, it will give you that, too.

"You can tell it to do something, and it will do it," said Ania Kubow, another programmer who has used the technology.

Codex can generate programs in 12 computer languages and even translate between them. But it often makes mistakes, and though its skills are impressive, it can't reason like a human. It can recognize or mimic what it has seen in the past, but it is not nimble enough to think on its own.

Sometimes, the programs generated by Codex do not run. Or they contain security flaws. Or they come nowhere close to what you want them to do. OpenAI estimates that Codex produces the right code 37 percent of the time.

When Mr. Smith used the system as part of a beta test program this summer, the code it produced was impressive. But sometimes, it worked only if he made a tiny change, like tweaking a command to suit his particular software setup or adding a digital code needed for access to the internet service it was trying to query.

In other words, Codex was truly useful only to an experienced programmer.

But it could help programmers do their everyday work a lot faster. It could help them find the basic building blocks they needed or point them toward new ideas. Using the technology, GitHub, a popular online service for programmers, now offers Copilot, a tool that suggests your next line of code, much the way autocomplete tools suggest the next word when you type texts or emails.

"It is a way of getting code written without having to write as much code," said Jeremy Howard, who founded the artificial intelligence lab Fast.ai and helped create the language technology that OpenAI's work is based on. "It is not always correct, but it is just close enough."

Mr. Howard and others believe Codex could also help novices learn to code. It is particularly good at generating simple programs from brief English descriptions. And it works in the other direction, too, by explaining complex code in plain English. Some, including Joel Hellermark, an entrepreneur in Sweden, are already trying to transform the system into a teaching tool.

The rest of the A.I. landscape looks similar. Robots are increasingly powerful. So are chatbots designed for online conversation. DeepMind, an A.I. lab in London, recently built a system that instantly identifies the shape of proteins in the human body, which is a key part of designing new medicines and vaccines. That task once took scientists days or even years. But those systems replace only a small part of what human experts can do.

In the few areas where new machines can instantly replace workers, they are typically in jobs the market is slow to fill. Robots, for instance, are increasingly useful inside shipping centers, which are expanding and struggling to find the workers needed to keep pace.

With his start-up, Gado Images, Mr. Smith set out to build a system that could automatically sort through the photo archives of newspapers and libraries, resurfacing forgotten images, automatically writing captions and tags and sharing the photos with other publications and businesses. But the technology could handle only part of the job.

It could sift through a vast photo archive faster than humans, identifying the kinds of images that might be useful and taking a stab at captions. But finding the best and most important photos and properly tagging them still required a seasoned archivist.

"We thought these tools were going to completely remove the need for humans, but what we learned after many years was that this wasn't really possible; you still needed a skilled human to review the output," Mr. Smith said. "The technology gets things wrong. And it can be biased. You still need a person to review what it has done and decide what is good and what is not."

Codex extends what a machine can do, but it is another indication that the technology works best with humans at the controls.

"A.I. is not playing out like anyone expected," said Greg Brockman, the chief technology officer of OpenAI. "It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once."

Link:
A.I. Can Now Write Its Own Computer Code. That's Good News for Humans. - The New York Times

AAMC Comments on National Artificial Intelligence Initiative – AAMC

The AAMC submitted a letter to the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) on Sept. 1 in response to a request for information (RFI) geared toward developing a shared, national artificial intelligence (AI) research infrastructure that is referred to as the National Artificial Intelligence Research Resource (NAIRR).

The RFI will inform the work of the NAIRR Task Force, which has been directed by Congress to develop a first-of-its-kind AI infrastructure that provides AI researchers and students across scientific disciplines with access to computational resources, high-quality data, educational tools, and user support.

In its comments, the AAMC expressed strong support for Congress's prioritization of AI, which has tremendous potential to advance human health and usher in a new era of biomedicine. The AAMC also commended the aspirations of the OSTP and the NSF to develop an inclusive AI infrastructure that allows all of America's diverse AI researchers to fully participate in exploring innovative ideas for advancing AI, including communities, institutions, and regions that have been traditionally underserved.

The letter outlined strategies on how the NAIRR should reinforce principles of ethical and responsible research and development of AI. In particular, the AAMC underscored the necessity of building a NAIRR that identifies and addresses systemic inequities at the interface of AI and biomedicine, mitigates bias by promoting representative datasets and algorithms, provides users with a data management and sharing plan that promotes community engagement and transparency, and fosters a diverse AI workforce and leadership.

Given the vast amounts of data, industries, and applications that will converge with the NAIRR, the AAMC also noted the importance of a multisector approach for identifying, researching, and mitigating bias, discrimination, health inequities, and social determinants of health, all components that currently preclude the formation of an equitable AI framework that benefits all communities equally.

Finally, the AAMC recommended that the NAIRR partner with diverse communities in the development of this framework, thereby drawing on diverse expertise and fostering community trust. On Aug. 18, the OSTP and the NSF extended the RFI's public comment period by one month, to Oct. 1, providing further opportunity for researchers and academic institutions to respond.

Originally posted here:
AAMC Comments on National Artificial Intelligence Initiative - AAMC

Ethical Artificial Intelligence is Focus of New Robotics Program – UT News | The University of Texas at Austin

AUSTIN, Texas - Ethics will be at the forefront of robotics education thanks to a new University of Texas at Austin program that will train tomorrow's technologists to understand the positive and potentially negative implications of their creations.

Today, much robotic technology is developed without considering its potentially harmful effects on society, including how these technologies can infringe on privacy or further economic inequity. The new UT Austin program will fill an important educational gap by prioritizing these issues in its curriculum.

"In the next 10 years, we are going to live more closely alongside robots, and we want to be sure that those robots are fair, inclusive and free from bias," said Junfeng Jiao, associate professor in the School of Architecture and the program lead. "And because the robots we create are reflections of ourselves, it is imperative that technologists receive an excellent ethics education. We want our students to work directly with companies to create practices and technologies that are equitable and fair."

Called CREATE (Convergent, Responsible, and Ethical AI Training Experience for Roboticists), the program will offer graduate coursework and professional development in responsible design and implementation.

CREATE is a collaboration among Texas Robotics, industry partners and Good Systems, the UT grand challenge research initiative that seeks to design AI technologies that benefit society. The program was recently awarded a $3 million grant from the National Science Foundation through its Research Traineeship Program, which will support 32 doctoral students with coursework, mentorship, professional development, internships, and research and public service opportunities.

Students will focus specifically on how to ethically design, develop and deploy service robots, which can make deliveries, work in factories and clean homes. They will consider factors such as how to design delivery service robots so they are more inclusive and can reach all people, and how to ensure home service robots protect occupants' privacy. Several notable robotics companies have also said they will offer students internships, including Sony AI, Bosch, Amazon, SparkCognition and Apptronik.

Researchers involved in the program cross many disciplines at UT, including computer science, architecture, engineering, information, and public affairs. Faculty members from these units will teach courses as part of the curriculum, and two faculty members will mentor each trainee during the five-year program. Additionally, each trainee will receive help with career development, grant writing, and exposure to local startup companies.

More than half of the program's trainees will be chosen from underrepresented groups in STEM education, including women and racial minorities, to help bring much-needed diversity to the field of robotics. The coursework component, which includes five classes in ethical robotics, will be institutionalized as a graduate portfolio program and will be available to all STEM graduate students at UT Austin.

"This program will enable us to educate well-rounded roboticists who are not only grounded in the technical details of designing and building autonomous robots but also are equipped to fully consider the societal implications of their work," said Peter Stone, director of Texas Robotics and a professor of computer science. "That is a missing part in robotics education in the U.S. and the world. We believe this is a game changer for the future of robotics."

Here is the original post:
Ethical Artificial Intelligence is Focus of New Robotics Program - UT News - UT News | The University of Texas at Austin

Rank and File | Artificial intelligence comes to the fore in computer chess – Evanston RoundTable

Championship tournaments for computer chess engines moved from onsite competition to online well before many human tournaments made the same move last year in response to the COVID-19 pandemic. In recent years the Top Chess Engine Championship (TCEC), which has been played virtually since 2010, has become the unofficial world computer chess championship.

Many of these competitions have been won by the open-source chess engine Stockfish, thanks to its ability to conduct deep searches of chess positions enabled by powerful computing. However, in 2019 the Stockfish engine was upended by the LCZero engine, which was developed using a very different, machine-learning-based approach. LCZero was launched in 2018 with no chess-specific knowledge other than the basic rules; it learned how to play by analyzing the results of millions of games played by volunteer users. This approach was extremely successful and led to LCZero defeating Stockfish to win TCEC tournaments in 2019 and 2020.

The Stockfish team responded by following the maxim "if you can't beat 'em, join 'em." In late 2020, a new version of Stockfish was introduced that complemented its deep position searches with a learning function similar to the one employed by LCZero. The improved Stockfish has regained its top position among chess engines. In the latest TCEC championship, Stockfish trounced LCZero, with 19 wins and only seven losses in their 100-game match. Other chess engine developers have taken note, and all of the top-rated chess engines now combine classical computing with learning functions.
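To see what the "classical" half of these hybrid engines rests on, consider the simplest possible hand-tuned evaluation: counting material from a position given in FEN notation. The piece values below are common textbook figures in centipawns, not Stockfish's actual parameters, and a real engine layers many positional terms (and now a learned network) on top of this:

```python
# Textbook piece values in centipawns; a deliberate simplification.
PIECE_VALUES = {"p": 100, "n": 320, "b": 330, "r": 500, "q": 900, "k": 0}

def material_eval(fen):
    """Material balance from White's point of view, in centipawns.

    Uppercase letters in the FEN board field are White pieces,
    lowercase are Black; digits and slashes describe empty squares.
    """
    board = fen.split()[0]
    score = 0
    for ch in board:
        if ch.isalpha():
            value = PIECE_VALUES.get(ch.lower(), 0)
            score += value if ch.isupper() else -value
    return score

# The starting position is materially equal.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(material_eval(start))  # 0
```

A purely material evaluation like this would call Stockfish's bishop sacrifice below a simple loss of 330 centipawns; it is the deep search (or, in LCZero's case, learned positional judgment) that reveals whether the attack is worth the piece.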

In the recent match, Stockfish often outperformed LCZero in games that reached unusual positions where deep position searches proved to be more valuable than evaluations that relied on prior learning. In Game 68, the following position was reached after lengthy maneuvering by both sides. LCZero evaluated the position as even, but Stockfish found an opportunity to unbalance the game, to its advantage, by offering a surprising bishop sacrifice.

White to Move

(Stockfish-LCZero Game 68 Move 180)

180. Bf6! If Black plays 180...gxh6?, White has 181. Rxh6+ Nxh6 182. Rxh6+ Kg8 183. Qh5, and White forces checkmate in a few moves. After further maneuvering, Stockfish intensified its attack on the black king by offering to sacrifice a second piece: its queen.

White to Move

(Stockfish-LCZero Game 68 Move 191)

191. Qg5! The queen cannot be taken; 191...hxg5 192. Rh8 is checkmate. Black has no satisfactory response. The game continued 191...Re8 192. Rxh6! Nxh6 193. Rxh6 gxh6 194. Qxh6, when Black must sacrifice its queen to delay checkmate.

Black to Move

(Stockfish-LCZero Game 68 Move 194)

194...Qg7 195. Bxg7 Rxg7 196. f5 exf5 197. Qg5. Black can't capture White's e-pawn; 197...Rxe5? 198. Qd8+ and White is about to checkmate.

197...Rf8 198. e6 Rc7. Stockfish now maneuvers its king to g5, freeing up the queen to harass the black king and rooks.

199. Kc3 Rg7 200. Kd4 Rc7 201. Ke5 Rg7 202. Kf4 Rc7 203. Qh4 Rg7 204. Kg5 Re7 205. Qf4 Kg7 206. Qd6 Rfe8 207. Qe5+ Kg8 208. Qf6. LCZero is reduced to pawn moves, because moving its king or either rook leads to immediate disaster. The game continued until checkmate, per TCEC tournament rules.

208...b6 209. axb6 a5 210. Qf7+ Rxf7 211. gxf7+ Kf8 212. fxe8+ Kxe8 213. b7 Kf8 214. Kf6 Kg8 215. b8(Q)+ Kh7 217. Qc7+ Kg8 218. Qg8 checkmate.

(Stockfish-LCZero Final Position)

To view this game on a virtual board, go to https://chess24.com/en/watch/live-tournaments/tcec-season-21-superfinal-2021/1/1/68.

Keith Holzmueller has been the head coach of the Evanston Township High School Chess Club and Team since 2017. He became a serious chess player during his high school years. As an adult player, he obtained a US Chess Federation Expert rating for over-the-board play and was awarded the Senior International Master title by the International Correspondence Chess Federation. Keith now puts most of his chess energy into helping young chess players in Evanston learn to enjoy chess and improve their play. Please email Keith at news@evanstonroundtable.com if you have any chess questions.

View post:
Rank and File | Artificial intelligence comes to the fore in computer chess - Evanston RoundTable