Archive for the ‘Artificial Intelligence’ Category

NRC Exploring Potential Role of Artificial Intelligence in Commercial Nuclear Power Operations – JD Supra

As artificial intelligence (AI) and machine learning tools become more widely adopted in various products and industries, the NRC has begun studying what roles these technologies can play in commercial nuclear power operations. On April 21, as part of its study, the NRC's Office of Nuclear Regulatory Research requested public comments on the role of these technologies in the various phases of nuclear power generation operational experience and plant management. The NRC requests feedback on "the state of practice, benefits, and future trends related to [these technologies] computational tools and techniques in predictive reliability and predictive safety assessments in the commercial nuclear power industry." These emerging analytical tools, if used properly, show promise in improving reactor safety while also offering economic savings. Comments are due by May 21, 2021.

The NRC intends to use the comments to enhance its understanding of the benefits of AI and machine learning as well as the potential pitfalls and challenges associated with their application.

The NRC has requested comments on a series of specific questions set out in the notice.

The NRC is in the early stages of its review, and the agency has not committed to using the information collected in any formal regulatory action. Morgan Lewis will continue to follow the NRC's regulatory initiatives.


JG Wentworth Welcomes Andrey Zelenovsky as their Vice President of Artificial Intelligence and Machine Learning – PRNewswire

"We are thrilled to have Andrey's leadership and experience and believe he will be instrumental in continuing to expand the use of systems and technology within the company," said Ajai Nair, CIO. "His extensive background in application development and business robotic automation software brings a wealth of knowledge to the team that is necessary to accelerate a successful digital transformation, allowing us to faster determine measurable business benefits and better serve our customers."

Andrey joins the JG Wentworth team from UiPath, where he served as Director on its Competitive and Market Intelligence team. During his tenure at UiPath, he used data mining techniques to analyze the marketplace, enable sales, and predict cash flows.

"I am excited to join a market leader focused on helping customers improve their financial health. I look forward to this unique opportunity to be part of the evolution of JG Wentworth by leveraging AI and automation to positively impact our customers' lives," said Andrey.

Andrey earned his Bachelor of Science in both Information & Systems Engineering and Analytical Finance from Lehigh University and holds a Master of Science from The George Washington University and a Master of Business Administration from New York University's Leonard N. Stern School of Business.

About JG Wentworth

JG Wentworth is a financial services company that focuses on helping customers who are experiencing financial hardship or need to quickly access cash. Its services include debt relief, structured settlement payment purchasing, annuity payment purchasing, and lottery and casino payment purchasing. J.G. Wentworth was founded in 1991 and currently has offices in Chesterbrook, Pennsylvania; Radnor, Pennsylvania; and Rockville, Maryland. For more information about J.G. Wentworth, visit http://www.jgwentworth.com.


Forbes Recognizes Lilt As One of the Top Artificial Intelligence Companies For Third Straight Year – PRNewswire

SAN FRANCISCO, April 26, 2021 /PRNewswire/ -- Lilt, the modern language service and technology provider, today announced that it has been named to the 2021 Forbes AI 50 for the third consecutive year. The Forbes AI 50 recognizes the most promising privately held companies using artificial intelligence to build business applications and services to transform industries. Lilt is one of only seven companies that have been included every year since the list's inception in 2019.

"Our artificial intelligence and machine learning technologies enable our customers to provide exceptional global experiences to their customers around the world," said Lilt CEO Spence Green. "We're proud to be recognized by Forbes for the third year in a row alongside other leading companies developing AI-powered solutions."

Lilt's translation services are powered by the Lilt Platform, the world's most advanced translation technology that uses AI and automation to make every step of the localization process faster, more accurate, and simpler. Lilt's community of over 60,000 skilled human translators uses its AI-powered translation technology to translate content quickly, efficiently, and at higher quality than ever before. With Lilt, companies go to market faster, grow global revenues, and provide a personalized global experience to their customers in their language of choice.

Forbes partnered with Sequoia Capital and Meritech Capital to evaluate hundreds of promising, privately-held North American companies that are using AI in ways that are fundamental to their operations. The list, which nearly 400 companies qualified for, focused on companies utilizing machine learning, natural language processing, or computer vision technologies. Of the qualifying companies, 100 were selected based on their qualitative score created by Forbes' data partners, followed by evaluation by a panel of expert AI judges to narrow the list down to 50.

Along with the Forbes AI 50 list, Lilt was recently named to the CB Insights AI 100 list, showcasing the 100 most promising private artificial intelligence companies in the world, and was included in Gartner's recent Market Guide for AI-Enabled Translation Services.

About Lilt

Headquartered in San Francisco, Lilt is the modern language service and technology provider enabling localized customer experiences. Lilt's mission is to make the world's information accessible to everyone regardless of where they were born or which language they speak. Lilt brings human-powered, technology-assisted translations to global enterprises, empowering product, marketing, support, e-commerce, and localization teams to deliver exceptional customer experiences to global audiences. Lilt gives industry-leading organizations like Intel, ASICS, WalkMe, DigitalOcean, and Canva everything they need to scale their localization programs and go to market faster. Lilt has additional global offices in Dublin, Berlin, Washington, D.C., and Indianapolis. Visit us online at http://www.lilt.com or contact us at [emailprotected].


Artificial intelligence is infiltrating higher ed, from admissions to grading – The Hechinger Report

Students newly accepted by colleges and universities this spring are being deluged by emails and texts in the hope that they will put down their deposits and enroll. If they have questions about deadlines, financial aid and even where to eat on campus, they can get instant answers.

The messages are friendly and informative. But many of them aren't from humans.

Artificial intelligence, or AI, is being used to shoot off these seemingly personal appeals and deliver pre-written information through chatbots and text personas meant to mimic human banter. It can help a university or college by boosting early deposit rates while cutting down on expensive and time-consuming calls to stretched admissions staffs.
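None of the vendors behind these systems publish their internals, but the basic pattern is simple: match an incoming question against a bank of pre-written answers and hand off to a human when confidence is low. Below is a minimal sketch of that pattern only; the three-entry FAQ, the similarity threshold, and the wording are all invented for illustration and are not any school's or vendor's actual chatbot.

```python
# Illustrative sketch of an FAQ-matching "admissions chatbot".
# FAQ entries and the confidence threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

FAQ = {
    "When is the enrollment deposit due?": "Deposits are due May 1.",
    "How do I check my financial aid status?": "Log in to the aid portal to see your award.",
    "Where can I eat on campus?": "The student union and two dining halls are open daily.",
}

questions = list(FAQ.keys())
vectorizer = TfidfVectorizer().fit(questions)
faq_matrix = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.3) -> str:
    """Return the canned answer for the most similar FAQ entry, or hand off."""
    sims = cosine_similarity(vectorizer.transform([user_question]), faq_matrix)[0]
    best = sims.argmax()
    if sims[best] < threshold:  # low confidence: route to a human counselor
        return "Let me connect you with an admissions counselor."
    return FAQ[questions[best]]

print(answer("What's the deadline for my deposit?"))
```

The interesting design question in real deployments is the handoff rule: the threshold decides how often the bot guesses versus how often it escalates to the admissions staff it is meant to relieve.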

AI has long been quietly embedding itself into higher education in ways like these, often to save money, a need that's been heightened by pandemic-related budget squeezes.

Now, simple AI-driven tools like these chatbots, plagiarism-detecting software and apps to check spelling and grammar are being joined by new, more powerful and controversial applications that answer academic questions, grade assignments, recommend classes and even teach.

The newest can evaluate and score applicants' personality traits and perceived motivation, and colleges increasingly are using these tools to make admissions and financial aid decisions.

As the presence of this technology on campus grows, so do concerns about it. In at least one case, a seemingly promising use of AI in admissions decisions was halted because, by using algorithms to score applicants based on historical precedent, it perpetuated bias.

Much of the AI-powered software used by colleges and universities remains confined to fairly mundane tasks such as improving back-office workflow, said Eric Wang, senior director of AI at Turnitin, a service many institutions use to check for plagiarism.

"Where you start seeing things that get a bit more worrying," he said, "is when AI gets into higher-stakes types of decisions."

Among those are predicting how well students might do if admitted and assessing their financial need.

Hundreds of colleges subscribe to private platforms that do intensive data analysis about past classes and use it to score applicants for admission on factors such as the likelihood they will enroll, the amount of financial aid they'll need, the probability they'll graduate and how likely they are to be engaged alumni.

Some universities use AI to rate applicants' potential for success based on how they interact with a school's website and respond to its messages, which the provider of the service says is 20 times more predictive than relying on demographics alone.

Humans always make the final calls, these colleges and the AI companies say, but AI can help them narrow the field.
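The vendors described here do not disclose their models, so the following is only a generic sketch of what "scoring applicants on the likelihood they will enroll" can look like: a logistic regression trained on past applicants' engagement data. Every feature, number, and label below is synthetic and invented for illustration.

```python
# Generic sketch of applicant "likelihood to enroll" scoring, NOT any vendor's model.
# Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical engagement features for 500 past applicants:
# [campus visits, emails opened, days from inquiry to application]
X_past = rng.normal(loc=[1.0, 5.0, 30.0], scale=[1.0, 3.0, 10.0], size=(500, 3))

# Synthetic labels: did each past applicant ultimately enroll?
enrolled = (0.8 * X_past[:, 0] + 0.3 * X_past[:, 1] - 0.05 * X_past[:, 2]
            + rng.normal(0, 1, 500)) > 1.0

model = LogisticRegression().fit(X_past, enrolled)

# Score this year's applicants; per the article, staff still make the final call.
new_applicants = np.array([[2, 9, 15], [0, 1, 60]])
print(model.predict_proba(new_applicants)[:, 1])  # estimated enrollment probability
```

Real platforms reportedly fold in far more signals, such as website interactions and email responses, but the mechanics of turning records of past behavior into a probability are the same.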

Baylor, Boston and Wake Forest universities are among those that have used the Canadian company Kira Talent, which offers a review system that can score an applicant's personality traits and soft skills based on a recorded, AI-reviewed video the student submits. A company presentation shows students being scored on a five-point scale in areas such as openness, motivation, agreeableness and neuroticism.


New York University, Southeast Missouri State University and other schools have used a service called Element451, which rates prospects' potential for success based on how they interact with a school's website and respond to its messages.

The result is 20 times more predictive than relying on demographics alone, the company says.

Once admitted, many students now get messages from companies like AdmitHub, which advertises a customizable chatbot and text message platform that the company calls "conversational AI" to nudge accepted applicants into putting down deposits. The company says it's reached more than 3 million students this way on behalf of hundreds of university and college clients.

Georgia State University, which pioneered the use of these chatbots, says its version, named Pounce, has delivered hundreds of thousands of answers to questions from potential students since it launched in 2016 and reduced summer melt, the incidence of students enrolling in the spring but failing to show up in the fall, by 20 percent.

Georgia State was also among the first to develop inexpensive, always-on AI teaching assistants, ready to answer student questions about course material. Theirs is called Jill Watson, and studies found that some students couldn't tell they were engaging with AI and not a human teaching assistant.

AI grading does the job more quickly than humans and probably makes fewer errors.

Staffordshire University in England offers students a "digital friend," an AI teaching assistant named Beacon that can recommend reading resources and connect students with tutors. Australia's Deakin University has an AI assistant named Genie that knows whether a student asking a question has engaged with specific online course materials and can check students' locations and activities to determine if they've visited the library, or tell them when they've spent too long in the dining hall and prompt them to move along.


Many colleges increasingly use AI to grade students, as online classes grow too large for instructors to manage this well.

The pandemic has hastened the shift to those kinds of classes. Even before that, however, Southern New Hampshire University, with 97 percent of its nearly 150,000 students exclusively online, was working on ways that AI could be used to grade large numbers of students quickly, said Faby Gagne, executive director of its research and development arm.

SNHU is also starting to use AI not just to grade students but to teach them. Gagne has been experimenting with having AI monitor such things as speech or movement or the speed with which a student responds to video lessons, and use that information to score achievement.

Turnitin, best known for checking for plagiarism, also sells AI language comprehension products to assess subjective written work. One tool can sort written assignments into batches, allowing a teacher to correct a mistake or give guidance just once instead of highlighting, commenting on and grading the same mistake again and again. The company says instructors check to verify that the machine made the correct assessment, and that eliminating repetitive work gives them more time to teach.
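Turnitin has not published how its batching works, but the general technique of grouping similar responses so feedback can be written once is well known. Here is a rough sketch under that assumption, using TF-IDF vectors and k-means clustering on a handful of made-up short answers; it illustrates the idea, not Turnitin's actual method.

```python
# Rough illustration of grouping similar short answers so an instructor can
# write feedback once per group. Generic technique, not Turnitin's product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

answers = [
    "Photosynthesis converts sunlight into chemical energy.",
    "Plants turn light energy into chemical energy via photosynthesis.",
    "Mitosis is how a cell divides into two identical cells.",
    "Cell division by mitosis produces two identical daughter cells.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Batch {cluster}:")
    for text, label in zip(answers, labels):
        if label == cluster:
            print("  -", text)
```

In a workflow like the one the company describes, the instructor would then review each batch once, confirm the grouping is right, and attach a single comment or grade adjustment to the whole group.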

AI tools are also being sold to colleges to make decisions once made by faculty. ElevateU, for example, uses AI to analyze student data and deliver individualized learning content to students based on how they answered questions. If the program determines that a particular student will do better with a video lesson as opposed to a written one, that's what he or she gets.


But some research suggests that AI tools can be wrong, or even gamed. A team at MIT used a computer to create an essentially meaningless essay that nonetheless included all the prompts an AI essay reader searches for. The AI gave the gibberish a high score.
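The MIT team's essay generator and the graders it fooled are not reproduced here; the deliberately naive scorer below is only meant to show why a system that rewards prompt keywords, long words, and length can rate fluent nonsense above a genuine answer. The keyword list and weights are invented, and real systems are more sophisticated, but the MIT experiment exploited the same basic weakness.

```python
# Deliberately naive essay scorer, built only to show how keyword- and
# length-driven scoring can be gamed by meaningless text.

PROMPT_KEYWORDS = {"economy", "globalization", "trade", "labor", "policy"}

def naive_score(essay: str) -> float:
    words = essay.lower().split()
    keyword_hits = sum(w.strip(".,") in PROMPT_KEYWORDS for w in words)
    long_words = sum(len(w) > 7 for w in words)  # crude proxy for "vocabulary"
    return keyword_hits * 2 + long_words * 0.5 + len(words) * 0.05

honest = "Trade policy shapes labor markets, and globalization changes the economy."
gibberish = ("Globalization economy trade policy labor " * 10 +
             "multidimensional paradigmatic interconnectedness " * 10)

print(naive_score(honest), naive_score(gibberish))  # the gibberish scores far higher
```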

In Spain, an AI bot named Lola answered more than 38,700 student questions with a 91.7 percent accuracy rate, meaning it gave out at least 3,200 wrong or incomplete answers.

"AI alone is not a good judge of human behavior or intention," said Jarrod Morgan, the founder and chief strategy officer at ProctorU, which schools hire to manage and observe the tests students take online. "We found that people are better at this than machines are, pretty much across the board."


The University of St. Thomas in Minnesota said it tested, but did not deploy, an AI system that can scan and analyze students' facial expressions to determine whether they're engaged or understand the material. The system would immediately tell professors or others which students were becoming bored or which points in a lecture required repeating or punching up.

And researchers at the University of California, Santa Barbara, studied whether students got more emotional reinforcement from animated than from real-life instructors and found that, while students recognized emotion in both human and animated teachers, they had stronger, more accurate perceptions of emotions such as happy and frustrated when the instructors were human.

"Many people think AI is smarter than people," said Wang, of Turnitin. "But the AI is us. It's a mirror that reflects us to us, and sometimes in very exaggerated ways." Those ways, Wang said, underscore that the data AI often uses is a record of what people have done in the past. That's an issue because "we are more prone to accept recommendations that reinforce who we are."


That's what happened with GRADE, the GRaduate ADmissions Evaluator, an AI evaluation system built and used by the graduate program in computer science at the University of Texas at Austin. GRADE reviewed applications and assigned scores based on the likelihood of admission by a review committee. The goal was to reduce human time spent reviewing the increasing pile of applications, which GRADE did, cutting review time by 74 percent.

But the university dropped GRADE last year, agreeing that it had the potential to replicate superficial biases in the scoring, rating some applications highly not because they were good, but because they looked like the kinds of applications that had been approved in the past.
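The GRADE model itself is not reproduced here, but the failure mode is easy to demonstrate in miniature: train a screening model on past committee decisions that were themselves skewed by an irrelevant attribute, and the model learns the skew. Everything in the sketch below is synthetic.

```python
# Minimal demonstration of how a screening model trained on past decisions
# inherits their skew. Data and features are synthetic; this is not GRADE.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
merit = rng.normal(size=n)              # what we would like to measure
group = rng.integers(0, 2, size=n)      # an irrelevant group attribute

# Historical decisions favored group 1 independent of merit.
admitted_past = (merit + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([merit, group]), admitted_past)

# Two candidates identical on merit, differing only in group membership.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

On this synthetic data, the two otherwise identical candidates receive different scores purely because of the group attribute, which mirrors the pattern that led UT Austin to retire the system.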

These types of reinforcing bias that can surface in AI can be tested initially and frequently, said Kirsten Martin, a professor of technology ethics at the University of Notre Dame. But universities would be making a mistake if they thought that automating decisions somehow relieved them of their ethical and legal obligations.

This story about artificial intelligence in higher education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.


Tesla is becoming more of an artificial intelligence and robotics company, says Elon Musk – Electrek

Elon Musk made the argument that Tesla is going to be known more as an artificial intelligence and robotics company.

For a while now, Musk has been pushing this idea that investors shouldn't just see Tesla as an automaker and energy company, but as a group of startups.

He argues that Tesla's service centers are a startup, Tesla's insurance company is a startup, Tesla's automation group is a startup, etc.

In that vein, Musk now claims that artificial intelligence and robotics are going to be just as synonymous with Tesla as cars and energy.

The CEO commented during a conference call discussing Tesla's Q1 2021 financial results:

Although right now people think of Tesla as a car company or as an energy company. I think long term, people will think of Tesla as much as an AI robotics company as we are a car company or an energy company. I think we are developing one of the strongest hardware and software AI teams in the world.

Many people see self-driving cars as one of the first real-world applications of artificial intelligence and if they can solve it, Tesla's cars will become robots with a form of AI in them.

Musk argued:

I think [self-driving] is one of the hardest technical problems that exists, thats maybe ever existed. And really, in order to solve it, we basically need to solve a pretty significant part of artificial intelligence, specifically real-world artificial intelligence. And that sort of AI, the neural net needs to be compressed into a fairly small computer, a very efficient computer that was designed, but nonetheless, a small computer that's using on the order of 70 or 80 watts. So this is a much harder problem than if you were to use, say, 10,000 computers in a server room or something like that.

To give examples of Tesla becoming more of an AI company, Musk said that Tesla is developing a lot of tools from scratch when it comes to things like video labeling and neural net training.

Tesla is also still working on Dojo, a supercomputer to train Tesla's self-driving AI. Musk has previously claimed that Tesla's Dojo supercomputer will be capable of an exaFLOP, one quintillion (10^18) floating-point operations per second, or 1,000 petaFLOPS, making it one of the most powerful computers in the world.

It will be optimized to train neural nets, and Musk said that they will make it available to other companies.

But what Tesla really needs to do to become an AI company is deliver on its promise of Full Self-Driving.

Yesterday, Musk reiterated that he thinks it's going to be solved this year: "I am highly confident that we will get this done."
