Archive for the ‘Artificial Intelligence’ Category

‘Day of AI’ Spurs Classroom Discussions on Societal Impacts of … – Education Week

Several thousand students worldwide participated in the second annual Day of AI on May 18, yet another sign of artificial intelligence's growing significance to schools.

"It's been a year of extraordinary advancements in AI, and with that comes necessary conversations and concerns about who and what this technology is for," said event organizer Cynthia Breazeal, who is the director of the Responsible AI for Social Empowerment and Education (RAISE) initiative at the Massachusetts Institute of Technology.

America's K-12 schools are already using artificial intelligence for everything from personalizing student learning to conducting classroom observations, as Education Week described in a special report earlier this month. A coalition of influential groups such as Code.org and the Educational Testing Service recently launched an effort to help schools and state education departments integrate artificial intelligence into curricula, and the International Society for Technology in Education has made related learning opportunities available to students and teachers alike.

The RAISE initiative at MIT builds on those efforts by offering free classroom lessons on such topics as "What Can AI Do?" and "ChatGPT in School." Overall, said MIT doctoral student Daniella DiPaola, who helped develop the Day of AI curriculum, the approach is to weave ethical, social, and policy considerations throughout technical explanations. Central to that aim is fostering discussion of the Blueprint for an AI Bill of Rights released by the White House's Office of Science and Technology Policy (OSTP) in late 2022.

"We want to make sure societal impact is part of the process," DiPaola said.

That's exactly what the White House hoped to spur, said Marc Aidinoff, who helped lead the creation of the Bill of Rights during his time as OSTP's chief of staff. Aidinoff spent the Day of AI working with a group of Massachusetts middle and high school students debating potential legislation for regulating the use of artificial intelligence in schools.

"Unlike the adults who talk about AI as this unknowable, all-powerful thing and let their fear take over, the students all treated AI as a knowable thing that's complicated, but we can take action on," he said afterward.

Aidinoff said he particularly appreciated the MIT RAISE initiative's focus on engaging artificial intelligence as a potentially helpful companion, rather than a threat or silver-bullet solution. One benefit of that approach, he said, is an emphasis on considering specific use cases and threats rather than getting paralyzed by amorphous fears. Thinking about how AI can best support humans also encourages discussions of general themes and principles, such as fairness, that teachers are already accustomed to exploring with their students.

That sentiment was echoed by Kristen Thomas Clarke, a literacy and information technology teacher at the private Media-Providence Friends School in Pennsylvania. Now in her eighth year at the school, Thomas Clarke said she's long mixed digital citizenship and media literacy activities into her lessons on coding and robotics. But in the wake of ChatGPT's emergence this year, she and her head of school decided that a broader school-wide discussion of artificial intelligence was warranted.

That included use of MIT's curriculum, which Thomas Clarke praised as highly interactive and effective at helping students see both the promise and potential pitfalls of AI, including discrimination that can result from biased training data.

But the most important impact, she said, was on the adults at her school.

"I think our initial reaction [to ChatGPT] was maybe a little bit of fear, like what are the kids going to do with this?" Thomas Clarke said. "But now I think of it more in terms of enhancing their knowledge than doing their homework for them."

See the original post:
'Day of AI' Spurs Classroom Discussions on Societal Impacts of ... - Education Week

Will artificial intelligence replace doctors? – Harvard Health

Q. Everyone's talking about artificial intelligence, and how it may replace people in various jobs. Will artificial intelligence replace my doctor?

A. Not in my lifetime, fortunately! And the good news is that artificial intelligence (AI) has the potential to improve your doctor's decisions, and thereby your health, if we are careful about how it is developed and used.

AI is a mathematical process that tries to make sense out of massive amounts of information. So it requires two things: the ability to perform mathematical computations rapidly, and huge amounts of information stored in electronic form (words, numbers, and pictures).

When computers and AI were first developed in the 1950s, some visionaries described how they could theoretically help improve decisions about diagnosis and treatment. But computers then were not nearly fast enough to do the computations required. Even more important, almost none of the information the computers would have to analyze was stored in electronic form. It was all on paper. Doctors' notes about a patient's symptoms and physical examination were written (not always legibly) on paper. Test results were written on paper and pasted in a patient's paper medical record. As computers got better, they started to relieve doctors and other health professionals of some tedious tasks, like helping to analyze electrocardiograms (ECGs), blood samples, x-rays, and Pap smears.

Today, computers are literally millions of times more powerful than when they were first developed. More important, huge amounts of medical information now are in electronic form: medical records of millions of people, the results of medical research, and the growing knowledge about how the body works. That makes feasible the use of AI in medicine.

Already, computers and AI have made powerful medical research breakthroughs, like predicting the shape of most human proteins. In the future, I predict that computers and AI will listen to conversations between doctor and patient and then suggest tests or treatments the doctor should consider; highlight possible diagnoses based on a patient's symptoms, after comparing that patient's symptoms to those of millions of other people with various diseases; and draft a note for the medical record, so the doctor doesn't have to spend time typing at a computer keyboard and can spend more time with the patient.

All of this will not happen immediately or without missteps: doctors and computer scientists will need to carefully evaluate and guide the development of new AI tools in medicine. If the suggestions AI provides to doctors prove to be inaccurate or incomplete, that "help" will be rejected. And if AI then does not get better, and fast, it will lose credibility. Powerful technologies can be powerful forces for good, and for mischief.

Read more:
Will artificial intelligence replace doctors? - Harvard Health

Health Tech Startup Suki Is Using Artificial Intelligence To Make Patient Records More Accessible To Every Doctor – Forbes


On its website, healthcare tech startup Suki AI touts its Suki Speech Platform as the most intelligent and responsive voice platform in healthcare. The company builds software intended to help doctors more easily and efficiently complete patient documentation in patients' electronic health records, or EHRs. The idea is simple: by making charting faster and more accessible (this is accessibility too, especially for doctors with certain conditions of their own), physicians can shift their energy from the bureaucratic aspects of medicine to the actual practice of the profession. After all, doctors spend a king's ransom on medical school to help people, not push pencils on their behalf.

In a press release issued this week, the Bay Area-based company announced a partnership with EHR maker Epic that entails deep integration of Suki's AI-powered voice assistant tech with Epic's records tech. Suki notes its eponymous Suki Assistant helps clinicians complete time-consuming administrative tasks by voice and recently announced the ability to generate clinical notes from ambiently listening to a patient-clinician conversation; the integration enables notes to automatically be sent back to Epic, updating the relevant sections.

"Ambient documentation holds great promise for reducing administrative burden and clinician burnout, and we are delighted to work with Epic to deliver a sophisticated, easy-to-use solution to its client base," said Suki CEO Punit Soni in a prepared statement. "Suki Assistant represents the future of AI-powered voice assistants, and we are thrilled that it is integrated with Epic through its ambient APIs."

In an interview with me conducted over email ahead of the announcement, Soni explained Suki's mission is to make healthcare tech invisible and assistive so that clinicians can focus on patient care. The conduit through which Soni and team accomplish that mission is their core product, the Suki Assistant. According to Soni, the company's origin story began when he spotted a big hole in the health tech market. Clinician burnout, he said, continues to be a major problem in the industry as society reconciles with a pandemic-addled world. To that point, Soni pointed to a statistic gleaned from a recent study that found 88% of doctors don't recommend their profession to their children. Soni feels the sobering reality is indicative of societal and financial problems. "I believe that when utilized properly, AI and voice technologies can transform healthcare and help relieve administrative burdens," he said. "Suki has spent years investing in our technology to develop a suite of solutions that reduce burnout, improve the quality of care, and increase [the return on investment] for healthcare systems."

When asked how the Suki Assistant works at a technical level, Soni told me it's the only product on the market that integrates with commonly used EHRs like Epic to create a seamless workflow for physicians. He went on to tell me the company has used generative AI and large language models in training the Suki software; one of the team's overarching goals was to build an assistant that could (reasonably) understand natural language. The team didn't want people to have to memorize some rote syntax, akin to interacting with a pseudo-sentient command line. Clinicians can ask queries like "Who's my next patient?" or "Suki, what's my schedule?" Moreover, users can dictate notes to the Assistant and ask it to show a list of a patient's allergies. "Our goal is to make Suki as intuitive and easy to use as possible, and we use the latest technologies in voice and AI to do so," Soni said. "Using Suki should be as easy as picking up a phone, opening the app, and speaking naturally to it. There's a lot of tech under the hood to enable that experience."
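
To make the contrast with rote command syntax concrete, here is a deliberately crude sketch of intent routing, mapping a free-form utterance to an action. Suki's actual system is proprietary and built on large language models, so every name and keyword below is hypothetical, illustrating only the general shape of the problem:

```python
# Hypothetical illustration only: route a spoken query to an "intent"
# by keyword overlap, instead of requiring a memorized command syntax.
# Real assistants use speech recognition plus language models for this.

INTENT_KEYWORDS = {
    "next_patient": {"next", "patient"},
    "show_schedule": {"schedule", "today"},
    "list_allergies": {"allergies", "allergy"},
}

def route(utterance):
    """Pick the intent whose keywords overlap most with the utterance."""
    words = set(utterance.lower().replace("?", "").replace(",", "").split())
    best, best_score = None, 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

route("Who's my next patient?")     # matches the next_patient intent
route("Suki, what's my schedule?")  # matches the show_schedule intent
```

The point of the sketch is only that the user speaks naturally and the system, not the user, bears the burden of interpretation.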

The dots between AI and healthcare and accessibility are easy to connect. For one thing, as I alluded to in the lede, it's certainly plausible for a doctor to have a physical condition (carpal tunnel, for instance) that makes doing administrative work like updating charts not merely a matter of drudgery, but of disability as well. Maybe using a pen or pencil for even a few minutes causes the carpal tunnel to flare up, not to mention the eye strain and fatigue that could conceivably surface. Suki clearly doesn't position anything it builds expressly for accessibility, yet it's obvious the Suki Assistant has as much relevance as an assistive technology as more consumer-facing digital butlers like Siri and Alexa. The bottom line, at least in this context, is that many doctors will not only work better if they use Suki to maintain patient records; they'll feel better too, as a side effect of doing their jobs more efficiently.

Feedback on the Suki Assistant, Soni said, has been "really positive." He cited a large healthcare system using Epic as its health records provider being amazed at how well Suki pulls up schedules and how it integrates with Epic's software. He also noted people's pleasure with Suki's ambient note-taking capability. All told, Soni said people in the field are immensely enjoying the Suki tech in their day-to-day lives, adding they appreciate the freedom and flexibility Suki offers because "now they can do their notes [and more] anywhere they have their phone; they don't have to be in front of their computers anymore."

Ultimately, what Soni and his team have done is harness AI to do genuine good for the world by making record-keeping not simply more efficient but accessible too, in a way not dissimilar to how Apple's just-announced Personal Voice and Point to Speak accessibility features change the usability game. As Soni explained, artificial intelligence and machine learning are just tech: soulless, inanimate, inhuman.

"By itself, [AI] doesn't solve anything," he said.

Soni continued: "Suki's primary value is that every pixel in the company is [created] in service of the clinician. That culture is what makes us different. Anyone can build a product, but the special sauce that makes it useful is empathy. That is the magic that is a key part of Suki."

Looking ahead, Soni is tantalized by the possibilities for his work.

"Our mission is to make healthcare technology invisible and assistive so clinicians can focus on what they love: patient care. We want to be able to help every clinician who needs more time back, and we are just scratching the surface of what we can do," he said of his company's future. "There are so many potential applications of our technology, from simplifying the orders process to helping nurses complete their tasks by voice to enabling clinicians to answer patient portal messages by voice. We have an ambitious, exciting roadmap of features we're working on, and I can't wait to show this work to the world."

Steven is a freelance tech journalist covering accessibility and assistive technologies, and is based in San Francisco. His work has appeared in such places as The Verge, TechCrunch, and Macworld. He's also appeared on podcasts, NPR, and television.

See the original post:
Health Tech Startup Suki Is Using Artificial Intelligence To Make Patient Records More Accessible To Every Doctor - Forbes

Reviving the Past with Artificial Intelligence – Caltech

While studying John Singer Sargent's paintings of wealthy women in 19th-century society, Jessica Helfand, a former Caltech artist in residence, had an idea: to search census records to find the identities of those women's servants. "I thought, 'What happens if I paint these women in the style of John Singer Sargent?' It's a sort of cultural restitution," Helfand explained, "reverse engineering the narrative by reclaiming a kind of beauty, style, and majesty."

To recreate a style from history, she turned to technology that, increasingly, is driving the future. "Could AI help me figure out how to paint, say, lace or linen, to capture the folds of clothing in daylight?" Helfand discussed her process in a seminar and discussion moderated by Hillary Mushkin, research professor of art and design in engineering and applied science and the humanities and social sciences. The event, part of Caltech's Visual Culture program, also featured Joanne Jang, product lead at DALL-E, an AI system that generates images based on user-supplied prompts.

While DALL-E has a number of practical applications, from urban planning to clothing design to cooking, the technology also raises new questions. Helfand and Jang spoke about recent advancements in generative AI, ethical considerations when using such tools, and the distinction between artistic intelligence and artificial intelligence.

More here:
Reviving the Past with Artificial Intelligence - Caltech

A Look Back on the Dartmouth Summer Research Project on … – The Dartmouth

At this convention, which took place on campus in the summer of 1956, the term "artificial intelligence" was coined by scientists.

by Kent Friel | 5/19/23 5:10am

For six weeks in the summer of 1956, a group of scientists convened on Dartmouth's campus for the Dartmouth Summer Research Project on Artificial Intelligence. It was at this meeting that the term "artificial intelligence" was coined. Decades later, artificial intelligence has made significant advancements. As recent programs like ChatGPT change the artificial intelligence landscape once again, The Dartmouth investigates the history of artificial intelligence on campus.

That initial conference in 1956 paved the way for the future of artificial intelligence in academia, according to Cade Metz, author of the book "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World."

"It set the goals for this field," Metz said. "The way we think about the technology is because of the way it was framed at that conference."

However, the connection between Dartmouth and the birth of AI is not very well-known, according to some students. DALI Lab outreach chair and developer Jason Pak '24 said that he had heard of the conference, but that he didn't think it was widely discussed in the computer science department.

"In general, a lot of CS students don't know a lot about the history of AI at Dartmouth," Pak said. "When I'm taking CS classes, it is not something that I'm actively thinking about."

Even though the connection between Dartmouth and the birth of artificial intelligence is not widely known on campus today, the conference's influence on academic research in AI was far-reaching, Metz said. In fact, four of the conference participants built three of the largest and most influential AI labs at other universities across the country, shifting the nexus of AI research away from Dartmouth.

Conference participants John McCarthy and Marvin Minsky would establish AI labs at Stanford and MIT, respectively, while two other participants, Allen Newell and Herbert Simon, built an AI lab at Carnegie Mellon. Taken together, the labs at MIT, Stanford, and Carnegie Mellon drove AI research for decades, Metz said.

Although the conference participants were optimistic, in the following decades they would not reach many of the milestones they believed would be possible with AI. Some participants in the conference, for example, believed that a computer would be able to beat any human in chess within just a decade.

"The goal was to build a machine that could do what the human brain could do," Metz said. "Generally speaking, they didn't think [the development of AI] would take that long."

The conference mostly consisted of brainstorming ideas about how AI should work. However, there was very little written record of the conference, according to computer science professor emeritus Thomas Kurtz, in an interview that is part of the Rauner Special Collections archives.

"The conference represented all kinds of disciplines coming together," Metz said. At that point, AI was a field at the intersection of computer science and psychology, and it had overlaps with other emerging disciplines, such as neuroscience, he added.

Metz said that after the conference, two camps of AI research emerged. One camp believed in what is called neural networks, mathematical systems that learn skills by analyzing data. The idea of neural networks was based on the concept that machines can learn like the human brain, creating new connections and growing over time by responding to real-world input data.
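
The learning-from-data idea can be made concrete with a toy example (my illustration, not from the article): a single perceptron, one of the earliest neural-network-style systems, adjusts numeric weights in response to labeled examples rather than following hand-written rules.

```python
# A minimal sketch of "learning from data": a one-neuron perceptron
# nudges its weights whenever it misclassifies an example, and ends up
# computing a function nobody programmed explicitly.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1         # nudge the weights toward the data
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Learn logical OR purely from labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

The behavior here emerges from exposure to examples, which is the core of what the neural-network camp was arguing for.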

Some of the conference participants would go on to argue that it wasn't possible for machines to learn on their own. Instead, they believed in what is called symbolic AI.

"They felt like you had to build AI rule-by-rule," Metz said. "You had to define intelligence yourself; you had to, rule-by-rule, line-by-line, define how intelligence would work."
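
For contrast, here is a toy sketch of the symbolic approach Metz describes (again my illustration, not from the article): the programmer writes the "intelligence" out explicitly, rule by rule, and nothing is learned from data.

```python
# A minimal sketch of symbolic AI: behavior comes entirely from
# hand-authored if-then rules, defined line by line by a human.

RULES = [
    # (condition on the input facts, conclusion)
    (lambda facts: "has_feathers" in facts and "lays_eggs" in facts, "bird"),
    (lambda facts: "has_fur" in facts and "gives_milk" in facts, "mammal"),
]

def classify(facts):
    """Apply each hand-written rule in order; None if no rule fires."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return None

classify({"has_feathers", "lays_eggs"})  # a rule the programmer wrote fires
```

The system can never go beyond the rules its author thought to write down, which is exactly the limitation the neural-network camp objected to.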

Notably, conference participant Marvin Minsky would go on to cast doubt on the neural network idea, particularly after the 1969 publication of "Perceptrons," co-authored by Minsky and mathematician Seymour Papert, which Metz said led to a decline in neural network research.

Over the decades, Minsky adapted his ideas about neural networks, according to Joseph Rosen, a surgery professor at Dartmouth Hitchcock Medical Center. Rosen first met Minsky in 1989 and remained a close friend of his until Minsky's death in 2016.

Minsky's views on neural networks were complex, Rosen said, but his interest in studying AI was driven by a desire to understand human intelligence and how it worked.

"Marvin was most interested in how computers and AI could help us better understand ourselves," Rosen said.

In about 2010, however, the neural network idea was proven to be the way forward, Metz said. Neural networks allow artificial intelligence programs to learn tasks on their own, which has driven a current boom in AI research, he added.

Given the boom in research activity around neural networks, some Dartmouth students feel like there is an opportunity for growth in AI-related courses and research opportunities. According to Pak, currently, the computer science department mostly focuses on research areas other than AI. Of the 64 general computer science courses offered every year, only two are related to AI, according to the computer science department website.

"A lot of our interests are shaped by the classes we take," Pak said. "There is definitely room for more growth in AI-related courses."

There is a high demand for classes related to AI, according to Pak. Despite being a computer science and music double major, he said he could not get into a course called MUS 14.05: Music and Artificial Intelligence because of the demand.

DALI Lab developer and former development lead Samiha Datta '23 said that she is doing her senior thesis on natural language processing, a subfield of AI and machine learning. Datta said that the conference is "pretty well-referenced," but she believes that many students do not know much about the specifics.

She added she thinks the department is aware of and trying to improve the lack of courses taught directly related to AI, and that it is more possible to do AI research at Dartmouth now than it would have been a few years ago, due to the recent onboarding of four new professors who do AI research.

"I feel lucky to be doing research on AI at the same place where the term was coined," Datta said.

Read the original:
A Look Back on the Dartmouth Summer Research Project on ... - The Dartmouth