Archive for the ‘Artificial Intelligence’ Category

Health Tech Startup Suki Is Using Artificial Intelligence To Make Patient Records More Accessible To Every Doctor – Forbes


On its website, healthcare tech startup Suki AI touts its Suki Speech Platform as the most intelligent and responsive voice platform in healthcare. The company builds software intended to help doctors complete patient documentation in patients' electronic health records, or EHRs, more easily and efficiently. The idea is simple: by making charting faster and more accessible (and this is accessibility too, especially for doctors with certain conditions of their own), physicians can shift their energy from the bureaucratic aspect of medicine to the actual practice of the profession. After all, doctors spend a king's ransom on medical school to help people, not push pencils on their behalf.

In a press release issued this week, the Bay Area-based company announced a partnership with EHR maker Epic that entails deep integration of Suki's AI-powered voice assistant tech with Epic's records tech. Suki notes its eponymous Suki Assistant helps clinicians complete time-consuming administrative tasks by voice, and recently announced the ability to generate clinical notes by ambiently listening to a patient-clinician conversation; the integration enables notes to be sent back to Epic automatically, updating the relevant sections.

"Ambient documentation holds great promise for reducing administrative burden and clinician burnout, and we are delighted to work with Epic to deliver a sophisticated, easy-to-use solution to its client base," said Suki CEO Punit Soni in a prepared statement. "Suki Assistant represents the future of AI-powered voice assistants, and we are thrilled that it is integrated with Epic through its ambient APIs."

In an interview with me conducted over email ahead of the announcement, Soni explained Suki's mission is to make healthcare tech invisible and assistive so that clinicians can focus on patient care. The conduit through which Soni and team accomplish their mission is their core product, the Suki Assistant. According to Soni, the company's origin story began when he spotted a big hole in the health tech market. Clinician burnout, he said, continues to be a major problem in the industry as society reconciles with a pandemic-addled world. To that point, Soni pointed to a statistic gleaned from a recent study that found 88% of doctors don't recommend their profession to their children. Soni feels the sobering reality is indicative of societal and financial problems. "I believe that when utilized properly, AI and voice technologies can transform healthcare and help relieve administrative burdens," he said. "Suki has spent years investing in our technology to develop a suite of solutions that reduce burnout, improve the quality of care, and increase [the return on investment] for healthcare systems."

When asked how the Suki Assistant works at a technical level, Soni told me it's the only product on the market that integrates with commonly used EHRs like Epic to create a seamless workflow for physicians. He went on to tell me the company has used generative AI and large language models in training the Suki software; one of the team's overarching goals was to build an assistant that could (reasonably) understand natural language. The team didn't want people to have to memorize some rote syntax, akin to interacting with a pseudo-sentient command line. Clinicians can ask queries like "Who's my next patient?" or "Suki, what's my schedule?" Moreover, users can dictate notes to the Assistant and ask it to show a list of a patient's allergies. "Our goal is to make Suki as intuitive and easy to use as possible, and we use the latest technologies in voice and AI to do so," Soni said. "Using Suki should be as easy as picking up a phone, opening the app, and speaking naturally to it. There's a lot of tech under the hood to enable that experience."
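To get a feel for the difference between rote command syntax and natural-language queries, here is a deliberately tiny sketch of keyword-based intent routing. To be clear, this is my illustration, not Suki's implementation; the intent names and keyword sets are invented, and a real assistant layers speech recognition and large language models on top of anything like this.

```python
# Illustrative toy only: NOT Suki's actual implementation.
# Maps free-form spoken queries onto intents by keyword overlap,
# rather than requiring an exact memorized command.

INTENTS = {
    "next_patient": {"next", "patient"},     # hypothetical intent names
    "show_schedule": {"schedule"},
    "list_allergies": {"allergies"},
}

def route(utterance: str) -> str:
    """Pick the intent whose keywords best overlap the spoken words."""
    words = set(utterance.lower().strip("?!. ").split())
    best, score = "unknown", 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > score:
            best, score = intent, overlap
    return best

print(route("Who's my next patient?"))     # next_patient
print(route("Suki, what's my schedule?"))  # show_schedule
```

The point of the sketch is that "Who's my next patient?" and "Show me the next patient" both land on the same intent, which is what "no rote syntax" means in practice.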

The dots between AI and healthcare and accessibility are easy to connect. For one thing, as I alluded to in the lede, it's certainly plausible for a doctor to have a physical condition (carpal tunnel, for instance) that makes doing administrative work like updating charts not merely a matter of drudgery, but of disability as well. Maybe using a pen or pencil for even a few minutes causes the carpal tunnel to flare up, not to mention the eye strain and fatigue that could conceivably surface. Suki clearly doesn't position anything it builds expressly for accessibility, yet it's obvious the Suki Assistant has as much relevance as an assistive technology as more consumer-facing digital butlers like Siri and Alexa. The bottom line, at least in this context, is that many doctors will not only work better if they use Suki to maintain patient records; they'll feel better too, as a side effect of doing their jobs more efficiently.

Feedback on the Suki Assistant, Soni said, has been "really positive." He cited a large healthcare system using Epic as its health records provider being amazed at how well Suki pulls up schedules and how it integrates with Epic's software. He also noted people's pleasure with Suki's ambient note-taking capability. All told, Soni said people in the field are immensely enjoying the Suki tech in their day-to-day lives, adding they appreciate the freedom and flexibility Suki offers because now they "can do their notes [and more] anywhere they have their phone; they don't have to be in front of their computers anymore."

Ultimately, what Soni and his team have done is harness AI to do genuine good for the world by making record-keeping not simply more efficient but accessible too, in a way not dissimilar to how Apple's just-announced Personal Voice and Point and Speak accessibility features change the usability game. As Soni explained, artificial intelligence and machine learning are just tech: soulless, inanimate, inhuman.

"By itself, [AI] doesn't solve anything," he said.

Soni continued: "Suki's primary value is that every pixel in the company is [created] in service of the clinician. That culture is what makes us different. Anyone can build a product, but the special sauce that makes it useful is empathy. That is the magic that is a key part of Suki."

Looking ahead, Soni is tantalized by the possibilities for his work.

"Our mission is to make healthcare technology invisible and assistive so clinicians can focus on what they love: patient care. We want to be able to help every clinician who needs more time back, and we are just scratching the surface of what we can do," he said of his company's future. "There are so many potential applications of our technology, from simplifying the orders process to helping nurses complete their tasks by voice to enabling clinicians to answer patient portal messages by voice. We have an ambitious, exciting roadmap of features we're working on, and I can't wait to show this work to the world."

Steven is a freelance tech journalist covering accessibility and assistive technologies, and is based in San Francisco. His work has appeared in such places as The Verge, TechCrunch, and Macworld. He's also appeared on podcasts, NPR, and television.

See the original post:
Health Tech Startup Suki Is Using Artificial Intelligence To Make Patient Records More Accessible To Every Doctor - Forbes

Reviving the Past with Artificial Intelligence – Caltech

While studying John Singer Sargent's paintings of wealthy women in 19th-century society, Jessica Helfand, a former Caltech artist in residence, had an idea: to search census records to find the identities of those women's servants. "I thought, 'What happens if I paint these women in the style of John Singer Sargent?' It's a sort of cultural restitution," Helfand explained, "reverse engineering the narrative by reclaiming a kind of beauty, style, and majesty."

To recreate a style from history, she turned to technology that, increasingly, is driving the future. "Could AI help me figure out how to paint, say, lace or linen, to capture the folds of clothing in daylight?" Helfand discussed her process in a seminar and discussion moderated by Hillary Mushkin, research professor of art and design in engineering and applied science and the humanities and social sciences. The event, part of Caltech's Visual Culture program, also featured Joanne Jang, product lead at DALL-E, an AI system that generates images based on user-supplied prompts.

While DALL-E has a number of practical applications, from urban planning to clothing design to cooking, the technology also raises new questions. Helfand and Jang spoke about recent advancements in generative AI, ethical considerations when using such tools, and the distinction between artistic intelligence and artificial intelligence.

More here:
Reviving the Past with Artificial Intelligence - Caltech

A Look Back on the Dartmouth Summer Research Project on … – The Dartmouth

At this convention, which took place on campus in the summer of 1956, the term "artificial intelligence" was coined by scientists.

by Kent Friel | 5/19/23 5:10am

For six weeks in the summer of 1956, a group of scientists convened on Dartmouth's campus for the Dartmouth Summer Research Project on Artificial Intelligence. It was at this meeting that the term "artificial intelligence" was coined. Decades later, artificial intelligence has made significant advancements. While the recent onset of programs like ChatGPT is changing the artificial intelligence landscape once again, The Dartmouth investigates the history of artificial intelligence on campus.

That initial conference in 1956 paved the way for the future of artificial intelligence in academia, according to Cade Metz, author of the book Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.

"It set the goals for this field," Metz said. "The way we think about the technology is because of the way it was framed at that conference."

However, the connection between Dartmouth and the birth of AI is not very well-known, according to some students. DALI Lab outreach chair and developer Jason Pak '24 said that he had heard of the conference, but that he didn't think it was widely discussed in the computer science department.

"In general, a lot of CS students don't know a lot about the history of AI at Dartmouth," Pak said. "When I'm taking CS classes, it is not something that I'm actively thinking about."

Even though the connection between Dartmouth and the birth of artificial intelligence is not widely known on campus today, the conference's influence on academic research in AI was far-reaching, Metz said. In fact, four of the conference participants built three of the largest and most influential AI labs at other universities across the country, shifting the nexus of AI research away from Dartmouth.

Conference participants John McCarthy and Marvin Minsky would establish AI labs at Stanford and MIT, respectively, while two other participants, Allen Newell and Herbert Simon, built an AI lab at Carnegie Mellon. Taken together, the labs at MIT, Stanford and Carnegie Mellon drove AI research for decades, Metz said.

Although the conference participants were optimistic, in the following decades they would not achieve many of the feats they believed would be possible with AI. Some participants in the conference, for example, believed that a computer would be able to beat any human in chess within just a decade.

"The goal was to build a machine that could do what the human brain could do," Metz said. "Generally speaking, they didn't think [the development of AI] would take that long."

The conference mostly consisted of brainstorming ideas about how AI should work. However, there was very little written record of the conference, according to computer science professor emeritus Thomas Kurtz, in an interview that is part of the Rauner Special Collections archives.

The conference represented "all kinds of disciplines coming together," Metz said. At that point, AI was a field at the intersection of computer science and psychology, and it had overlaps with other emerging disciplines, such as neuroscience, he added.

Metz said that after the conference, two camps of AI research emerged. One camp believed in what are called neural networks, mathematical systems that learn skills by analyzing data. The idea of neural networks was based on the concept that machines can learn like the human brain, creating new connections and growing over time by responding to real-world input data.

Some of the conference participants would go on to argue that it wasn't possible for machines to learn on their own. Instead, they believed in what is called symbolic AI.

"They felt like you had to build AI rule-by-rule," Metz said. "You had to define intelligence yourself; you had to, rule by rule, line by line, define how intelligence would work."
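The split between the two camps can be made concrete with a toy sketch (mine, not anything from the conference): the symbolic approach writes the desired behavior out explicitly, rule by rule, while a tiny perceptron, the simplest neural-network unit, learns the same behavior purely from labeled examples. Here both reproduce the logical OR function.

```python
# Symbolic camp: intelligence written out explicitly, line by line.
def symbolic_or(x1: int, x2: int) -> int:
    if x1 == 1:
        return 1
    if x2 == 1:
        return 1
    return 0

# Neural-network camp: the same behavior learned from data.
def train_perceptron(examples, epochs=10, lr=0.1):
    """Classic perceptron rule: nudge weights toward each missed example."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Labeled examples of logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def learned_or(x1: int, x2: int) -> int:
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# After training, the learned function matches the hand-written rules.
assert all(symbolic_or(*x) == learned_or(*x) for x, _ in data)
```

The symbolic version encodes the programmer's own definition of the task; the perceptron never sees a rule, only examples. Minsky and Papert's Perceptrons showed the limits of exactly this single-layer unit (it cannot learn XOR), which is part of why neural network research declined after 1969.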

Notably, conference participant Marvin Minsky would go on to cast doubt on the neural network idea, particularly after the 1969 publication of Perceptrons, co-authored by Minsky and mathematician Seymour Papert, which Metz said led to a decline in neural network research.

Over the decades, Minsky adapted his ideas about neural networks, according to Joseph Rosen, a surgery professor at Dartmouth-Hitchcock Medical Center. Rosen first met Minsky in 1989 and remained a close friend of his until Minsky's death in 2016.

Minsky's views on neural networks were complex, Rosen said, but his interest in studying AI was driven by a desire to understand human intelligence and how it worked.

"Marvin was most interested in how computers and AI could help us better understand ourselves," Rosen said.

Around 2010, however, the neural network idea proved to be the way forward, Metz said. Neural networks allow artificial intelligence programs to learn tasks on their own, which has driven the current boom in AI research, he added.

Given the boom in research activity around neural networks, some Dartmouth students feel there is room for growth in AI-related courses and research opportunities. According to Pak, the computer science department currently focuses mostly on research areas other than AI. Of the 64 general computer science courses offered every year, only two are related to AI, according to the computer science department website.

"A lot of our interests are shaped by the classes we take," Pak said. "There is definitely room for more growth in AI-related courses."

There is a high demand for classes related to AI, according to Pak. Despite being a computer science and music double major, he said he could not get into a course called MUS 14.05: Music and Artificial Intelligence because of the demand.

DALI Lab developer and former development lead Samiha Datta '23 said that she is doing her senior thesis on natural language processing, a subfield of AI and machine learning. Datta said that the conference is "pretty well-referenced," but she believes that many students do not know much about the specifics.

She added she thinks the department is aware of, and trying to improve, the lack of courses directly related to AI, and that it is more possible to do AI research at Dartmouth now than it would have been a few years ago, due to the recent onboarding of four new professors who do AI research.

"I feel lucky to be doing research on AI at the same place where the term was coined," Datta said.

Read the original:
A Look Back on the Dartmouth Summer Research Project on ... - The Dartmouth

Artificial intelligence: Implications for strategic plans Inside INdiana … – Inside INdiana Business

At this moment, many business leaders don't need to understand the intricacies of artificial intelligence (AI) or how to interpret raw analytics to know that they need to invest in AI. The destabilization of the economy, ongoing geopolitical tensions, and the residual impact of the COVID-19 pandemic are just a few of the circumstances that have forced us to let go of our preconceived notions about how the future will most likely evolve.

Strategic planning has always been a crucial aspect of business success, but in today's rapidly changing landscape it's more important than ever. Artificial intelligence has the potential to transform the way we approach strategic planning. AI can help companies gather and analyze massive amounts of data, automate processes, and provide valuable insights that inform decision-making.

Acknowledging the Reality of AI Technologies

AI is no longer a thought-provoking, futuristic concept; it has become an indispensable tool for many companies. One of the key advantages of AI is its ability to generate decisions and assess outcomes based on complex data sets. This makes it particularly attractive for leaders seeking to monitor strategic plans. Additionally, AI's capacity for adapting to new rules and information means that it can continuously improve over time. Incorporating machine learning into existing information management systems can take data processing to the next level, resulting in even greater intelligence and insights.

Reflecting Upon the Nature of Strategic Planning

As companies operate in an increasingly dynamic and ever-changing environment, the traditional approach to strategic planning that relies upon periodic reports is no longer sufficient. Companies need to move beyond legacy plans and assumptions and embrace a more dynamic and data-driven approach to strategic planning. That's why the use of AI technology continues to gain traction: it can help companies develop, track, and update strategic plans in a more efficient and effective way. In addition, continuously monitoring and updating strategic plans using AI enables companies to remain aligned with business goals throughout the year, instead of being constrained by periodic planning cycles.

Understanding Organizational and Managerial Implications

As we know, AI has the potential to streamline countless repetitive, low-visibility tasks in a variety of business units. By reducing the burden of these tasks, AI empowers employees to focus on higher value-added activities, ultimately driving innovation. Let's consider additional organizational and managerial implications that come with incorporating this technology into developing, monitoring, and updating strategic plans. Here are a few aspects to keep in mind:

Organizational change: Integrating AI into the development, tracking, and updating of strategic plans can require significant changes to the way work is organized and executed. As a result, organizations may need to update job descriptions, provide training, and potentially reorganize or form new teams to fully leverage the benefits of the technology. This is all in addition to securing talent with the skillsets to deploy AI.

Managerial responsibility: Managers must assume new responsibilities when implementing AI to support strategic plans. While oversight and management of AI systems may be delegated to a unit or department, managers within each department must understand their responsibility for processes and for data collection and management. This requires that they understand the technology, even if only at the most basic level, and ensure that their teams understand it as well, including how it relates to their roles and responsibilities.

Data quality: Given that AI relies on data to make decisions, the quality of the data can have a significant impact on the effectiveness of the technology. Organizations must invest in data management and ensure that data is accurate, complete, secure, and up to date to realize the full potential of AI in strategic planning. This involves organizational investment and management's ability to garner support, implement change, and lead by example.

Creating a Well-Informed Business Strategy

As the business environment continues to experience rapid, and at times unpredictable, change, more companies are recognizing the importance of leveraging AI to develop, track, and update their strategic plans. By embracing the applications of this fast-evolving technology, companies can gain a competitive edge by making better-informed decisions and keeping up with market dynamics. With the ability to analyze complex data sets and generate insights in real time, AI provides a powerful tool for developing agile and responsive strategic plans. By continuously monitoring and updating these plans using AI, companies can ensure they remain relevant and aligned with business goals, and prevent themselves from falling behind competitors who have yet to embrace these new technologies.

Tuesday Strong's company, Strong Performance Management, LLC, is approved by the Indiana Professional Licensing Agency as a provider of continuing education for licensed professional engineers. Learn more here.


View original post here:
Artificial intelligence: Implications for strategic plans Inside INdiana ... - Inside INdiana Business

What if artificial intelligence isn't the apocalypse? – EL PAÍS USA

In just six months, searches for "artificial intelligence" on Google have multiplied by five. ChatGPT, launched on November 30, 2022, already has tens of millions of users. And Sam Altman, the CEO of OpenAI, the company that created ChatGPT, has already appeared before the United States Congress to explain himself and answer questions about the impact of AI. By comparison, it took Mark Zuckerberg 14 years to go to Washington to talk about the role of Facebook in society.

Altman has, oddly, been quite blunt about the technology that his firm produces. "My worst fears are that we can cause significant harm to the world… I think if this technology goes wrong, it can go quite wrong," he said while testifying. However, some analysts have noted that the words about his supposed fears may be carefully calculated, with the intention of encouraging more stringent regulation so as to hinder the rise of competitors to OpenAI, which already occupies the dominant position in the sector.

Heavy and bombastic phrases about the explosion of AI have already spawned their own memes. The term "criti-hype," created in 2021 to describe criticism of a new technology that takes its hype at face value, has become popularized thanks to ChatGPT. A pioneering example of criti-hype was the case of Cambridge Analytica, when the company was accused of harvesting Facebook data to understand and influence the electorate during the 2016 presidential election.

The pinnacle of these statements was the departure of Geoffrey Hinton, known as the "godfather of AI," from Google. He left the company to be able to speak freely about the dangers of AI: "From what we know so far about the functioning of the human brain, our learning process is probably less efficient than that of computers," he told EL PAÍS in an interview after departing from Google.

Meanwhile, the U.K. government's outgoing chief scientific adviser has just said that AI could be as big as the Industrial Revolution was. There are already groups trying to organize so that their trades are not swept away by this technology.

There are too many prophecies and fears about AI to list. But there's also the possibility that the impact of this technology will actually be bearable. What if everything ended up going slower than is predicted, with fewer shake-ups in society and the economy? This opinion is valid, but it hasn't been deeply explored amidst all the hype. While it's hard to deny the impact of AI in many areas, changing the world isn't so simple. Previous revolutions have profoundly changed our way of life, but humans have managed to adapt without much turbulence. Could AI also end up being a subtle revolution?

"At the very least, [AI has caused] a big structural change in what software can do," says Benedict Evans, an independent analyst and former partner at Andreessen Horowitz, one of Silicon Valley's leading venture capital firms. "It will probably allow a lot of new things to be possible. This makes people compare it to the iPhone. It could also be more than that: it could be more comparable to the personal computer, or to the graphical user interface," which allows interaction with the computer through the graphical elements on the screen.

These new AI and machine learning (ML) technologies obviously carry a lot of weight in the tech world. "My concern is not that AI will replace humans," says Meredith Whittaker, president of Signal, a popular messaging app, "but I'm deeply concerned that companies will use it to demean and diminish the position of their workers today." The danger is not that AI will do the job of workers: it's that the introduction of AI by employers will be used to make these jobs worse, further exacerbating inequality.

It must be noted that the new forms of AI still make a lot of mistakes. José Hernández-Orallo, a researcher at the Leverhulme Centre for the Future of Intelligence at Cambridge University, has been studying these so-called hallucinations for years. "At the moment, [AI is] at the level of a know-it-all brother-in-law. But in the future, [it may be] an expert, perhaps knowing more about some subjects than others. This is what causes us anxiety, because we don't yet know in which subjects [the AI] is most reliable," he explains.

"It's impossible to build a system that never fails, because we'll always be asking questions that are more and more complex. At the moment, the systems are capable of the best and the worst… They're very unpredictable," he adds.

But if this technology isn't so mature, why has it had such a sudden and broad impact in the past few months? There are at least two reasons, says Hernández-Orallo: first, commercial pressure. "The biggest problem comes because there is commercial, media and social pressure for these systems to always respond to something, even when they don't know how. If higher thresholds were set, these systems would fail less, but they would almost always answer 'I don't know,' because there are thousands of ways to summarize a text."
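The thresholding idea he describes can be sketched in a few lines. This is purely illustrative: the questions, candidate answers, and confidence scores are invented, and real language models expose their confidence only indirectly, if at all.

```python
# Toy sketch of selective answering: abstain unless confidence clears a bar.
# All names and scores here are hypothetical, for illustration only.

def answer(question, candidates, threshold=0.75):
    """candidates: list of (answer_text, confidence) pairs."""
    best_answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence < threshold:
        return "I don't know"
    return best_answer

# A high-confidence answer is returned...
print(answer("Capital of France?", [("Paris", 0.98), ("Lyon", 0.01)]))
# ...while a low-confidence guess is withheld rather than hallucinated.
print(answer("Summarize this text", [("plausible summary A", 0.40),
                                     ("plausible summary B", 0.35)]))
```

Raising the threshold trades wrong answers for more abstentions, which is exactly the commercial tension he points to: a product that often says "I don't know" fails less but feels less useful.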

The second reason, he notes, is human perception: "We have the impression that an AI system must be 100% correct, like a mixture of a calculator and an encyclopedia. But this isn't the case. For language models, generating a plausible but false text is easy. The same happens with audio, video, code. Humans do it all the time, too. It's especially evident in children, who respond with phrases that sound good, but may not make sense. With kids, we just tell them that's funny, but we don't go to the pediatrician and say that 'my son hallucinates a lot.' In the case of both children and certain types of AI, [there is an ability] to imitate things as best as possible," he explains.

The large impact on the labor market will fade when it's clear that there are tasks that the AI doesn't properly complete. Similarly, when the AI is questioned and we are unsure of the answer it offers, disillusionment will set in. For instance, if a student asks a chatbot about a specific book that they haven't read, it may be difficult for them to determine if the synopsis is completely reliable. In some cases, even a margin of doubt will be unacceptable. It's likely that, in the future, humans using AI will simply assume (and accept) that the technology will make certain errors. But with all the hype, we haven't reached that stage yet.

The long-term realization of AI's limited impact still doesn't mean that the main fear (that AI could become more advanced than human intelligence) will go away. In the collective imagination, this fear becomes akin to the concept of a machine taking control of the world's software and destroying humans.

"People use this concept for everything," Hernández-Orallo shrugs. "The questions [that really need to be asked when thinking about] a general-purpose system like GPT-4 are: how much capacity does it have? Does it need to be more powerful than a human being? And what kind of human being: an average one, the smartest one? What tasks is it specifically going to be used for? All of [the answers to these questions] are very poorly defined at this point."

Matt Beane, a professor at UC Santa Barbara, opines that since we've imagined machines that can replace us, we now fear them. "We have strong evidence that shows how we rely on criticism and fear, as well as imagination and assertiveness, when it comes to thinking about new technologies."

Fear has been the most recurring emotion when it comes to this issue. "We seem to fall into a kind of trance around these [AI] systems, telling these machines about our experiences," says Whittaker. "Reflexively, we think that they're human… We begin to assume that they're listening to us. And if we look at the history of the systems that preceded ChatGPT, it's notable that, while these systems were much less sophisticated, the reaction was often the same. People locked themselves in a surrogate intimate relationship with these systems when they used them. And back then, just like today, the experts were predicting that these systems would soon (always soon, never now) be able to replace humans entirely."


Go here to read the rest:
What if artificial intelligence isn't the apocalypse? - EL PAÍS USA