Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence makes its way into Nebraska hospitals and clinics – Omaha World-Herald

In November 2021, doctors at Midwest Gastrointestinal Associates in Omaha got what might be considered a new assistant.

Called GI Genius, the new computer-aided system was designed to help doctors performing colonoscopies identify in real time suspicious tissue that might be a polyp, or precancerous lesion in the colon.

The Medtronic device puts a little green box on any spot it thinks might be a polyp, using the same display screen a doctor is watching while navigating the colon's twists and turns and searching for suspicious spots.

Finding and removing the lesions is important because it decreases a patient's risk of developing colon cancer, said Dr. Jason Cisler, a gastroenterologist and the practice's quality chairman. Studies have shown that doctors find more polyps if they have two people looking at the screen.


After adopting the system, the group's already good adenoma detection rate (the rate at which doctors find and remove polyps during screening colonoscopies) went up 10% across the board, putting the practice at more than double the national standard. Every 1% increase in the detection rate, according to one study, decreases patients' risk of colon cancer by 3%.

"It makes it a more sensitive screening tool," Cisler said. "And what we're doing is screening. If we're able to prevent more colon cancer, that's the rationale where we're at today."

The device, approved by the Food and Drug Administration in early 2021, uses a type of artificial intelligence. And it's just one of a number of technologies incorporating various forms of artificial intelligence that are already working behind the scenes in Nebraska hospitals and clinics. And with research and development underway around the world, there will be more.

Some are focused on alerting doctors to needed health screenings and identifying hospitalized patients at higher risk of being readmitted to the hospital or developing potentially life-threatening infections. Others monitor patients at risk of falling and analyze the impact of blockages in heart arteries on blood flow.

AI also is being used to take some mundane tasks off the plates of both clerical staff and health care providers, freeing them to do higher-level work.

Some Nebraska Medicine doctors are using a product called Dragon Ambient eXperience, or DAX, from a company called Nuance, to capture conversations between themselves and patients and create notes in patients' charts, said Scott Raymond, the health system's chief information and innovation officer. The physician then reviews and accepts the notes. Some physicians' notes now prove accurate, with no need for further human intervention, between 80% and 90% of the time.

"It's a great use of the technology," he said. "It's taking away physician burnout, the burden of documentation ... where (they) feel they're practicing medicine and not being documentation specialists."

Lincoln's Bryan Health plans to go live with the system in early May. "We think that will (be) a tremendous win for both our patients and our physicians," said Bridgett Ojeda, that system's chief information officer.

Raymond said Microsoft plans to put the artificial intelligence chatbot ChatGPT behind the next version of the program. ChatGPT, developed by OpenAI, has been making headlines around the world in recent months. Users would have to decide whether to adopt it.

Such technologies are making it a fun time to be in health care information technology, Ojeda said. Technologists have spent the last two decades getting information out of paper files and into electronic systems. Now AI and large language models like ChatGPT are allowing them to begin using that data to benefit patients.

Indeed, the authors of a 2022 report from the National Academy of Medicine on AI in health care said their hope is that AI will be the payback for investments in electronic systems.

They caution, however, that such systems could introduce bias if not carefully trained and create concerns about privacy and security.

Raymond acknowledged that standards and guardrails need to be put around the technology, particularly when it comes to the chatbots.

Ojeda noted that other challenges lie in having enough health care data and engineering experts to put the technology to work in ways that help rather than disrupt. With interest and investment in the sector high, they have to focus on selecting tools that will be sustainable and ultimately benefit patients.

But Dr. Steven Leitch, vice president of clinical informatics with CHI Health, stressed that humans, not machines, still are making the decisions.

"What would make it scary is if we don't make the human in charge," he said. "And that's not what health care is about. Doctors and nurses make decisions in health care. It's between people. These tools are amendments; they're only going to be assisting where we allow them to assist."

Raymond, who previously practiced as a pediatric intensive care nurse, said Nebraska Medicine and the University of Nebraska Medical Center are forming a committee to consider how the health system will use chatbot technology in research, education and clinical care.

"It's happening in medicine," he said. "It's happening slowly and carefully with a lot of thought behind it. I think it will change how we deliver care and it will improve care. Our responsibility is to make sure we use the technology in the right way."

The term "artificial intelligence," however, implies that machines are reasoning the way humans do, he said. They're not, although they're good at gathering data, learning from it and starting to glean insights.

In actuality, Leitch said, what most people think of as artificial intelligence really is a broader category that includes a lot of different tools, including machine learning, robotic process automation and the chatbots' natural language processing. Even chatbots, however, aren't having independent thoughts but rather are running very complex sets of rules.

Cisler said the GI Genius system, also in place at Methodist Endoscopy Center, which is owned by Midwest and Methodist, has been trained on millions of images from colonoscopies and is constantly updated.

But the final word on whether what the system flags actually is a polyp rather than a bubble or fold in the colon lies with the doctor, he said.

Such systems, however, also can help sort patients in other ways, and in doing so, make it more likely they get the care they need.

Hastings Family Care in Hastings, Nebraska, part of Mary Lanning Healthcare, recently began using Eyenuk's EyeArt technology, a special camera connected to a computer backed by machine learning that allows providers to screen patients with diabetes for diabetic retinopathy, without dilating their eyes.


People who have diabetes are advised to have their eyes checked once a year for the condition, which can cause vision loss and blindness. Early treatment can stop progression.

But Jessica Sutton, clinic manager, said a lot of diabetics don't get the annual exams, often due to a lack of vision insurance, transportation or time to get to an eye doctor. The clinic saw 980 patients with diabetes last year, 45% of whom had not had the exam. Funding for the equipment came through a local donor and a grant UNMC received to improve diabetic care in rural areas.

Dr. Zachary Frey, director of primary care, said he saw three such patients Wednesday morning. One didn't have insurance. The other two hadn't had an eye exam in a while. Having the device allows the clinic staff to catch such patients when they're already in the office.

Frey said the system essentially provides three results, each of which triggers next steps. If no problem is detected, the patient is cleared until the next year. If the scan shows changes suggesting retinopathy, the patient is referred to an eye doctor for further investigation. If it detects vision-threatening retinopathy, the patient is sent to a retina specialist.
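Frey's three-result description boils down to a simple routing rule layered on top of whatever the camera and model report. The following is only an illustrative Python sketch of that triage step; the result labels and function name are hypothetical, not Eyenuk's actual interface.

```python
# Minimal sketch (not Eyenuk's actual software) of the three-way triage
# logic described above: the screening result determines the next step.

def route_retinopathy_result(result: str) -> str:
    """Map a hypothetical screening result to a follow-up action."""
    if result == "negative":
        return "Cleared; rescreen at next annual visit."
    elif result == "retinopathy_detected":
        return "Refer to an eye doctor for further investigation."
    elif result == "vision_threatening":
        return "Refer to a retina specialist."
    else:
        raise ValueError(f"Unexpected screening result: {result}")

if __name__ == "__main__":
    for r in ["negative", "retinopathy_detected", "vision_threatening"]:
        print(r, "->", route_retinopathy_result(r))
```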


The systems also can be used to keep patients from falling through the cracks in other ways.

Methodist, for instance, has several systems aimed at helping put additional eyes on lung scans.

One searches radiology reports from scans of, say, the abdomen, that incidentally catch part of the lung for key words like "nodule." Those get sent to a team that determines whether there might be a problem, and if so, contacts the patient's doctor, even those in other health systems, said Dr. Adam Wells, a pulmonologist with Methodist Physicians Clinic and Methodist Hospital.
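In outline, that first pass is a keyword search over report text, with humans reviewing whatever gets flagged. Here is a minimal, illustrative Python sketch of such a filter; the report format and keyword list are assumptions, not Methodist's actual system.

```python
# Illustrative sketch only: flag radiology reports whose text mentions
# lung-nodule keywords so a review team can follow up. The keyword list
# and report format are assumptions.

KEYWORDS = ("nodule", "pulmonary mass", "ground-glass opacity")

def flag_incidental_findings(reports):
    """Return IDs of reports that mention any lung-nodule keyword."""
    flagged = []
    for report in reports:
        text = report["text"].lower()
        if any(keyword in text for keyword in KEYWORDS):
            flagged.append(report["id"])
    return flagged

reports = [
    {"id": "CT-001", "text": "Abdominal CT. Incidental 6 mm nodule in the right lung base."},
    {"id": "CT-002", "text": "Abdominal CT. No acute findings."},
]
print(flag_incidental_findings(reports))  # ['CT-001']
```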

That incidental nodule program flagged more than 13,000 scans last year, which triggered nearly 1,000 communications with a physician and ongoing follow-up with more than 700 patient scans, he said. Those identified nearly 30 cancers.

The health system also screens patients with a known risk for lung cancer using low-dose CT scans, Wells said. While radiologists read the scans, an AI program reads behind and categorizes any spots it sees. Nearly 20 cancers were identified last year out of more than 2,300 scheduled screening scans.

Cancer is a common focus. Locally, the Omaha-based MRI medical device company Bot Image, founded by entrepreneur Randall Jones, last year received FDA clearance for an AI-driven software system called ProstatID for detection and diagnosis of prostate cancer.

But there are others. Leitch said CHI uses robotic process automation, or bots, which use sets of rules to identify patients with upcoming visits and check if they're due for a test, such as a lung cancer screening.

If so, it places a pending order in the patient's electronic medical record. If the doctor and patient decide it's not the right time for the test, the provider can remove it. But it takes the burden off the doctor to remember every test a patient might need, particularly on busy days with lots of distractions.
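The logic Leitch describes is a rule-based check run ahead of each visit. A rough Python sketch of the idea follows; the field names, the one-year interval and the order format are illustrative assumptions, not CHI's actual bot.

```python
# Rough sketch of a rule-based pre-visit check: if a patient is due for a
# screening, queue a pending order for the doctor to accept or remove.
# All field names, the interval and the order format are assumptions.

from datetime import date, timedelta

LUNG_SCREENING_INTERVAL = timedelta(days=365)  # assumed annual interval

def pending_orders_for_visit(patient, today=None):
    today = today or date.today()
    orders = []
    last = patient.get("last_lung_screening")
    if patient.get("lung_cancer_risk") and (last is None or today - last >= LUNG_SCREENING_INTERVAL):
        orders.append({"patient_id": patient["id"],
                       "order": "low-dose CT lung screening",
                       "status": "pending"})
    return orders

patient = {"id": "A123", "lung_cancer_risk": True, "last_lung_screening": date(2022, 1, 10)}
print(pending_orders_for_visit(patient, today=date(2023, 4, 15)))
```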

Other systems can be used to help monitor hospitalized patients. Bryan for several years has used a fall-prevention system developed by Lincoln-based Ocuvera, Ojeda said. It uses 3-D cameras and an algorithm to predict patient movement and alert nurses before a fall can occur.

Epic Systems, she said, has developed five different predictive models that monitor hospitalized patients for other risks, including sepsis and hospital readmission, and alert clinicians so they can respond quickly.

Health systems that use Epic's health records, including Bryan, CHI and Nebraska Medicine, can then adopt them and build them out for their patient populations, she said.

One of the latest, which CHI has adopted and Bryan is developing, is a model that helps predict when patients will be no-shows for clinic appointments.

If providers can head off missed appointments by, say, providing transportation, Leitch said, they can keep patients healthier.

"If we do what the evidence shows us, as we learn more and more, it's going to make it easier for us to deliver the care the right way every time," Leitch said.


Here is the original post:
Artificial intelligence makes its way into Nebraska hospitals and clinics - Omaha World-Herald

I can’t wait for artificial intelligence to take this job away from humans – Tom’s Guide

Adults have to do a lot of unpleasant jobs; it's part of the gig. Taking out the trash, cleaning the toilet and making the daily school run are unavoidable when it comes to keeping life running smoothly.

But there's one particular job that fills me with dread: calling a helpline.

Every time I pick up the phone to discuss tax codes, remortgage rates, insurance quotes, doctors appointments or some other exciting aspect of modern life, my knees go slack and my head starts to pound. Cue generic hold music and a constant robotic reminder of my place in the virtual queue.

Once you do get through to a person, things rarely improve. The poor soul on the other end of the line guides me through mundane security questions before reading from a pre-prepared script. Often, they fail to offer up a single noteworthy piece of advice when questioned directly.

During one of these recent calls, it occurred to me that everyone involved would benefit from letting artificial intelligence handle the task. I don't mean the basic interactive voice response (IVR) program that routes your call based on how you answer recorded questions; I mean a full conversational AI agent capable of discussing and actioning my requests with no human input.

I'd get through the process faster (because the organization wouldn't need to wait for a human agent to become available) and it wouldn't require a living, breathing person to spend their days on the phone with an aggravated person like me. Similarly, an AI doesn't need to clock off at the end of a shift, so the call could be handled any time of the day or night.

Plenty of companies have implemented browser or app-based chat clients but, the fact is, a huge number of people still prefer to pick up the phone and do things by voice. And I think most industry leaders recognize this.

Humana, a healthcare insurance provider with over 13 million customers, partnered with IBM's Data and AI Expert Labs in 2019 to implement natural language understanding (NLU) software into its call centers to respond to spoken sentences. The machines either rerouted the call to the relevant person or, where necessary, simply provided the information. This came after Humana recognized that 60% of the million-or-so calls they were getting each month were just queries for information.

According to a blog post from IBM, "The Voice Assistant uses significant speech customization with seven language models and two acoustic models, each targeted to a specific type of user input collected by Humana."

"Through speech customization training, the solution achieves an average of 90-95% sentence error rate accuracy level on the significant data inputs. The implementation handles several sub-intents within the major groupings of eligibility, benefits, claims, authorization and referrals, enabling Humana to quickly answer questions that were never answerable before."
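Stripped of the speech models, the routing problem is to map a transcribed utterance to one of those intent groupings. The toy Python sketch below shows the idea with a simple keyword lookup; IBM's Voice Assistant uses trained language and acoustic models, so treat this as an illustration of the concept only.

```python
# Toy illustration (not IBM's Voice Assistant) of routing a transcribed
# caller utterance to one of the intent groupings named in the article.
# Real systems use trained NLU models; this keyword lookup only shows the idea.

INTENT_KEYWORDS = {
    "eligibility": ["eligible", "eligibility", "covered"],
    "benefits": ["benefit", "copay", "deductible"],
    "claims": ["claim", "reimburse"],
    "authorization": ["authorization", "prior auth", "approve"],
    "referrals": ["referral", "refer me"],
}

def route_utterance(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "transfer_to_human"

print(route_utterance("What is my deductible for an MRI?"))  # benefits
print(route_utterance("I need prior auth for surgery."))     # authorization
```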

The obvious stumbling block for most companies will be the cost. After all, OpenAI's chatbot ChatGPT charges for API access while Meta's LLaMA is partially open-source but doesn't permit commercial use.

However, given time, the cost for implementing machine learning solutions will come down. For example, Databricks, a U.S.-based enterprise company, recently launched Dolly 2.0, a 12-billion-parameter model that's completely open source. It will allow companies and organizations to create large language models (LLMs) without having to pay costly API fees to the likes of Microsoft, Google or Meta. With more of these advancements being made, the AI adoption rate for small and medium-sized businesses will (and should) increase.

According to recent research by the industry analyst firm Gartner, around 10% of so-called agent interactions will be performed by conversational AI by 2026. At present, the number stands at around 1.6%.

"Many organizations are challenged by agent staff shortages and the need to curtail labor expenses, which can represent up to 95 percent of contact center costs, explained Daniel O'Connell, a VP analyst at Gartner. Conversational AI makes agents more efficient and effective, while also improving the customer experience."

You could even make the experience a bit more fun. Imagine if a company got the license to utilize James Earl Jones' voice for its call center AI. I could spend a half-hour discussing insurance renewal rates with Darth Vader himself.


I'm not saying there won't be teething problems; AI can struggle with things like regional dialects or slang terms and there are more deep-rooted issues like unconscious bias. And if a company simply opts for a one-size-fits-all AI approach, rather than tailoring it to specific customer requirements, we won't be any better off.

Zooming out for a second, I appreciate that we're yet to fully consider all the ethical questions posed by the rapid advancements in AI. Regulation will surely become a factor (if it can keep pace) and upskilling a workforce to become comfortable with the new system will be something for industry leaders and educational institutions to grapple with.

But I still think a good place to start is letting the robots take care of mundane helpline tasks; it's for the good of humanity.


Original post:
I can't wait for artificial intelligence to take this job away from humans - Tom's Guide

Amazon Joins the Rush Into Artificial Intelligence – Investopedia


Amazon (AMZN) became the latest big tech firm to go all-in on artificial intelligence (AI). The company announced that it is offering new AI language models through its Amazon Web Services (AWS) cloud platform. Called Amazon Bedrock, the product will allow customers to boost their software with AI systems that create text, similar to OpenAI's ChatGPT chatbot.

Swami Sivasubramanian, vice president of Data and Machine Learning at AWS, said that Amazon's mission "is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI." He indicated that this is just the beginning of what the company believes "will be the next wave of machine learning."

The competition in the AI field is heating up. In March, OpenAI released its latest version of ChatGPT, and Meta Platforms (META), Microsoft (MSFT), and Alphabet's (GOOGL) Google all recently introduced their moves into the sector.

Sivasubramanian added that "we are truly at an exciting inflection point in the widespread adoption of machine learning" and that most customer experiences and applications "will be reinvented with generative AI."

The news helped lift Amazon shares 4.7% on April 13.

Read the original here:
Amazon Joins the Rush Into Artificial Intelligence - Investopedia

Everything to Know About Artificial Intelligence, or AI – The New York Times

Welcome to On Tech: A.I., a pop-up newsletter that will teach you about artificial intelligence, especially the new breed of chatbots like ChatGPT, all in only five days.

We'll tackle some of the big themes and questions around A.I. By the end of the week, you'll know enough to command the room at a dinner party, or impress your co-workers.

Every day, we'll give you a quiz and a homework assignment. (A pro tip: Ask the chatbots themselves about how they work, or about concepts you don't understand. Answering such questions is one of their most useful skills. But keep in mind that they sometimes get things wrong.)

Let's start at the beginning.

The term artificial intelligence gets tossed around a lot to describe robots, self-driving cars, facial recognition technology and almost anything else that seems vaguely futuristic.

A group of academics coined the term in the late 1950s as they set out to build a machine that could do anything the human brain could do: skills like reasoning, problem-solving, learning new tasks and communicating using natural language.

Progress was relatively slow until around 2012, when a single idea shifted the entire field.

It was called a neural network. That may sound like a computerized brain, but, really, it's a mathematical system that learns skills by finding statistical patterns in enormous amounts of data. By analyzing thousands of cat photos, for instance, it can learn to recognize a cat. Neural networks enable Siri and Alexa to understand what you're saying, identify people and objects in Google Photos and instantly translate dozens of languages.
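For readers who want to see the "finding statistical patterns" idea concretely, here is a toy sketch using scikit-learn's small neural network classifier; the two-number "photos" and labels are invented purely for illustration.

```python
# Toy demonstration of the idea above: a neural network learns a pattern
# from labeled examples. Here a tiny network separates two invented
# clusters of points; the data are made up for illustration only.

from sklearn.neural_network import MLPClassifier

# "Photos" reduced to two numeric features; 0 = not-cat, 1 = cat.
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.9, 0.8], [0.8, 0.9], [0.85, 0.75]]
y = [0, 0, 0, 1, 1, 1]

net = MLPClassifier(solver="lbfgs", hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[0.12, 0.18], [0.88, 0.82]]))  # expected: [0 1]
```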

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

The next big change: large language models. Around 2018, companies like Google, Microsoft and OpenAI began building neural networks that were trained on vast amounts of text from the internet, including Wikipedia articles, digital books and academic papers.

Somewhat to the experts' surprise, these systems learned to write unique prose and computer code and carry on sophisticated conversations. This is sometimes called generative A.I. (More on that later this week.)

The result: ChatGPT and other chatbots are now poised to change our everyday lives in dramatic ways. Over the next four days, we will explain the technology behind these bots, help you understand their abilities and limitations, and show where they are headed in the years to come.

Tuesday: How do chatbots work?

Wednesday: How can they go wrong?

Thursday: How can you use them right now?

Friday: Where are they headed?

You've got some homework to do! One of the best ways to understand A.I. is to use it yourself.

The first step is to sign up for these chatbots. Bing and Bard chatbots are being rolled out slowly, and you may need to get on their waiting lists for access. ChatGPT currently has no waiting list, but requires setting up a free account.

Once you're ready, just type your words (known as a prompt) into the text box, and the chatbot will reply. You may want to play around with different prompts and see if you get a different response.

Today's assignment: Ask ChatGPT or one of its competitors to write a cover letter for your dream job, like, say, a NASA astronaut.
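If you would rather script the assignment than use the web page, the same prompt can be sent through OpenAI's API. Below is a minimal sketch, assuming the openai Python package (its pre-1.0 interface) and an API key in your environment; the model name and prompt are just examples.

```python
# Minimal sketch of the same assignment done through OpenAI's API rather
# than the chat web page. Assumes the openai Python package (pre-1.0
# interface) and an API key in the OPENAI_API_KEY environment variable.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[{"role": "user",
               "content": "Write a short cover letter for a NASA astronaut job."}],
)

print(response["choices"][0]["message"]["content"])
```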

We want to see the results! Share it as a comment and see what other people have submitted.

We've been covering developments in artificial intelligence for a long time, and we've both written recent books on the subject. But this moment feels distinctly different from what's come before. We recently chatted on Slack with our editor, Adam Pasick, about how we're each approaching this unique point in time.

Cade: The technologies driving the new wave of chatbots have been percolating for years. But the release of ChatGPT really opened people's eyes. It set off a new arms race across Silicon Valley. Tech giants like Google and Meta had been reluctant to release this technology, but now they're racing to compete with OpenAI.

Kevin: Yeah, it's crazy out there; I feel like I've got vertigo. There's a natural inclination to be skeptical of tech trends. Wasn't crypto supposed to change everything? Weren't we all just talking about the metaverse? But it feels different with A.I., in part because millions of users are already experiencing the benefits. I've interviewed teachers, filmmakers and engineers who are using tools like ChatGPT every day. And it came out only four months ago!

Adam: How do you balance the excitement out there with caution about where this could go?

Cade: A.I. is not as powerful as it might seem. If you take a step back, you realize that these systems can't duplicate our common sense or reasoning in full. Remember the hype around self-driving cars: Were those cars impressive? Yes, remarkably so. Were they ready to replace human drivers? Not by a long shot.

Kevin: I suspect that tools like ChatGPT are actually more powerful than they seem. We haven't yet discovered everything they can do. And, at the risk of getting too existential, I'm not sure these models work so differently than our brains. Isn't a lot of human reasoning just recognizing patterns and predicting what comes next?

Cade: These systems mimic humans in some ways but not in others. They exhibit what we can rightly call intelligence. But as OpenAI's chief executive told me, this is an alien intelligence. So, yes, they will do things that surprise us. But they can also fool us into thinking they are more like us than they really are. They are both powerful and flawed.

Kevin: Sounds like some humans I know!


Neural network: A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: The first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks don't always understand what happens in between.

Large language model: A type of neural network that learns skills, including generating prose, conducting conversations and writing computer code, by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.

Generative A.I.: Technology that creates content, including text, images, video and computer code, by identifying patterns in large quantities of training data, and then creating new, original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.
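The "predict the next word" point in the glossary can be illustrated with a toy counter over a handful of made-up words; real large language models do this with neural networks trained on vast corpora, so treat this only as a sketch of the idea.

```python
# Toy next-word predictor: count which word follows which in a tiny,
# made-up corpus, then predict the most frequent follower. Real LLMs do
# this with neural networks over vast amounts of text.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # 'cat' (the most frequent word after 'the')
print(predict_next("cat"))  # 'sat' or 'ate', depending on the counts
```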


Go here to read the rest:
Everything to Know About Artificial Intelligence, or AI - The New York Times

ChatGPT: Who and what is behind the artificial intelligence tool changing the tech landscape – Fox Business


Since the introduction of the artificial intelligence tool ChatGPT in November 2022, the new technology has displayed the power and potential that AI can have on our lives.

Sam Altman, CEO of OpenAI, the company behind ChatGPT, admitted earlier this month that he was even "a little bit scared" of the powerful technology his company is developing. While Altman predicted that artificial intelligence "will eliminate a lot of current jobs," he has said the technology will be a net positive for humans because of the potential to transform industries like education.

But who is Sam Altman, and what is behind this new technology?



ChatGPT is an artificial intelligence chatbot whose core function is to mimic a human in conversation. Users across the world have used ChatGPT to write emails, debug computer programs, answer homework questions, play games, write stories and song lyrics, and much more.

"It is going to eliminate a lot of current jobs, thats true. We can make much better ones. The reason to develop AI at all, in terms of impact on our lives and improving our lives and upside, this will be the greatest technology humanity has yet developed," Altman said in a recent interview with ABC News. "The promise of this technology, one of the ones that I'm most excited about is the ability to provide individual learning great individual learning for each student."

Altman has made numerous business deals over the last several years, because of the potential of his company's technology.

In January, OpenAI expanded its partnership with Microsoft, which will add almost $10 billion in new capital to the company. Microsoft, as a result of the deal, will likely acquire a large chunk of the company's profits over the next several years (Microsoft previously invested $1 billion in OpenAI three years ago).

Furthermore, Microsoft is planning on implementing the tool into its existing ecosystem to be used in software like Microsoft PowerPoint, Excel and Teams.

Although the influx of cash has provided the company with more resources, it has reportedly divided its 300-some staffers and angered some in the field of AI, who believe the once humanitarian company is now primarily concerned with making a buck.

Dr. Mike Capps is the co-founder of Diveplane, an ethical AI company based in Raleigh, N.C. He was formerly president of Epic Games, the creators of Fortnite and Gears of War, for nearly a decade. He does not believe OpenAI would have been nearly as successful without that significant connection to Microsoft, but also expressed disappointment in some of the business decisions made by the ChatGPT creators.

"I feel a little bit like they sold their soul in order to speed things up, and they succeeded," he said.

Business moguls and AI researchers have also pointed to OpenAI's broken promise of making ChatGPT open source, which would allow businesses and computer scientists to manipulate and tailor the tool to their liking, as another sign of the company's increasingly profit-centric mindset.

"They swore up and down that they were going to give it all away because its the best way to handle this space, and now theyre not, theyre pulling things down, so you cant recreate their work. Its super frustrating," Capps added.

NASA Jet Propulsion Laboratory (JPL) Chief Technology and Innovation Officer Dr. Chris Mattmann told Fox News Digital that the trajectory of OpenAI directly mirrors the evolution of the Apache Software Foundation.



Created in 1999, Apache started out as an American nonprofit corporation to support open-source software projects, but over time became a sort of "toxic place" that abandoned many of its original altruistic intents to pursue commercial interests, according to Mattmann.

"Even in a meritocracy there are still political controls and committees. It works a lot like dark money in government. Its almost like the notion of dark money in tech," he said.

While their goals in the beginning largely revolved around data sharing agreements and scraping data for curation, the decision to take big donations created an inherent tension and pushed OpenAI into a situation where they had to work for the benefit of their investors.

"They dont release their model. I would even give Meta more credit for releasing Llama and allowing people to download it. You cant do that with OpenAI. You cant download their models. You have to pay to play and thats a lot different than what they said in the beginning," Mattmann added.

Earlier this year, OpenAI announced a waitlist for a commercial version of ChatGPT that will allow customers to sign up for a version of the bot that can be integrated into various products and businesses, for a fee.

While many technologies over the last two decades have seen fervent consumer interest, none have seen the type of rapid interest there is for ChatGPT.

"Remember how big mobile got? Its so much faster than that. Remember how big Twitter got, this is faster," Capps claimed.

The first iteration of the artificial intelligence tool launched in November 2022 and crossed 1 million users in just 5 days.

In comparison, it took Netflix 41 months, Facebook 10 months and Instagram nearly three months to reach similar metrics.

The massive success of the technology has spurred countless debates about how and where it should be implemented and about fact versus science fiction, and it has recentered artificial intelligence as the hot topic in Silicon Valley boardrooms after years of big promises and false starts.

"It is so good at certain things and absolutely inappropriate forever and always at other things, and we just have to use it correctly," Capps added.

Large corporations are split on ChatGPT. While some have implemented the technology to improve the user experience, such as Netflix, others have outright banned ChatGPT in their ecosystems because of the lack of available knowledge and level of uncertainty.

ChatGPT has the potential to supplant entire businesses. For example, a company that created an artificial intelligence to read through and analyze legal documents could utilize ChatGPT, which can carry out the same functions at a much lower cost.



ChatGPT, as opposed to large language models (LLMs) specifically designed for use in a specific area of expertise, is no savant in specialized knowledge, but can provide relatively detailed responses on a wide range of topics, even if the output is susceptible to inaccuracies and surface-level observations.

Although the generative AI technology behind ChatGPT has been around for several years, the streamlined user experience of OpenAI's tool and incremental improvements to the algorithm have propelled it to everyday use alongside phones and social media.

You ask it questions and have a conversation with it, and it tries to statistically predict the best response, typically a word, sentence or paragraph, drawing on a significant portion of all the written text publicly available. The more data dumped in, the better the AI typically performs.

These forms of AI often use neural network-based models, which assign probabilities across a large matrix of variables and filter them through a vast network of connections to produce an output.
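As a bare-bones illustration of turning such scores into an output, the sketch below converts invented scores over a few candidate next words into probabilities with a softmax and ranks them; real models compute these scores with billions of learned parameters.

```python
# Bare-bones illustration of the "assign probabilities to produce an output"
# idea: turn raw scores (logits) over candidate next words into a probability
# distribution with a softmax. The words and scores are invented.

import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["deductible", "claim", "weather", "referral"]
logits = [2.1, 1.4, -0.5, 0.3]  # invented scores for the next word

probs = softmax(logits)
for word, p in sorted(zip(candidates, probs), key=lambda pair: -pair[1]):
    print(f"{word}: {p:.2f}")
# The highest-probability candidate would be chosen (or sampled) as the output.
```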

The AI tool can generate and debug code to help build applications and websites, write emails and essays, offer quick answers to speed up research, create marketing and SEO strategies for various businesses and provide ideas to bolster creative thinking.

The program is phenomenal for people who don't have English as their first language, those who want assistance writing a letter, or people trying to find the top cities to visit for travel, according to Capps, but it should not be used in situations that can affect humans' health or livelihood.

"You dont want to ask ChatGPT how much Tylenol to give your kid when theyre sick because that would just be irresponsible," Capps said.

GPT-3, the version of ChatGPT that propelled OpenAI to new levels of popularity, uses over 175 billion statistical connections and is trained on two-thirds of the internet, including Wikipedia and a large array of books. As time goes on, the company refines and expands the data set on which the tool is trained.

The newest iteration of the tool, GPT-4, was unveiled earlier this month. OpenAI claims it can provide more information, understand and respond to images, process eight times more words than its predecessor and is less likely to respond to malicious requests.

But ChatGPT is still essentially a black box, where the lineage and origin of the information are not immediately apparent. When hallucinations arise in its output, users cannot determine where the inaccurate information was sourced from, underscoring the importance of human-driven review.

In a March 16 interview with ABC News, Altman acknowledged concerns about ChatGPT's sometimes unreliable behavior.

"The thing that I try to caution people the most is what we call the hallucinations problem. The model will confidently state things as if they were facts that are entirely made up."

Critics have also claimed ChatGPT has a liberal bias, a "shortcoming" that Altman has said the company is working to improve. Generative AI is susceptible to biases from a number of different vectors, including the input of the user, the dataset it is trained on and the parameters and safeguards set by developers.

Altman said in early February that the company was altering ChatGPT's default settings to be "more neutral" and "empower users" to get the system to behave in a way that mirrors their own personal preferences "within broad bounds."

"[We're] talking to various policy and safety experts, getting audits of the system to try to address these issues and put something out that we think is safe and good," Altman told ABC News "And again, we won't get it perfect the first time, but it's so important to learn the lessons and find the edges while the stakes are relatively low."

While Altman works to quell concerns about biases inside his own system, he also has drawn scrutiny for his political contributions.



In addition to hosting a fundraiser for Democratic presidential candidate Andrew Yang at his San Francisco home in late 2019, Altman has donated over $1 million to Democrats and Democratic groups, including $600,000 to the Senate Majority PAC, $250,000 to the American Bridge PAC, $100,000 to the Biden Victory Fund, and over $150,000 to the Democratic National Committee (DNC).

In 2014, Altman co-hosted a fundraiser for the DNC at Y Combinator's offices in Mountain View, California, which was headlined by then-President Obama.

During Altman's tenure from 2014 until 2019 as the CEO of Y Combinator, a startup incubator that launched Airbnb, DoorDash and Dropbox, he talked about China in multiple blog posts and interviews. In 2017, Altman said that he "felt more comfortable discussing controversial ideas in Beijing than in San Francisco" and that he felt an expansion into China was "important" because "some of the most talented entrepreneurs" he has met have been operating there.



Altman also has ties to many prominent figures in the tech landscape.

Altman founded the San Francisco-based company OpenAI in 2015 with the help of big financial contributions from Silicon Valley's heavyweights, including Tesla and Twitter CEO Elon Musk, PayPal co-founder Peter Thiel and LinkedIn co-founder Reid Hoffman.

At the time, the company was a tiny nonprofit laboratory focused on academic research but has since grown into a tech powerhouse (valued at $29 billion) and a major disruptor within the industry. The company's continuing strides in AI have prompted Google to declare a "code red" internally over fears that ChatGPT could displace its search engine monolith.

OpenAI raised around $130 million from 2016 to 2019, according to a Fox News Digital review of its tax forms. During that time, the group steered money toward numerous AI initiatives.



OpenAI spent $10.5 million in 2016 establishing its research team, setting goals, and choosing its first projects, according to its tax forms. The group also launched OpenAI Gym Beta, published nearly half a dozen comprehensive research papers, held a self-organized machine learning conference, developed infrastructure, and built a safety team.

The following year, in 2017, OpenAI spent $28 million on initiatives such as demonstrating "reinforcement learning algorithms could be scaled to beat the world's best humans at a restricted version of an advanced, multiplayer game called Dota2." The nonprofit also participated in a report on the potential malicious uses of AI and published those findings, according to its tax forms.

In 2018, the group spent nearly $50 million when it launched the OpenAI Fellows and Scholars programs. They also trained a "human-like robot hand to manipulate physical objects with unprecedented dexterity and scaling its reinforcement learning algorithms to beat a team of 99.95th percentile Dota 2 players."



And in 2019, OpenAI put nearly $2 million toward creating "OpenAI, L.P. ('Partnership'), a new capped-profit company to help rapidly scale investments in compute and talent while including checks and balances in furtherance of the organization's mission." Through its control of the partnership, the group's reinforcement learning algorithms "became the first AI to beat the world champions in an esports game," that year's tax records state.

"These same algorithms were then used to train a pair of neural networks to solve a Rubik's Cube with a human-like hand, requiring unprecedented dexterity," the tax records state.

Altman's other nonprofit, OpenResearch, has received around $24 million since its inception, TechCrunch reported.


More:
ChatGPT: Who and what is behind the artificial intelligence tool changing the tech landscape - Fox Business