Archive for the ‘Artificial Intelligence’ Category

What is Artificial Intelligence as a Service (AIaaS)? | ITBE – IT Business Edge

Software as a Service, or SaaS, is a concept that is familiar to many. Long-time Photoshop users will recall when Adobe stopped selling its product and instead shifted to a subscriber model. Netflix and Disney+ are essentially Movies as a Service, particularly at a time when ownership of physical media is losing ground to media streaming. Artificial Intelligence as a Service (AIaaS) has been growing in market adoption in recent years, but the uninitiated might be asking: what exactly is it?

In a nutshell, AIaaS is what happens when a company develops an AI and licenses its use to another company, most often to solve a very specific problem. For example, Bill owns a company that sells hotdogs through his e-commerce site. While Bill offers a free returns policy for dissatisfied customers, he lacks the time to provide decent customer support and rarely replies to emails. Separately, a software developer has created a chatbot that uses natural language processing to handle most customer inquiries, and often solves the issue or answers a question before human intervention is even required. For a monthly fee, the chatbot is licensed to the hotdog vendor and implemented on his website. Now the bot is solving 80% of customer issues, leaving Bill with the time to respond to the remaining 20%. But Bill is still too preoccupied making hotdogs, so he subscribes to a service like Flowrite, which uses AI to intelligently write his emails on the fly.
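To make the subscription model concrete, here is a minimal sketch of what calling a hosted chatbot service might look like from Bill's storefront code. The endpoint, API key, and response fields are hypothetical assumptions, not any particular vendor's API.

```python
# Minimal sketch of how a small storefront might call a subscribed chatbot service.
# The endpoint, API key, and response fields are hypothetical, not a real vendor's API.
import requests

API_URL = "https://api.example-chatbot.com/v1/reply"  # hypothetical endpoint
API_KEY = "demo-key-123"                              # hypothetical key tied to the subscription

def answer_customer(message: str) -> dict:
    """Send a customer message to the licensed chatbot and return its structured reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"message": message},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"reply": "...", "resolved": true}

if __name__ == "__main__":
    result = answer_customer("My hotdogs arrived cold. Can I get a refund?")
    if result.get("resolved"):
        print("Bot handled it:", result["reply"])
    else:
        print("Escalate to a human:", result["reply"])
```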

AI is also being put to work analyzing large sets of data to make predictions, streamline information storage, and even detect fraudulent activity. Amazon's personalized recommendation engine, an AI powered by machine learning, is now available as a licensed service to other retailers, video streaming platforms, and even the finance industry. Google's suite of AI services ranges from natural language processing and handwriting recognition to real-time captioning and translation. IBM's groundbreaking AI, Watson, is now being deployed to fight financial crime, target advertisements based on real-time weather analysis, and analyze data to help hospitals make treatment decisions.
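As an illustration of the recommendation side of these services, here is a toy item-to-item recommender built from scratch on made-up purchase data. It is not Amazon's actual service, just a sketch of the similarity-based idea such engines are built on.

```python
# Toy item-to-item recommender on made-up purchase data, illustrating the
# similarity idea behind recommendation engines (not Amazon's actual service).
import numpy as np

# Rows = users, columns = items; values are purchase counts (synthetic).
interactions = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 0, 4, 1],
    [0, 2, 1, 3],
], dtype=float)

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalized = matrix / np.clip(norms, 1e-9, None)
    return normalized.T @ normalized

def recommend(user_idx: int, top_k: int = 2) -> list:
    """Score unseen items by their similarity to items the user already bought."""
    sims = item_similarity(interactions)
    scores = sims @ interactions[user_idx]
    scores[interactions[user_idx] > 0] = -np.inf  # skip items already purchased
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend(user_idx=0))  # indices of the recommended items, e.g. [1, 3]
```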



Machine learning AIs improve with time, usage, and development. Some, like YouTube's recommendation engine, have become so sophisticated that it sometimes feels like we have entire television stations tailored perfectly to our interests. Others, like the language model GPT-3, produce entire volumes of text that are nearly indistinguishable from an authentic human source.
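For readers who want a taste of this kind of text generation, a minimal sketch using the open-source Hugging Face transformers library follows, with the freely available GPT-2 model standing in for GPT-3 (which is only accessible through a hosted API).

```python
# A quick taste of language-model text generation with the Hugging Face
# transformers library, using the open GPT-2 model as a small stand-in for GPT-3.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial Intelligence as a Service lets small businesses"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```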

Microsoft has even put GPT-3 to use translating conversational language into working computer code, potentially opening up a new frontier in how software is written and giving coding novices a fighting chance. Microsoft has also partnered with NVIDIA to create a new natural language generation model with roughly three times as many parameters as GPT-3. Improvements in language recognition and generation have obvious carryover benefits for the future development of chatbots, home assistants, and document generation as well.

Industrial giant Siemens has announced that it is integrating Google's AIaaS solutions to streamline and analyze data and predict, for instance, the rate of wear and tear on machinery on the factory floor. This could reduce maintenance costs, improve the scheduling of routine inspections, and prevent unexpected equipment failures.
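The Siemens and Google setup itself is not public code, but the general pattern (sensor features in, a wear or remaining-life estimate out) can be sketched with scikit-learn on synthetic data. The feature names and target below are invented for illustration only.

```python
# Illustrative wear-prediction sketch with scikit-learn on synthetic sensor data.
# Not Siemens' or Google's actual pipeline: just the general pattern of
# sensor features in, estimated remaining useful life out.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 10_000, n),   # operating hours (synthetic)
    rng.normal(1.0, 0.3, n),     # vibration level, arbitrary units (synthetic)
    rng.normal(60, 10, n),       # bearing temperature in Celsius (synthetic)
])
# Synthetic target: remaining useful life in hours, degraded by usage and stress.
y = 12_000 - X[:, 0] - 800 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 300, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out machines: {model.score(X_test, y_test):.2f}")
```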

AIaaS is a rapidly growing field, and there will be many more niches discovered that it can fill for years to come.



Putting artificial intelligence at the heart of health care with help from MIT – MIT News

Artificial intelligence is transforming industries around the world and health care is no exception. A recent Mayo Clinic study found that AI-enhanced electrocardiograms (ECGs) have the potential to save lives by speeding diagnosis and treatment in patients with heart failure who are seen in the emergency room.

The lead author of the study is Demilade "Demi" Adedinsewo, a noninvasive cardiologist at the Mayo Clinic who is actively integrating the latest AI advancements into cardiac care, drawing largely on her learning experience with MIT Professional Education.

Identifying AI opportunities in health care

A dedicated practitioner, Adedinsewo is a Mayo Clinic Florida Women's Health Scholar and director of research for the Cardiovascular Disease Fellowship program. Her clinical research interests include cardiovascular disease prevention, women's heart health, cardiovascular health disparities, and the use of digital tools in cardiovascular disease management.

Adedinsewo's interest in AI emerged toward the end of her cardiology fellowship, when she began learning about its potential to transform the field of health care. "I started to wonder how we could leverage AI tools in my field to enhance health equity and alleviate cardiovascular care disparities," she says.

During her fellowship at the Mayo Clinic, Adedinsewo began looking at how AI could be used with ECGs to improve clinical care. To determine the effectiveness of the approach, the team retrospectively used deep learning to analyze ECG results from patients with shortness of breath. They then compared the results with the current standard of care, a blood test analysis, to determine whether the AI enhancement improved the diagnosis of cardiomyopathy, a condition in which the heart is unable to adequately pump blood to the rest of the body. While she understood the clinical implications of the research, she found the AI components challenging.

"Even though I have a medical degree and a master's degree in public health, those credentials aren't really sufficient to work in this space," Adedinsewo says. "I began looking for an opportunity to learn more about AI so that I could speak the language, bridge the gap, and bring those game-changing tools to my field."

Bridging the gap at MIT

Adedinsewo's desire to bring together advanced data science and clinical care led her to MIT Professional Education, where she recently completed the Professional Certificate Program in Machine Learning & AI. To date, she has completed nine courses, including AI Strategies and Roadmap.

"All of the courses were great," Adedinsewo says. "I especially appreciated how the faculty, like professors Regina Barzilay, Tommi Jaakkola, and Stefanie Jegelka, provided practical examples from health care and non-health care fields to illustrate what we were learning."

Adedinsewo's goals align closely with those of Barzilay, the AI lead for the MIT Jameel Clinic for Machine Learning in Health. "There are so many areas of health care that can benefit from AI," Barzilay says. "It's exciting to see practitioners like Demi join the conversation and help identify new ideas for high-impact AI solutions."

Adedinsewo also valued the opportunity to work and learn within the greater MIT community alongside accomplished peers from around the world, explaining that she learned different things from each person. "It was great to get different perspectives from course participants who deploy AI in other industries," she says.

Putting knowledge into action

Armed with her updated AI toolkit, Adedinsewo was able to make meaningful contributions to Mayo Clinic's research. The team successfully completed and published their ECG project in August 2020, with promising results. In analyzing the ECGs of about 1,600 patients, the AI-enhanced method was both faster and more effective, outperforming the standard blood tests with an area under the curve (AUC) of 0.89 versus 0.80. This improvement could enhance health outcomes by improving diagnostic accuracy and increasing the speed with which patients receive appropriate care.
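For readers curious how such an AUC comparison is computed, the sketch below uses scikit-learn's roc_auc_score on synthetic stand-in scores; it does not reproduce the study's data, only the mechanics of the comparison.

```python
# How an AUC comparison like the one above is computed, using synthetic stand-in
# scores (the study's actual ECG and blood-test data are not reproduced here).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1600
has_cardiomyopathy = rng.integers(0, 2, n)  # synthetic ground-truth labels

# Synthetic risk scores: the "AI-enhanced ECG" score separates the classes more
# cleanly than the "blood test" score, so its AUC comes out higher.
ecg_ai_score = has_cardiomyopathy * 1.7 + rng.normal(0, 1, n)
blood_test_score = has_cardiomyopathy * 1.2 + rng.normal(0, 1, n)

print("AI-enhanced ECG AUC:", round(roc_auc_score(has_cardiomyopathy, ecg_ai_score), 2))
print("Blood test AUC:     ", round(roc_auc_score(has_cardiomyopathy, blood_test_score), 2))
```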

But the benefits of Adedinsewo's MIT experience go beyond a single project. She says that the tools and strategies she acquired have helped her communicate the complexities of her work more effectively, extending its reach and impact. "I feel more equipped to explain the research and AI strategies in general to my clinical colleagues. Now, people reach out to me to ask, 'I want to work on this project. Can I use AI to answer this question?'" she says.

Looking to the AI-powered future

What's next for Adedinsewo's research? Taking AI mainstream within the field of cardiology. While AI tools are not currently widely used in evaluating Mayo Clinic patients, she believes they hold the potential to have a significant positive impact on clinical care.

"These tools are still in the research phase," Adedinsewo says. "But I'm hoping that within the next several months or years we can start to do more implementation research to see how well they improve care and outcomes for cardiac patients over time."

Bhaskar Pant, executive director of MIT Professional Education, says, "We at MIT Professional Education feel particularly gratified that we are able to provide practitioner-oriented insights and tools in machine learning and AI from expert MIT faculty to frontline health researchers such as Dr. Demi Adedinsewo, who are working on ways to markedly enhance clinical care and health outcomes in cardiac and other patient populations. This is also very much in keeping with MIT's mission of 'working with others for the betterment of humankind!'"


Beethoven’s Unfinished 10th Symphony Brought to Life by Artificial Intelligence – Scientific American

Teresa Carey: This is Scientific American's 60-Second Science. I'm Teresa Carey.

Every morning at five o'clock, composer Walter Werzowa would sit down at his computer to anticipate a particular daily e-mail. It came from six time zones away, where a team had been working all night (or day, rather) to draft Beethoven's unfinished 10th Symphony, almost two centuries after his death.

The e-mail contained hundreds of variations, and Werzowa listened to them all.

Werzowa: So by nine, 10 o'clock in the morning, it's like ... I'm already in heaven.

Carey: Werzowa was listening for the perfect tune, a sound that was unmistakably Beethoven.

But the phrases he was listening to weren't composed by Beethoven. They were created by artificial intelligence, a computer simulation of Beethoven's creative process.

Werzowa: There were hundreds of options, and some are better than others. But then there is that one which grabs you, and that was just a beautiful process.

Carey: Ludwig van Beethoven was one of the most renowned composers in Western music history. When he died in 1827, he left behind musical sketches and notes that hinted at a masterpiece. There was barely enough to make out a phrase, let alone a whole symphony. But that didn't stop people from trying.

In 1988 musicologist Barry Cooper made an attempt, but he didn't get beyond the first movement. Beethoven's handwritten notes on the second and third movements are meager, not enough to compose a symphony.

Werzowa: A movement of a symphony can have up to 40,000 notes. And some of his themes were three bars, like 20 notes. It's very little information.

Carey: Werzowa and a group of music experts and computer scientists teamed up to use machine learning to create the symphony. Ahmed Elgammal, the director of the Art and Artificial Intelligence Laboratory at Rutgers University, led the AI side of the team.

Elgammal: When you listen to music generated by AI to continue a theme of music, usually it's a very short few seconds, and then they start diverging and becoming boring and not interesting. They cannot really take that and compose a full movement of a symphony.

Carey: The team's first task was to teach the AI to think like Beethoven. To do that, they gave it Beethoven's complete works, his sketches and notes. They taught it Beethoven's process, like how he went from those iconic four notes to his entire Fifth Symphony.

[CLIP: Notes from Symphony no. 5]

Carey: Then they taught it to harmonize with a melody, compose a bridge between two sections, and assign instrumentation. With all that knowledge, the AI came as close to thinking like Beethoven as possible. But it still wasn't enough.

Elgammal: The way music generation using AI works is very similar to the way, when you write an e-mail, you find that the e-mail thread predicts what's the next word for you or what the rest of the sentence is for you.

Carey: But let the computer predict your words long enough, and eventually, the text will sound like gibberish.

Elgammal: It doesn't really generate something that can continue for a long time and be consistent. So that was the main challenge in dealing with this project: How can you take a motif or a short phrase of music that Beethoven wrote in his sketch and continue it into a segment of music?

Carey: That's where Werzowa's daily e-mails came in. On those early mornings, he was selecting what he thought was Beethoven's best. And, piece by piece, the team built a symphony.

Matthew Guzdial researches creativity and machine learning at the University of Alberta. He didn't work on the Beethoven project, but he says that AI is overhyped.

Guzdial: Modern AI, modern machine learning, is all about just taking small local patterns and replicating them. And it's up to a human to then take what the AI outputs and find the genius. The genius wasn't there. The genius wasn't in the AI. The genius was in the human who was doing the selection.

Carey: Elgammal wants to make the AI tool available to help other artists overcome writer's block or boost their performance. But both Elgammal and Werzowa say that the AI shouldn't replace the role of an artist. Instead, it should enhance their work and process.

Werzowa: Like every tool, you can use a knife to kill somebody or to save somebody's life, like with a scalpel in a surgery. So it can go any way. If you look at the kids, like kids are born creative. It's like everything is about being creative, creative and having fun. And somehow we're losing this. I think if we could sit back on a Saturday afternoon in our kitchen, and because maybe we're a little bit scared to make mistakes, ask the AI to help us to write us a sonata, song or whatever, in teamwork, life will be so much more beautiful.

Carey: The team released the 10th Symphony over the weekend. When asked who gets credit for writing it, Beethoven, the AI, or the team behind it, Werzowa insists it is a collaborative effort. But, suspending disbelief for a moment, it isn't hard to imagine that we're listening to Beethoven once again.

Werzowa: I dare to say that nobody knows Beethoven as well as the AI did, as well as the algorithm. I think music, when you hear it, when you feel it, when you close your eyes, it does something to your body. Close your eyes, sit back and be open for it, and I would love to hear what you felt after.

Carey: Thanks for listening. For Scientific American's 60-Second Science, I'm Teresa Carey.

[The above text is a transcript of this podcast.]


Predicting Traffic Crashes Before They Happen With Artificial Intelligence – SciTechDaily

A deep learning model was trained on historical crash data, road maps, satellite imagery, and GPS traces to enable high-resolution crash-risk maps that could lead to safer roads.

Today's world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. For many of our road-related advancements (GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs), our safety measures haven't quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B.

To get ahead of the uncertainty inherent to crashes, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence developed a deep learning model that predicts very high-resolution crash-risk maps. Fed a combination of historical crash data, road maps, satellite imagery, and GPS traces, the model produces risk maps that describe the expected number of crashes over a period of time in the future, to identify high-risk areas and predict future crashes.

A dataset that was used to create crash-risk maps covered 7,500 square kilometers from Los Angeles, New York City, Chicago and Boston. Among the four cities, L.A. was the most unsafe, since it had the highest crash density, followed by New York City, Chicago, and Boston. Credit: Image courtesy of MIT CSAIL.

Typically, these types of risk maps are captured at much lower resolutions that hover around hundreds of meters, which means glossing over crucial details, since the roads become blurred together. These maps, though, use 5×5-meter grid cells, and the higher resolution brings newfound clarity: the scientists found that a highway road, for example, has a higher risk than nearby residential roads, and that ramps merging with and exiting the highway have an even higher risk than other roads.

"By capturing the underlying risk distribution that determines the probability of future crashes at all places, and without any historical data, we can find safer routes, enable auto insurance companies to provide customized insurance plans based on driving trajectories of customers, help city planners design safer roads, and even predict future crashes," says MIT CSAIL PhD student Songtao He, a lead author on a new paper about the research.

Even though car crashes are sparse, they cost about 3 percent of the world's GDP and are the leading cause of death in children and young adults. This sparsity makes inferring maps at such a high resolution a tricky task. Crashes at this level are thinly scattered: the average annual odds of a crash in a 5×5-meter grid cell are about one in 1,000, and crashes rarely happen at the same location twice. Previous attempts to predict crash risk have been largely historical, as an area would only be considered high-risk if there was a previous nearby crash.

To evaluate the model, the scientists used crashes and data from 2017 and 2018, and tested its performance at predicting crashes in 2019 and 2020. Many locations were identified as high-risk, even though they had no recorded crashes, and also experienced crashes during the follow-up years. Credit: Image courtesy of MIT CSAIL.

The team's approach casts a wider net to capture critical data. It identifies high-risk locations using GPS trajectory patterns, which give information about the density, speed, and direction of traffic, and satellite imagery that describes road structures, such as the number of lanes, whether there's a shoulder, or whether a large number of pedestrians are present. Then, even if a high-risk area has no recorded crashes, it can still be identified as high-risk based on its traffic patterns and topology alone.
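A rough sense of how per-cell risk can be scored from traffic and road features alone, without a crash history in that cell, is sketched below. The paper describes a deep model over imagery and GPS rasters; this simplified version uses a flat feature table and a gradient-boosted classifier on synthetic data.

```python
# Simplified sketch of scoring per-cell crash risk from traffic and road features.
# The paper describes a deep model over satellite imagery and GPS rasters; this toy
# version uses a flat feature table per grid cell and synthetic labels instead.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n_cells = 20_000

# Hypothetical per-cell features derived from GPS traces and map/imagery data.
traffic_density = rng.exponential(1.0, n_cells)  # relative volume of traffic
mean_speed = rng.normal(50, 15, n_cells)         # km/h
is_ramp = rng.integers(0, 2, n_cells)            # highway on/off ramp indicator
num_lanes = rng.integers(1, 5, n_cells)

X = np.column_stack([traffic_density, mean_speed, is_ramp, num_lanes])
# Synthetic "crash occurred" labels: rare overall and skewed toward busy ramps.
crash_probability = 0.0005 * (1 + 2 * traffic_density + 3 * is_ramp)
y = rng.random(n_cells) < crash_probability

model = GradientBoostingClassifier().fit(X, y)
# A cell with no crash history can still score as high-risk from its features alone.
busy_ramp_cell = np.array([[4.0, 80.0, 1, 3]])
print("Predicted crash probability:", model.predict_proba(busy_ramp_cell)[0, 1])
```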

To evaluate the model, the scientists used crashes and data from 2017 and 2018, and tested its performance at predicting crashes in 2019 and 2020. Many locations were identified as high-risk, even though they had no recorded crashes, and also experienced crashes during the follow-up years.

"Our model can generalize from one city to another by combining multiple clues from seemingly unrelated data sources. This is a step toward general AI, because our model can predict crash maps in uncharted territories," says Amin Sadeghi, a lead scientist at Qatar Computing Research Institute (QCRI) and an author on the paper. "The model can be used to infer a useful crash map even in the absence of historical crash data, which could translate to positive use for city planning and policymaking by comparing imaginary scenarios."

The dataset covered 7,500 square kilometers from Los Angeles, New York City, Chicago, and Boston. Among the four cities, L.A. was the most unsafe, since it had the highest crash density, followed by New York City, Chicago, and Boston.

"If people can use the risk map to identify potentially high-risk road segments, they can take action in advance to reduce the risk of the trips they take. Apps like Waze and Apple Maps have incident report features, but we're trying to get ahead of the crashes, before they happen," says He.

Reference: "Inferring high-resolution traffic accident risk maps based on satellite imagery and GPS trajectories" by Songtao He, Mohammad Amin Sadeghi, Sanjay Chawla, Mohammad Alizadeh, Hari Balakrishnan and Samuel Madden, ICCV 2021.

He and Sadeghi wrote the paper alongside Sanjay Chawla, research director at QCRI, and MIT professors of electrical engineering and computer science Mohammad Alizadeh, Hari Balakrishnan, and Sam Madden. They will present the paper at the 2021 International Conference on Computer Vision.


Create And Scale Complex Artificial Intelligence And Machine Learning Pipelines Anywhere With IBM CodeFlare – Forbes


To say that AI is complicated is an understatement. Machine learning, a subset of artificial intelligence, is a multifaceted process that integrates and scales mountains of data coming in different forms from various sources. The data are used to train machine learning models in order to develop insights and solutions from newly acquired, related data. For example, an image recognition model trained with several million dog and cat photos can efficiently classify a new image as either a cat or a dog.

A better way to build and manage machine learning models

Project CodeFlare

The development of machine learning models requires the coordination of many processes linked together into pipelines. Pipelines can handle data ingestion, scrubbing, and manipulation from varied sources for training and inference. Machine learning models use end-to-end pipelines to manage input and output data collection and processing.
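CodeFlare's own pipeline API is not shown here, but the general end-to-end pipeline pattern this paragraph describes can be illustrated with a plain scikit-learn pipeline, chaining scrubbing, transformation, and model-training steps.

```python
# The end-to-end pipeline pattern described above, illustrated with scikit-learn
# (not CodeFlare's own API): chained steps for scrubbing, transforming, and modeling.
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # data scrubbing
    ("scale", StandardScaler()),                   # manipulation / normalization
    ("model", LogisticRegression(max_iter=1000)),  # training step
])
pipeline.fit(X_train, y_train)
print("Held-out accuracy:", round(pipeline.score(X_test, y_test), 3))
```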

To deal with the extraordinary growth of AI and its ever-increasing complexity, IBM created an open-source framework called CodeFlare to deal with AI's complex pipeline requirements. CodeFlare simplifies the integration, scaling, and acceleration of complex multi-step analytics and machine learning pipelines on the cloud. Hybrid cloud deployment is one of the critical design points for CodeFlare, which, using OpenShift, can be deployed from on-premises environments to public clouds to the edge.

It is important to note that CodeFlare is not currently a generally available product, and IBM has yet to commit to a timeline for it becoming one. Nevertheless, CodeFlare is available as an open-source project. And, as an evolving project, some aspects of orchestration and automation are still works in progress. At this stage, issues can be reported through the public GitHub project. IBM invites community engagement through issue and bug reports, which will be handled on a best-effort basis.

CodeFlare's main features are:

Technology

CodeFlare is built on top of Ray, an open-source distributed computing framework for machine learning applications. According to IBM, CodeFlare extends the capabilities of Ray by adding specific elements to make scaling workflows easier. CodeFlare pipelines run on a serverless platform using IBM Cloud Code Engine and Red Hat OpenShift. This platform gives CodeFlare the flexibility to be deployed just about anywhere.
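Since CodeFlare builds on Ray, a minimal Ray example (plain Ray tasks, not CodeFlare's own pipeline abstractions) gives a feel for the distributed execution model being scaled.

```python
# Minimal Ray example showing the distributed execution model CodeFlare builds on.
# These are plain Ray tasks, not CodeFlare's own pipeline abstractions.
import ray

ray.init()  # starts a local Ray instance; in production this could connect to a cluster

@ray.remote
def featurize(batch):
    """A stand-in for one pipeline step, executed in parallel across batches."""
    return [x * x for x in batch]

batches = [list(range(i, i + 5)) for i in range(0, 20, 5)]
futures = [featurize.remote(b) for b in batches]  # schedule the steps in parallel
results = ray.get(futures)                        # gather results when they are ready
print(results)

ray.shutdown()
```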

Emerging workflows

Emerging AI/ML workflows pose new challenges

CodeFlare can integrate emerging workflows with complex pipelines that require the integration and coordination of different tools and runtimes. It is also designed to scale complex pipelines such as multi-step NLP, complex time series and forecasting, reinforcement learning, and AI workbenches. The framework can integrate, run, and scale heterogeneous pipelines that use data from multiple sources and require different treatments.

How much difference does CodeFlare make?

According to the IBM Research blog, CodeFlare significantly increases the efficiency of machine learning work. The blog states that one user applied the framework to analyze and optimize approximately 100,000 pipelines for training machine learning models. CodeFlare cut the time it took to execute each pipeline from 4 hours to 15 minutes, a roughly 16-fold speedup.

The research blog also indicates that CodeFlare can save scientists months of work on large pipelines, giving data teams more time for productive development work.

Wrapping up

Studies show that about 75% of prototype machine learning models fail to transition to production status despite large investments in artificial intelligence. The reasons for the low conversion rate range from poor project planning to weak collaboration and communication between AI data team members.

CodeFlare is a purpose-built platform that provides complete end-to-end pipeline visibility and analytics for a broad range of machine learning models and workflows. It provides a more straightforward way to integrate and scale full pipelines while offering a unified runtime and programming interface.

For those reasons, despite the historically high AI model failure rates, Moor Insights & Strategy believes that a high percentage of machine learning models built with CodeFlare pipelines will transition from experimental status to production status.

