Archive for the ‘Artificial Intelligence’ Category

Beethoven’s Unfinished 10th Symphony Brought to Life by Artificial Intelligence – Scientific American

Teresa Carey: This is Scientific American's 60-Second Science. I'm Teresa Carey.

Every morning at five o'clock, composer Walter Werzowa would sit down at his computer to await a particular daily e-mail. It came from six time zones away, where a team had been working all night (or day, rather) to draft Beethoven's unfinished 10th Symphony, almost two centuries after his death.

The e-mail contained hundreds of variations, and Werzowa listened to them all.

Werzowa: So by nine, 10 o'clock in the morning, it's like, I'm already in heaven.

Carey: Werzowa was listening for the perfect tune, a sound that was unmistakably Beethoven.

But the phrases he was listening to weren't composed by Beethoven. They were created by artificial intelligence, a computer simulation of Beethoven's creative process.

Werzowa: There were hundreds of options, and some are better than others. But then there is that one which grabs you, and that was just a beautiful process.

Carey: Ludwig van Beethoven was one of the most renowned composers in Western music history. When he died in 1827, he left behind musical sketches and notes that hinted at a masterpiece. There was barely enough to make out a phrase, let alone a whole symphony. But that didn't stop people from trying.

In 1988 musicologist Barry Cooper attempted it. But he didn't get beyond the first movement. Beethoven's handwritten notes on the second and third movements are meager, not enough to compose a symphony.

Werzowa: A movement of a symphony can have up to 40,000 notes. And some of his themes were three bars, like 20 notes. It's very little information.

Carey: Werzowa and a group of music experts and computer scientists teamed up to use machine learning to create the symphony. Ahmed Elgammal, the director of the Art and Artificial Intelligence Laboratory at Rutgers University, led the AI side of the team.

Elgammal: When you listen to music created by AI to continue a theme of music, usually it's a very short few seconds, and then they start diverging and becoming boring and not interesting. They cannot really take that and compose a full movement of a symphony.

Carey: The team's first task was to teach the AI to think like Beethoven. To do that, they gave it Beethoven's complete works, his sketches and notes. They taught it Beethoven's process, like how he went from those iconic four notes to his entire Fifth Symphony.

[CLIP: Notes from Symphony no. 5]

Carey: Then they taught it to harmonize with a melody, compose a bridge between two sections, and assign instrumentation. With all that knowledge, the AI came as close to thinking like Beethoven as possible. But it still wasn't enough.

Elgammal: The way music generation using AI works is very similar to the way, when you write an e-mail, you find that the e-mail client predicts what the next word is for you or what the rest of the sentence is for you.

Carey: But let the computer predict your words long enough, and eventually, the text will sound like gibberish.
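Elgammal's e-mail analogy can be made concrete with a toy autoregressive model. The sketch below is a generic illustration of the principle, not the team's actual system: it trains a bigram table on a tiny text, then samples one word at a time. Short continuations stay locally plausible, but longer ones loop and drift, because each step conditions only on the immediately preceding word.

```python
import random

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Sample a continuation one word at a time (autoregressively)."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: no known continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = ("the theme returns and the theme develops and the movement ends "
          "and the theme returns again")
model = train_bigrams(corpus)
print(generate(model, "the", 12))
```

Every adjacent pair in the output is locally valid, yet the whole has no long-range plan: the same limitation, writ small, that the team had to overcome to sustain a full movement.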

Elgammal: It doesn't really generate something that can continue for a long time and be consistent. So that was the main challenge in dealing with this project: How can you take a motif or a short phrase of music that Beethoven wrote in his sketch and continue it into a segment of music?

Carey: That's where Werzowa's daily e-mails came in. On those early mornings, he was selecting what he thought was Beethoven's best. And, piece by piece, the team built a symphony.

Matthew Guzdial researches creativity and machine learning at the University of Alberta. He didn't work on the Beethoven project, but he says that AI is overhyped.

Guzdial: Modern AI, modern machine learning, is all about just taking small local patterns and replicating them. And it's up to a human to then take what the AI outputs and find the genius. The genius wasn't there. The genius wasn't in the AI. The genius was in the human who was doing the selection.

Carey: Elgammal wants to make the AI tool available to help other artists overcome writer's block or boost their performance. But both Elgammal and Werzowa say that the AI shouldn't replace the role of an artist. Instead, it should enhance their work and process.

Werzowa: Like every tool, you can use a knife to kill somebody or to save somebody's life, like with a scalpel in a surgery. So it can go any way. If you look at the kids, like kids are born creative. It's like everything is about being creative, creative and having fun. And somehow we're losing this. I think if we could sit back on a Saturday afternoon in our kitchen, and because maybe we're a little bit scared to make mistakes, ask the AI to help us to write us a sonata, song or whatever, in teamwork, life will be so much more beautiful.

Carey: The team released the 10th Symphony over the weekend. When asked who gets credit for writing it, Beethoven, the AI, or the team behind it, Werzowa insists it is a collaborative effort. But, suspending disbelief for a moment, it isn't hard to imagine that we're listening to Beethoven once again.

Werzowa: I dare to say that nobody knows Beethoven as well as the AI did, as well as the algorithm. I think music, when you hear it, when you feel it, when you close your eyes, it does something to your body. Close your eyes, sit back and be open for it, and I would love to hear what you felt after.

Carey: Thanks for listening. For Scientific American's 60-Second Science, I'm Teresa Carey.

[The above text is a transcript of this podcast.]

Predicting Traffic Crashes Before They Happen With Artificial Intelligence – SciTechDaily

A deep learning model was trained on historical crash data, road maps, satellite imagery, and GPS traces to produce high-resolution crash-risk maps that could lead to safer roads.

Today's world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. For many of our road-related advancements (GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs), our safety measures haven't quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B.

To get ahead of the uncertainty inherent to crashes, scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence developed a deep learning model that predicts very high-resolution crash risk maps. Fed a combination of historical crash data, road maps, satellite imagery, and GPS traces, the risk maps describe the expected number of crashes over a period of time in the future, to identify high-risk areas and predict future crashes.

Typically, these types of risk maps are captured at much lower resolutions, hovering around hundreds of meters, which means glossing over crucial details as roads blur together. These maps, though, use 5-by-5-meter grid cells, and the higher resolution brings newfound clarity: the scientists found that a highway road, for example, has a higher risk than nearby residential roads, and that ramps merging with and exiting the highway have an even higher risk than other roads.

"By capturing the underlying risk distribution that determines the probability of future crashes at all places, and without any historical data, we can find safer routes, enable auto insurance companies to provide customized insurance plans based on driving trajectories of customers, help city planners design safer roads, and even predict future crashes," says MIT CSAIL PhD student Songtao He, a lead author on a new paper about the research.

Even though car crashes are sparse, they cost about 3 percent of the world's GDP and are the leading cause of death in children and young adults. This sparsity makes inferring maps at such a high resolution a tricky task. Crashes at this level are thinly scattered (the average annual odds of a crash in a 5-by-5-meter grid cell are about one in 1,000) and they rarely happen at the same location twice. Previous attempts to predict crash risk have been largely historical: an area would only be considered high-risk if there was a previous nearby crash.

The team's approach casts a wider net to capture critical data. It identifies high-risk locations using GPS trajectory patterns, which give information about density, speed, and direction of traffic, and satellite imagery that describes road structures, such as the number of lanes, whether there's a shoulder, or if there's a large number of pedestrians. Then, even if a high-risk area has no recorded crashes, it can still be identified as high-risk, based on its traffic patterns and topology alone.
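The idea of scoring cells that have no crash history can be sketched in a few lines. The blend below is a deliberately simplified stand-in for the CSAIL deep model: normalized historical crash counts per 5-by-5-meter cell are combined with a hypothetical per-cell feature score (standing in for the GPS-density and satellite-derived road-structure signals), so a ramp-like cell with no recorded crashes can still rank as high-risk. The weights, scores, and linear blend are illustrative assumptions, not the paper's method.

```python
CELL = 5  # grid resolution in meters, matching the 5-by-5-meter cells above

def cell_of(x, y):
    """Map a coordinate in meters to its grid cell."""
    return (int(x // CELL), int(y // CELL))

def risk_map(crashes, feature_score, w_hist=0.4, w_feat=0.6):
    """Blend normalized historical crash counts with feature-based scores.

    crashes:       list of (x, y) crash locations in meters
    feature_score: dict cell -> score in [0, 1], standing in for what the
                   real model learns from GPS traces and satellite imagery
    """
    hist = {}
    for x, y in crashes:
        c = cell_of(x, y)
        hist[c] = hist.get(c, 0) + 1
    peak = max(hist.values(), default=1)
    cells = set(hist) | set(feature_score)
    return {c: w_hist * hist.get(c, 0) / peak +
               w_feat * feature_score.get(c, 0.0)
            for c in cells}

# One old crash in a residential cell vs. a crash-free ramp-like cell:
crashes = [(3, 4)]                        # falls in cell (0, 0)
features = {(0, 0): 0.1, (7, 2): 0.9}     # (7, 2): heavy merging traffic
risks = risk_map(crashes, features)       # ramp cell outranks residential
```

Note the design point this mirrors: because the feature term does not depend on crash history, a cell can be flagged on traffic patterns and topology alone, which is exactly what distinguishes this work from purely historical approaches.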

To evaluate the model, the scientists used crashes and data from 2017 and 2018, and tested its performance at predicting crashes in 2019 and 2020. Many locations were identified as high-risk, even though they had no recorded crashes, and also experienced crashes during the follow-up years.

"Our model can generalize from one city to another by combining multiple clues from seemingly unrelated data sources. This is a step toward general AI, because our model can predict crash maps in uncharted territories," says Amin Sadeghi, a lead scientist at Qatar Computing Research Institute (QCRI) and an author on the paper. "The model can be used to infer a useful crash map even in the absence of historical crash data, which could translate to positive use for city planning and policymaking by comparing imaginary scenarios."

The dataset covered 7,500 square kilometers from Los Angeles, New York City, Chicago, and Boston. Among the four cities, L.A. was the most unsafe, since it had the highest crash density, followed by New York City, Chicago, and Boston.

"If people can use the risk map to identify potentially high-risk road segments, they can take action in advance to reduce the risk of trips they take. Apps like Waze and Apple Maps have incident feature tools, but we're trying to get ahead of the crashes before they happen," says He.

Reference: "Inferring High-Resolution Traffic Accident Risk Maps Based on Satellite Imagery and GPS Trajectories" by Songtao He, Mohammad Amin Sadeghi, Sanjay Chawla, Mohammad Alizadeh, Hari Balakrishnan and Samuel Madden, ICCV 2021.

He and Sadeghi wrote the paper alongside Sanjay Chawla, research director at QCRI, and MIT professors of electrical engineering and computer science Mohammad Alizadeh, Hari Balakrishnan, and Sam Madden. They will present the paper at the 2021 International Conference on Computer Vision.

Create And Scale Complex Artificial Intelligence And Machine Learning Pipelines Anywhere With IBM CodeFlare – Forbes

To say that AI is complicated is an understatement. Machine learning, a subset of artificial intelligence, is a multifaceted process that integrates and scales mountains of data that comes in different forms from various sources. Data is used to train machine learning models in order to develop insights and solutions from newly acquired related data. For example, an image recognition model trained with several million dog and cat photos can efficiently classify a new image as either a cat or a dog.

A better way to build and manage machine learning models

Project CodeFlare

The development of machine learning models requires the coordination of many processes linked together in pipelines. Pipelines can handle data ingestion, scrubbing, and manipulation from varied sources for training and inference. Machine learning models use end-to-end pipelines to manage input and output data collection and processing.

To deal with the extraordinary growth of AI and its ever-increasing complexity, IBM created an open-source framework called CodeFlare to address AI's complex pipeline requirements. CodeFlare simplifies the integration, scaling, and acceleration of complex multi-step analytics and machine learning pipelines on the cloud. Hybrid cloud deployment is one of the critical design points for CodeFlare, which, using OpenShift, can be deployed anywhere from on-premises infrastructure to public clouds to the edge.

It is important to note that CodeFlare is not currently a generally available product, and IBM has yet to commit to a timeline for it becoming one. Nevertheless, CodeFlare is available as an open-source project. And, as an evolving project, some aspects of orchestration and automation are still a work in progress. At this stage, issues can be reported through the public GitHub project. IBM invites community engagement through issue and bug reports, which will be handled on a best-effort basis.

CodeFlare's main features are:

Technology

CodeFlare is built on top of Ray, an open-source distributed computing framework for machine learning applications. According to IBM, CodeFlare extends the capabilities of Ray by adding specific elements to make scaling workflows easier. CodeFlare pipelines run on a serverless platform using IBM Cloud Code Engine and Red Hat OpenShift. This platform provides CodeFlare the flexibility to be deployed just about anywhere.
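The fan-out pattern CodeFlare automates can be illustrated with nothing but the standard library. The sketch below runs many small, independent pipelines (ingest, transform, aggregate) across a worker pool; it uses `ThreadPoolExecutor` purely as a local stand-in for the Ray/Code Engine cluster, and the pipeline steps and config fields are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(config):
    """One toy pipeline: ingest -> transform -> aggregate."""
    data = list(range(config["n"]))              # ingest (dummy data)
    data = [x * config["scale"] for x in data]   # transform
    return sum(data)                             # aggregate/score

def run_all(configs, workers=4):
    """Fan independent pipelines out across a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_pipeline, configs))

# A real sweep would pass thousands of configs; a framework like
# CodeFlare does this scheduling on a cluster instead of local threads.
configs = [{"n": 10, "scale": s} for s in range(1, 5)]
results = run_all(configs)  # [45, 90, 135, 180]
```

The point of the sketch is the shape of the problem, not the mechanics: once each pipeline is an independent function of its config, scaling is a scheduling concern, which is the layer CodeFlare provides on top of Ray.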

Emerging workflows

Emerging AI/ML workflows pose new challenges

CodeFlare can integrate emerging workflows with complex pipelines that require integration and coordination of different tools and runtimes. It is also designed to scale complex pipelines such as multi-step NLP, complex time series and forecasting, reinforcement learning, and AI workbenches. The framework can integrate, run, and scale heterogeneous pipelines that use data from multiple sources and require different treatments.

How much difference does CodeFlare make?

According to the IBM Research blog, CodeFlare significantly increases the efficiency of machine learning. The blog states that one user relied on the framework to analyze and optimize approximately 100,000 pipelines for training machine learning models. CodeFlare cut the time it took to execute each pipeline from four hours to 15 minutes, a roughly 16-fold speedup.

The research blog also indicates that CodeFlare can save scientists months of work on large pipelines, giving data teams more time for productive development work.

Wrapping up

Studies show that about 75% of prototype machine learning models fail to transition to production status despite large investments in artificial intelligence. Reasons for the low conversion rate range from poor project planning to weak collaboration and communication between AI data team members.

CodeFlare is a purpose-built platform that provides complete end-to-end pipeline visibility and analytics for a broad range of machine learning models and workflows. It provides a more straightforward way to integrate and scale full pipelines while offering a unified runtime and programming interface.

For those reasons, despite historically high AI model failure rates, Moor Insights & Strategy believes that a high percentage of machine learning models using CodeFlare pipelines will transition from experimental status to production status.


Transactions in the Age of Artificial Intelligence: Risks and Considerations – JD Supra

Artificial intelligence (AI) has become a major focus of, and the most valuable asset in, many technology transactions, and the competition for top AI companies has never been hotter. According to CB Insights, there have been over 1,000 AI acquisitions since 2010. The COVID pandemic interrupted this trajectory, causing acquisitions to fall from 242 in 2019 to 159 in 2020. However, there are signs of a return, with over 90 acquisitions in the AI space as of June 2021, according to the latest CB Insights data. With tech giants helping drive the demand for AI, smaller AI startups are becoming increasingly attractive acquisition targets.

AI companies have their own set of specialized risks that may not be addressed if buyers approach the transaction with their standard process. AI's reliance on data and the dynamic nature of its insights highlight the shortcomings of standard agreement language and the risks of not tailoring agreements to address AI-specific issues. Sophisticated parties should consider crafting agreements specifically tailored to AI and its unique attributes and risks, which lend the parties a more accurate picture of an AI system's output and predictive capabilities and can assist them in assessing and addressing the risks associated with the transaction. These risks include:

Freedom to use training data may be curtailed by contracts with third parties or other limitations regarding open source or scraped data.

Clarity around training data ownership can be complex and uncertain. Training data may be subject to ownership claims by third parties, be subject to third-party infringement claims, have been improperly obtained, or be subject to privacy issues.

To the extent that training data is subject to use limitations, a company may be restricted in a variety of ways including (i) how it commercializes and licenses the training data, (ii) the types of technology and algorithms it is permitted to develop with the training data and (iii) the purposes to which its technology and algorithms may be applied.

Standard representations on ownership of IP and IP improvements may be insufficient when applied to AI transactions. Output data generated by algorithms, and the algorithms themselves trained from supplied training data, may be vulnerable to ownership claims by data providers and vendors. Further, a third-party data provider may contract that, as between the parties, it owns IP improvements, leaving companies struggling to distinguish ownership of their algorithms prior to using such third-party data from their improved algorithms after such use, as well as their ownership of, and ability to use, model-generated output data to continue to train and improve their algorithms.

Inadequate confidentiality or exclusivity provisions may leave an AI system's training data inputs and material technologies exposed to third parties, enabling competitors to use the same data and technologies to build similar or identical models. This is particularly the case when algorithms are developed using open-source or publicly available machine learning processes.

Additional maintenance covenants may be warranted because an algorithm's competitive value may atrophy if the algorithm is not designed to permit dynamic retraining, or if the user of the algorithm fails to maintain and retrain it with updated data feeds.

In addition to the above, legislative protection in the AI space has yet to fully mature, and until such time, companies should protect their IP, data, algorithms, and models, by ensuring that their transactions and agreements are specifically designed to address the unique risks presented by the use and ownership of training data, AI-based technology and any output data generated by such technology.

AI in Robotics: Robotics and Artificial Intelligence 2021 – Datamation

Artificial intelligence (AI) is driving the robotics market into new areas, including mobile robots on the factory floor, robots that can perform a large number of tasks rather than specializing in one, and robots that can keep track of inventory levels as well as fetch orders for delivery.

Such advanced functionality has raised the complexity of robotics. Hence the need for AI.

Artificial intelligence provides the ability to monitor many parameters in real time and make decisions. For example, an inventory robot has to know its own location, the location of all stock, stock levels, the sequence in which to retrieve items for orders, and the location of other robots on the floor; it must also navigate the site, change course when a human is near, take deliveries to shipping, keep track of everything, and more.

The mobile robot also has to interoperate with various shop floor systems, computer numerical control (CNC) equipment, and other industrial systems. AI helps all those disparate systems work together seamlessly by processing their various inputs in real time and coordinating action.

The autonomous robotic market alone is worth around $103 billion this year, according to Rob Enderle, an analyst at Enderle Group. He predicts that it will more than double by 2025 to $210 billion.

"It will only go vertical from there," Enderle said.

That's only one portion of the market. Another hot area is robotic process automation (RPA). It, too, is being integrated with AI to deal with high-volume, repeatable tasks. By handing these tasks over to robots, labor costs are reduced, workflows can be streamlined, and assembly processes are accelerated. Software can be written, for example, to take care of routine queries, calculations, and record keeping.

Historically, two different teams were needed: one for robotics and another for factory automation. The robotics team consists of specialized technicians with their own programming language to deal with the complex kinematics of multi-axis robots. Factory automation engineers, on the other hand, use programmable logic controllers (PLCs) and shop floor systems that utilize different programming languages. But software is now on the market that brings these two worlds together.

Further, better software and more sophisticated hardware have opened the door to a whole new breed of robot. While basic models operate on two axes, the latest breed of robotic machine with AI is capable of movement on six axes. These robots can be programmed either to carry out one task over and over with high accuracy and speed, or to execute complex tasks, such as coating or machining intricate components.

Honda's ASIMO has become something of a celebrity. This advanced humanoid robot has been programmed to walk like a human, maintain balance, and do backflips.

But now AI is being used to advance its capabilities with an eventual view toward autonomous motion.

"The difficulty is no longer building the robot but training it to deal with unstructured environments, like roads, open areas, and building interiors," Enderle said. "They are complex systems with massive numbers of actuators and sensors to move and perceive what is around them."

Sight Machine, the developer of a manufacturing data platform, has partnered with Nissan to use AI to perform anomaly detection on 300 robots working on an automated final assembly process.

This system provides predictions and root-cause analysis for downtime.

Siemens and AUTOParkit have formed a partnership to bring parking into the 21st century.

Using Siemens automation controls with AI, the AUTOParkit solution provides a safe valet service without the valet.

This fully automated parking solution can achieve 2:1 efficiency over a conventional parking approach, AUTOParkit says. It reduces parking-related fuel consumption by 83% and carbon emissions by 82%.

In such a complex system, specialized vehicle-specific hardware and software work together to provide a smooth and seamless parking experience that is far faster than traditional parking. Siemens controls use AI to pull it all together.

Kawasaki has a large offering of robots that are primarily used in fixed installations. But now it is working on robotic mobility, and that takes AI.

"For stationary robots to work seamlessly with mobile robots, it is essential that they can exchange information accurately and without failure," said Samir Patel, senior director of robotics engineering at Kawasaki Robotics USA.

"To meet such integration requirements, Kawasaki robot controllers offer numerous options, including Ethernet TCP/IP, EtherNet/IP, EtherCAT, PROFIBUS, PROFINET, and DeviceNet. These options not only allow our robots to communicate with mobile robots, but also allow communication to supervisory servers, PLCs, vision systems, sensors, and other devices."

With so many data sources to communicate with, and with instantaneous responses needed to maintain operational efficiency and safety, AI is needed.

"Over time, each robot accumulates data, such as joint load, speed, temperature, and cycle count, which periodically gets transferred to the network server," Patel said. "In turn, the server running an application, such as Kawasaki's Trend Manager, can analyze the data for performance and failure prediction."
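A minimal version of that kind of trend analysis can be sketched with a z-score check: flag readings that drift far from a robot's own baseline. This is a generic illustration, not Kawasaki's actual Trend Manager logic; the threshold and sample data are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.0):
    """Return indices of readings whose z-score exceeds `threshold`.

    samples: numeric telemetry (e.g., joint load or motor temperature).
    A flagged index marks a reading worth inspecting before it becomes
    a failure. The 2.0 threshold is an illustrative choice.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:  # perfectly flat signal: nothing to flag
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

temps = [61, 62, 60, 63, 61, 62, 95, 61]   # one overheating spike
print(flag_anomalies(temps))                # [6]
```

In practice the baseline would be per robot and per parameter (joint load, speed, temperature, cycle count), recomputed as new telemetry arrives from the server.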

Sight Machine, in close cooperation with Komatsu, has developed a system that can rapidly analyze 500 million data points from 600 welding robots.

The AI-based system can provide early warning of potential downtime and other welding faults.
