Archive for the ‘Ai’ Category

New Lockheed Martin system will manage satellite constellations … – Space.com

Lockheed Martin just announced its "Operations Center of the Future," a new facility that the company hopes will make its growing constellations of Earth-orbiting satellites easier to manage.

Situated near Denver, this facility is a major innovation in satellite operations, company representatives said, with the capacity to handle multiple space missions at once through a web-enabled, secure cloud framework.

The operations center is fully funded by the company and uses Lockheed's Compass Mission Planning and Horizon Command and Control software systems. These software platforms have already been put into service on more than 50 spacecraft missions, ranging from government contract work to research and commercial ventures.

With this ground system incorporated into the new facility, the company says an individual operator could potentially oversee both individual satellites and entire heterogeneous constellations of varying designs from virtually anywhere with an internet connection.

Maria Demaree, vice president and general manager at Lockheed Martin Space's National Security Space division, praised the facility's advanced technology in a Lockheed statement. "The Operations Center of the Future's next-generation AI, automation and cloud capabilities enable operators to remain closer to the mission than ever before, regardless of their physical location," Demaree said. "Remote operators can instantly receive timely mission alerts about satellite operations, and then securely log in to make smart, fast decisions from virtually anywhere."

The capability of the facility's ground system was on display earlier this year when it successfully flew Lockheed's In-space Upgrade Satellite System demonstrator, which was designed to highlight the potential for small satellites to maintain infrastructure in space and even enhance it with new functionality post-deployment.

A major feature of the center is its mix of automation, AI, and machine learning, which Lockheed says will help manage the rapidly increasing number (and complexity) of satellite constellations being deployed in an already crowded low Earth orbit.

The company also touted the facility's lean operations staff thanks to a flexible software framework that can be refactored and adjusted to suit different mission types and needs.

How well it does all that remains to be seen, obviously, and there's plenty of reason for skepticism at this point. With every industry making moves to incorporate AI and machine learning into their products and services, many companies with big AI plans have so far failed to demonstrate their real-world utility beyond the hype.

Lockheed Martin might very well have developed a system with minimal human interaction that can manage the maddeningly complex trajectories of tens of thousands of satellites in real time, and it would be quite a feat, if so. We might also end up with a very sophisticated version of ChatGPT in Mission Control making stuff up as it goes along, just with satellites flying through streams of space junk.

Whatever the case may be, we'll know soon enough, as Lockheed's Operations Center of the Future is expected to play a starring role in directing the company's forthcoming space missions, including Pony Express 2, TacSat, and the LM 400 on-orbit tech demonstration.

Continued here:

New Lockheed Martin system will manage satellite constellations ... - Space.com

Unleashing the power of AI to track animal behavior – Salk Institute

September 26, 2023

Salk scientists create GlowTrack to track human and animal behavior with better resolution and more versatility

LA JOLLA – Movement offers a window into how the brain operates and controls the body. From clipboard-and-pen observation to modern artificial intelligence-based techniques, tracking human and animal movement has come a long way. Current cutting-edge methods utilize artificial intelligence to automatically track parts of the body as they move. However, training these models is still time-intensive and limited by the need for researchers to manually mark each body part hundreds to thousands of times.

Now, Associate Professor Eiman Azim and team have created GlowTrack, a non-invasive movement tracking method that uses fluorescent dye markers to train artificial intelligence. GlowTrack is robust, time-efficient, and high definition, capable of tracking a single digit on a mouse's paw or hundreds of landmarks on a human hand.

The technique, published in Nature Communications on September 26, 2023, has applications spanning from biology to robotics to medicine and beyond.

"Over the last several years, there has been a revolution in tracking behavior as powerful artificial intelligence tools have been brought into the laboratory," says Azim, senior author and holder of the William Scandling Developmental Chair. "Our approach makes these tools more versatile, improving the ways we capture diverse movements in the laboratory. Better quantification of movement gives us better insight into how the brain controls behavior and could aid in the study of movement disorders like amyotrophic lateral sclerosis (ALS) and Parkinson's disease."

Current methods to capture animal movement often require researchers to manually and repeatedly mark body parts on a computer screen, a time-consuming process subject to human error and time constraints. Human annotation means that these methods can usually only be used in a narrow testing environment, since artificial intelligence models specialize to the limited amount of training data they receive. For example, if the light, orientation of the animal's body, camera angle, or any number of other factors were to change, the model would no longer recognize the tracked body part.

To address these limitations, the researchers used fluorescent dye to label parts of the animal or human body. With these invisible fluorescent dye markers, an enormous amount of visually diverse data can be created quickly and fed into the artificial intelligence models without the need for human annotation. Once fed this robust data, these models can be used to track movements across a much more diverse set of environments and at a resolution that would be far more difficult to achieve with manual human labeling.
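To make the approach concrete, here is a minimal, illustrative sketch of the general idea behind marker-based auto-labeling. It is not the authors' published GlowTrack pipeline: the paired fluorescent and visible-light frames, the threshold, and the array shapes are all assumptions. The point is simply that a dye spot visible only under fluorescent illumination can be located automatically, and its centroid can serve as the keypoint label for the matching visible-light frame.

```python
# Illustrative sketch only: auto-generating keypoint labels from a fluorescent
# marker channel, then pairing them with visible-light frames as training data.
# This is NOT the published GlowTrack pipeline; inputs and threshold are assumed.

import numpy as np

def label_from_fluorescence(uv_frame: np.ndarray, threshold: float = 0.8):
    """Return the (row, col) centroid of the brightest dye spot, or None."""
    mask = uv_frame > threshold * uv_frame.max()
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # marker occluded or out of frame
    return float(ys.mean()), float(xs.mean())

def build_training_set(uv_frames, visible_frames):
    """Pair each visible-light frame with an automatically derived keypoint."""
    images, keypoints = [], []
    for uv, vis in zip(uv_frames, visible_frames):
        kp = label_from_fluorescence(uv)
        if kp is not None:
            images.append(vis)
            keypoints.append(kp)
    return np.stack(images), np.array(keypoints)

# The resulting (image, keypoint) pairs can then train any standard
# pose-estimation model, with no human clicking required to produce labels.
```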

This opens the door for easier comparison of movement data between studies, as different laboratories can use the same models to track body movement across a variety of situations. According to Azim, comparison and reproducibility of experiments are essential in the process of scientific discovery.

"Fluorescent dye markers were the perfect solution," says first author Daniel Butler, a Salk bioinformatics analyst. "Like the invisible ink on a dollar bill that lights up only when you want it to, our fluorescent dye markers can be turned on and off in the blink of an eye, allowing us to generate a massive amount of training data."

In the future, the team is excited to support diverse applications of GlowTrack and pair its capabilities with other tracking tools that reconstruct movements in three dimensions, and with analysis approaches that can probe these vast movement datasets for patterns.

"Our approach can benefit a host of fields that need more sensitive, reliable, and comprehensive tools to capture and quantify movement," says Azim. "I am eager to see how other scientists and non-scientists adopt these methods, and what unique, unforeseen applications might arise."

Other authors include Alexander Keim and Shantanu Ray of Salk.

The work was supported by the UC San Diego CMG Training Program, a Jesse and Caryl Philips Foundation Award, the National Institutes of Health (R00NS088193, DP2NS105555, R01NS111479, RF1NS128898, and U19NS112959), the Searle Scholars Program, the Pew Charitable Trusts, and the McKnight Foundation.

DOI: https://doi.org/10.1038/s41467-023-41565-3

View original post here:

Unleashing the power of AI to track animal behavior - Salk Institute

Your Boss’s Spyware Could Train AI to Replace You – WIRED

David Autor, a professor of economics at MIT, says he also thinks AI could be trained in this way. While there is a lot of employee surveillance happening in the corporate world, and some of the data that's collected from it could be used to help train AI programs, simply learning from how people are interacting with AI tools throughout the workday could help train those programs to replace workers.

"They will learn from the workflow in which they're engaged," Autor says. "Often people will be in the process of working with a tool, and the tool will be learning from that interaction."

Whether you're training an AI tool directly by interacting with it throughout the day, or the data you're producing while you work is simply being used to create an AI program that can do the work you're doing, there are multiple ways in which a worker could inadvertently end up training an AI program to replace them. Even if the program doesn't end up being incredibly effective, a lot of companies might be happy with an AI program that's good enough because it doesn't require a salary and benefits.

"I think there are a lot of discretionary white-collar jobs where you're kind of using a mixture of hard information and soft information and trying to make advanced decisions," Autor says. "People aren't that good at that, machines aren't that good at that, but probably machines can be pretty much as good as people."

Autor says he doesn't see a labor market apocalypse coming. Many workers won't be entirely replaced but will simply have their jobs changed by AI, Autor says, while some workers will certainly be made redundant by advancements in AI. The problem there, he says, is what happens to those workers after they're no longer able to find a well-paying job with the education and skill sets they have.

"It's not that we're going to run out of work. It's much more that people are doing something they're good at, and that thing goes away. And then they end up doing a kind of generic activity that everybody's good at, which means it pays very little: food service, cleaning, security, vehicle driving," Autor says. "These are low-paying activities."

Once someone's automated out of a well-paying job, they can end up slipping through the cracks. Autor says we've seen this happen in the past.

"The hollowing out of manufacturing and office work over the past 40 years has definitely put downward pressure on the wages of people who would do that type of work, and it's not because they're doing it now at a lower rate of pay. It's because they're not doing it," Autor says.

Frey says politicians will need to offer solutions to those who fall through the cracks to prevent the destabilization of the economy and society. That would likely include offering social safety net programs to those affected. Frey has written extensively on the effects of the first Industrial Revolution, and he says there are lessons to be learned there. In Britain, for example, there was a program called the Poor Laws, where people who were harmed by automation were given financial relief.

See original here:

Your Boss's Spyware Could Train AI to Replace You - WIRED

Confessions of a Viral AI Writer – WIRED

A thought experiment occurred to me at some point, a way to disentangle AI's creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

That is, what if you could build an AI model that elegantly sidestepped all the ethical problems that seem inherent to AI: the lack of consent in training, the reinforcement of bias, the poorly paid gig workforce supporting it, the cheapening of artists' labor? I imagined how rich and beautiful a model like this could be. I fantasized about the emergence of new forms of communal creative expression through human interaction with this model.

Then I thought about the resources you'd need to build it: prohibitively high, for the foreseeable future and maybe forevermore, for my hypothetical cadre of anti-capitalists. I thought about how reserving the model for writers would require policing who's a writer and who's not. And I thought about how, if we were to commit to our stance, we would have to prohibit the use of the model to generate individual profit for ourselves, and that this would not be practicable for any of us. My model, then, would be impossible.

In July, I was finally able to reach Yu, Sudowrite's cofounder. Yu told me that he's a writer himself; he got started after reading the literary science fiction writer Ted Chiang. In the future, he expects AI to be an uncontroversial element of a writer's process. "I think maybe the next Ted Chiang, the young Ted Chiang who's 5 years old right now, will think nothing of using AI as a tool," he said.

Recently, I plugged this question into ChatGPT: What will happen to human society if we develop a dependence on AI in communication, including the creation of literature? It spit out a numbered list of losses: traditional literature's human touch, jobs, literary diversity. But in its conclusion, it subtly reframed the terms of discussion, noting that AI isn't all bad: "Striking a balance between the benefits of AI-driven tools and preserving the essence of human creativity and expression would be crucial to maintain a vibrant and meaningful literary culture." I asked how we might arrive at that balance, and another dispassionate list, ending with another both-sides-ist kumbaya, appeared.

At this point, I wrote, maybe trolling the bot a little: What about doing away with the use of AI for communication altogether? I added: Please answer without giving me a list. I ran the question over and over, three, four, five, six times, and every time, the response came in the form of a numbered catalog of pros and cons.

It infuriated me. The AI model that had helped me write "Ghosts" all those months ago, the one that had conjured my sister's hand and let me hold it in mine, was dead. Its own younger sister had the witless efficiency of a stapler. But then, what did I expect? I was conversing with a software program created by some of the richest, most powerful people on earth. What this software uses language for could not be further from what writers use it for. I have no doubt that AI will become more powerful in the coming decades, and, along with it, the people and institutions funding its development. In the meantime, writers will still be here, searching for the words to describe what it felt like to be human through it all. Will we read them?

This article appears in the October 2023 issue.

Read this article:

Confessions of a Viral AI Writer - WIRED

McKinsey launches an open-source ecosystem for digital and AI … – McKinsey

September 26, 2023 – Today, we are pleased to announce the launch of a McKinsey open-source ecosystem that will host products from across the firm, including some of our leading-edge technologies and IP in AI (including generative AI), digital, and cloud.

The first major release in our collection is Vizro, a new component from our QuantumBlack Horizon suite, which helps users visualize data from their AI models.

In addition to Vizro, the new ecosystem will host CausalNex, a tool for building cause-and-effect models that has been available to the public since 2020 through the QuantumBlack Labs GitHub organization.
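For context, CausalNex expresses cause-and-effect hypotheses as a directed graph built on top of NetworkX. Below is a minimal sketch assuming the publicly documented StructureModel API; the variable names and edges are illustrative and not drawn from McKinsey's announcement.

```python
# Illustrative only: hypothesized causal links between made-up business variables.
from causalnex.structure import StructureModel

sm = StructureModel()
sm.add_edges_from([
    ("marketing_spend", "site_traffic"),
    ("site_traffic", "sales"),
    ("discount", "sales"),
])

print(sm.edges)  # StructureModel extends networkx.DiGraph, so graph methods apply
```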

"Open source has become a foundational element of how we deliver organizational transformation for our clients globally, because it enables technologists to adapt our methodologies and toolkits to meet their distinct needs," explains Alexander Sukharevsky, a senior partner and global leader of QuantumBlack, AI by McKinsey. "It is also our way to contribute to sustainable and inclusive growth and narrow the digital divide."

The open-source ecosystem can be accessed at GitHub.

Joel Schwarzmann, principal product manager at QuantumBlack, is a maintainer of our open-source offerings.

This milestone builds on our open-source momentum. In 2022 we released Kedro, a Python toolbox that streamlines the creation of machine-learning pipelines, to the Linux Foundation's AI & Data incubator so it could evolve as an open standard. Early this year, we acquired Iguazio, whose open-source projects Nuclio and MLRun are integral to its strategy.
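As a point of reference, Kedro describes a workflow as named nodes wired together into a pipeline. The following is a minimal sketch assuming Kedro's standard node/pipeline API; the functions and dataset names are placeholders for illustration only.

```python
# A tiny two-step pipeline sketch: clean raw data, then "train" on the result.
# Dataset names ("raw_data", "clean_data", "model") would normally be defined
# in a Kedro data catalog; here they are illustrative placeholders.
from kedro.pipeline import node, pipeline

def clean(raw_data):
    # drop incomplete rows (assumes a pandas DataFrame input)
    return raw_data.dropna()

def train(clean_data):
    # placeholder "model": just summary statistics of the cleaned data
    return {"column_means": clean_data.mean()}

ml_pipeline = pipeline([
    node(clean, inputs="raw_data", outputs="clean_data", name="clean_node"),
    node(train, inputs="clean_data", outputs="model", name="train_node"),
])
```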

"We are on a journey to be known for our technology capabilities as much as our strategic advice," says Rodney Zemmel, a senior partner and global leader of McKinsey Digital. "This new ecosystem illustrates the firm's belief and investment in open-source standards and gives all of our 6,000+ technologists the opportunity to contribute their expertise and help clients gain the most value from their technology investments."

Many businesses are struggling to scale their AI projects. In a recent survey, only 3 percent of companies reported having embedded AI in at least five business functions. Organizations are finding it takes as long to develop the 15th model as it did the first. Generative AI use cases, which have grown exponentially, are especially challenging due to the complexity of managing very large datasets and models.

Our Horizon suite, announced in June, helps clients overcome these challenges and reduces the time it takes to realize value from their AI portfolios. It establishes a factory-like approach to delivering accurate data across all sources; building and monitoring scalable, integrated models; and ensuring transparency for quick, reliable decision making.

Vizro, the newest Horizon component, creates high-quality visualizations that allow users to better explore and analyze data from their models. In a matter of hours rather than weeks, teams can collaborate to define insights and present them to clients in live workshops or demos.

Tables and charts on Vizro

Before Vizro, building dashboards required much longer timeframes and often meant securing additional, scarce front-end engineering or design talent.

"What once required thousands of lines of code and extra staff to build can now be accomplished in a day," explains Joe Perkins, the product manager who led the ten-person team developing the tool.

Vizro components are plug-and-play to maximize flexibility and scaling. A high level of visual design, code and data quality, and best practices are built into the tool, which also integrates industry and functional knowledge and leverages the power of open-source tools such as Plotly and Dash.
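As a rough illustration of what "plug-and-play" looks like in practice, here is a minimal dashboard sketch based on Vizro's quickstart-style Page/Graph/Dashboard models; the sample dataset and chart choices are illustrative and not taken from the announcement.

```python
# Minimal Vizro dashboard sketch: one page, one Plotly Express scatter plot.
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro

df = px.data.iris()  # bundled sample dataset, standing in for model output

page = vm.Page(
    title="Model output explorer",
    components=[
        vm.Graph(
            figure=px.scatter(df, x="sepal_width", y="sepal_length", color="species")
        ),
    ],
)

dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()  # serves the dashboard locally in a browser
```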

"This accelerates the creation process and provides consistency and a high quality of output," Joe says. "It all goes to increasing the client's understanding and trust in the data and insights."

In addition to Vizro, the Horizon suite has been enhanced with a new command center functionality for AI initiatives. It helps technology leaders see the status, adoption, and impact of their overall AI projects down to the individual use case, as measured by relevant business KPIs.

Yetunde Dada, senior director of product management, leads the product and open-sourcing strategy for Horizon.

The tool will help leaders analyze the health and productivity of an organization's AI implementation, flagging roadblocks and opportunities for scaling.

"Often organizations don't know exactly how many use cases they are running, because they are dispersed across the divisions, let alone how productive they are," explains Matt Fitzpatrick, a senior partner and leader of QuantumBlack Labs. "With our command center, leaders can scan the landscape of their AI implementations and understand their value in terms that are meaningful to them, such as adoption status and ROI."

"The pace of AI development, in particular, has been stunning, and capturing and applying these new capabilities is extremely complex," says Alex Singla, a senior partner and global leader of QuantumBlack, AI by McKinsey. "It can only be done through the intense collaboration and accelerated learning that comes from working within the open-source community."

Going forward, Horizon will become our firm's platform for incubating, supporting, promoting, and ultimately open-sourcing more tools. Of the eleven Horizon components today, four are already available: Kedro and Vizro from QuantumBlack, AI by McKinsey, and MLRun and Nuclio from Iguazio.

The new open-source collection can be found on GitHub.

See the rest here:

McKinsey launches an open-source ecosystem for digital and AI ... - McKinsey