Archive for the ‘Machine Learning’ Category

Global machine learning as a service market is expected to grow with a CAGR of 38.5% over the forecast period from 2018-2024 – Yahoo Finance

The report on the global machine learning as a service market provides qualitative and quantitative analysis for the period from 2016 to 2024. The report predicts the global machine learning as a service market to grow with a CAGR of 38.5% over the forecast period from 2018-2024.

New York, Feb. 20, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Machine Learning as a Service Market: Global Industry Analysis, Trends, Market Size, and Forecasts up to 2024" - https://www.reportlinker.com/p05751673/?utm_source=GNW. The study on machine learning as a service market covers the analysis of the leading geographies such as North America, Europe, Asia-Pacific, and RoW for the period of 2016 to 2024.

The report on machine learning as a service market is a comprehensive study and presentation of drivers, restraints, opportunities, demand factors, market size, forecasts, and trends in the global machine learning as a service market over the period of 2016 to 2024. Moreover, the report is a collective presentation of primary and secondary research findings.

Porter's five forces model in the report provides insights into the competitive rivalry, supplier and buyer positions in the market, and opportunities for new entrants in the global machine learning as a service market over the period of 2016 to 2024. Further, the IGR Growth Matrix given in the report brings insight into the investment areas that existing or new market players can consider.

Report Findings
1) Drivers
- Increasing use of cloud technologies
- Provides statistical analysis along with reduced time and cost
- Growing adoption of cloud-based systems
2) Restraints
- Lack of skilled personnel
3) Opportunities
- Technological advancement

Research Methodology

A) Primary Research
Our primary research involves extensive interviews and analysis of the opinions provided by the primary respondents. The primary research starts with identifying and approaching the primary respondents; the primary respondents approached include:
1. Key opinion leaders associated with Infinium Global Research
2. Internal and external subject matter experts
3. Professionals and participants from the industry

Our primary research respondents typically include:
1. Executives working with leading companies in the market under review
2. Product/brand/marketing managers
3. CXO-level executives
4. Regional/zonal/country managers
5. Vice President-level executives

B) Secondary Research
Secondary research involves extensive exploration of the secondary sources of information available in both the public domain and paid sources. At Infinium Global Research, each research study is based on over 500 hours of secondary research accompanied by primary research. The information obtained through the secondary sources is validated through cross-checks on various data sources.

The secondary sources of data typically include:
1. Company reports and publications
2. Government/institutional publications
3. Trade and association journals
4. Databases such as WTO, OECD, and World Bank, among others
5. Websites and publications by research agencies

Segments Covered
The global machine learning as a service market is segmented on the basis of component, application, and end user.

The Global Machine Learning As a Service Market by Component
- Software
- Services

The Global Machine Learning As a Service Market by Application
- Marketing & Advertising
- Fraud Detection & Risk Management
- Predictive Analytics
- Augmented & Virtual Reality
- Security & Surveillance
- Others

The Global Machine Learning As a Service Market by End User
- Retail
- Manufacturing
- BFSI
- Healthcare & Life Sciences
- Telecom
- Others

Company Profiles
- IBM
- PREDICTRON LABS
- H2O.ai
- Google LLC
- Crunchbase Inc.
- Microsoft
- Yottamine Analytics, LLC
- Fair Isaac Corporation
- BigML, Inc.
- Amazon Web Services, Inc.

What does this report deliver?
1. Comprehensive analysis of the global as well as regional markets of the machine learning as a service market.
2. Complete coverage of all the segments in the machine learning as a service market to analyze the trends, developments in the global market, and forecast of market size up to 2024.
3. Comprehensive analysis of the companies operating in the global machine learning as a service market. The company profile includes analysis of product portfolio, revenue, SWOT analysis, and latest developments of the company.
4. The IGR Growth Matrix presents an analysis of the product segments and geographies that market players should focus on to invest, consolidate, expand, and/or diversify.

Read the full report: https://www.reportlinker.com/p05751673/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Clare: clare@reportlinker.com
US: (339)-368-6001
Intl: +1 339-368-6001

Read more here:
Global machine learning as a service market is expected to grow with a CAGR of 38.5% over the forecast period from 2018-2024 - Yahoo Finance

Machine Learning Patentability In 2019: 5 Cases Analyzed And Lessons Learned Part 2 – Mondaq News Alerts


This article is the second in a five-part series. Each of these articles relates to the state of machine-learning patentability in the United States during 2019. Each of these articles describes one case in which the PTAB reversed an Examiner's Section-101 rejection of a machine-learning-based patent application's claims. The first article of this series described the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), which was issued on January 7, 2019. The 2019 PEG changed the analysis provided by Examiners in rejecting patents under Section 101[1] of the patent laws, and by the PTAB in reviewing appeals from these Examiner rejections. The first article of this series also includes a case that illustrates the effect of reciting AI components in the claims of a patent application. The following section of this article describes another case where the PTAB applied the 2019 PEG to a machine-learning-based patent and concluded that the Examiner was wrong.

Case 2: Appeal 2018-004459[2] (Decided June 21, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/316,186 patent application. This application relates to "a probabilistic programming compiler that generates data-parallel inference code." The Examiner contended that "the claims are directed to the abstract idea of 'mathematical relationships,' which the Examiner appears to conclude are [also] mental processes i.e., identifying a particular inference algorithm and producing inference code."

The PTAB quickly dismissed the "mathematical concept" category of abstract ideas. The PTAB stated: "the specific mathematical algorithm or formula is not explicitly recited in the claims. As such, under the recent [2019 PEG], the claims do not recite a mathematical concept." This is the same reasoning that was provided for the PTAB decision in the previous article, once again requiring that a mathematical algorithm be "explicitly recited." As explained before, the 2019 PEG does not use the language "explicitly recited," so the PTAB's reasoning is not exactly lined up with the language of the 2019 PEG; however, the PTAB's ultimate conclusion is consistent with the 2019 PEG.

Next, the PTAB addressed and dismissed the "organizing human activity" category of abstract ideas just as quickly. Then, the PTAB moved on to the third category of abstract ideas: "mental processes." The PTAB noted the following relevant language from the specification of the patent application:

There are many different inference algorithms, most of which are conceptually complicated and difficult to implement at scale. . . . Probabilistic programming is a way to simplify the application of machine learning based on Bayesian inference. . . . Doing inference on probabilistic programs is computationally intensive and challenging. Most of the algorithms developed to perform inference are conceptually complicated.

The PTAB opined that the method is complicated, based at least partially on the specification explicitly stating that the method is complicated. Then, in determining whether the method of the claims is able to be performed in the human mind, the PTAB found that this language from the specification was sufficient evidence to prove the truth of the matter it asserted (i.e., that the method is complicated). The PTAB did not seem to find the self-serving nature of the statements in the specification to be an issue.

The PTAB then stated:

In other words, when read in light of the Specification, the claimed 'identifying a particular inference algorithm' is difficult and challenging for non-experts due to their computational complexity. . . . Additionally, Appellant's Specification explicitly states that 'the compiler then generates inference code' not an individual using his/her mind or pen and paper.

First, as explained above, it seems that the PTAB used the assertions of "complexity" made in the specification to conclude that the method is complex and cannot be a mental process. Second, the PTAB seems to have used the fact that the algorithm is not actually performed in the human mind as evidence that it cannot practically be performed in the human mind. Footnote 14 of the 2019 PEG states:

If a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it is still in the mental processes category unless the claim cannot practically be performed in the mind.

Accordingly, the fact that the patent application provides that the method is performed on a computer, and not performed in a human mind, should not be the sole reason for determining that it is not a mental process. However, as the PTAB demonstrated in this opinion, the fact that a method is performed on a computer may be used as corroborative evidence for the argument that the method is not a mental process.

This case illustrates:

(1) the probabilistic programming compiler that generates data-parallel inference code was held to not be an abstract idea, in this context;
(2) reciting in the specification that the method is "complicated" did not seem to hurt the argument that the method is in fact complicated, and is therefore not an abstract idea;
(3) reciting that a method is performed on a computer, though not alone sufficient to overcome the "mental processes" category of abstract ideas, may be useful for corroborating other evidence; and
(4) the PTAB might not always use the exact language of the 2019 PEG in its reasoning (e.g., the "explicitly recited" requirement), but seems to come to the same overall conclusion as the 2019 PEG.

The next three articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101-rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming 101 rejections where the PTAB has found that an abstract idea is "recited," and focuses on Step 2A Prong 2.

Footnotes

[1] 35 U.S.C. 101.

[2] https://e-foia.uspto.gov/Foia/RetrievePdf?system=BPAI&flNm=fd2018004459-06-21-2019-1.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Read the original post:
Machine Learning Patentability In 2019: 5 Cases Analyzed And Lessons Learned Part 2 - Mondaq News Alerts

Google Teaches AI To Play The Game Of Chip Design – The Next Platform

As if it weren't bad enough that Moore's Law improvements in the density and cost of transistors are slowing, the cost of designing chips and of the factories that are used to etch them is also on the rise. Any savings on any of these fronts will be most welcome to keep IT innovation leaping ahead.

One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler's key technologies, talked about in his keynote address at this week's 2020 International Solid State Circuits Conference in San Francisco.

Google, as it turns out, has more than a passing interest in compute engines, being one of the large consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender, particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips, or decides to do custom Arm chips for its phones and other consumer devices.

With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that. . . .)

The pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which had twice as many cores and about a 15 percent clock speed boost as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half-precision multiplication performance (with 32-bit accumulation) using Google's own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 32x32 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Dean's comments on the future of AI hardware and models in a separate story.)
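The bfloat16 format Dean refers to is simple enough to sketch: it keeps float32's sign bit and 8-bit exponent (so the dynamic range matches float32) and truncates the 23-bit mantissa down to 7 bits. The toy conversion below only illustrates that bit layout; it is not Google's hardware implementation, which also handles rounding modes and special values.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Keep the top 16 bits of a float32: sign + 8-bit exponent + 7 mantissa bits.

    Real hardware typically rounds to nearest; this sketch simply truncates.
    """
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Expand bfloat16 bits back to float32 (the dropped mantissa bits become zero)."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

x = 3.14159265
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)))  # ~3.140625: same range, less precision
```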

Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs and perhaps someday other accelerators for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in is also being brought to bear on ASIC design.

"One of the trends is that models are getting bigger," explains Dean. "So the entire model doesn't necessarily fit on a single chip. If you have essentially large models, then model parallelism, dividing the model up across multiple chips, is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively."

It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.

Device placement, meaning putting the right neural network (or portion of the code that embodies it) on the right device at the right time for maximum throughput in the overall application, is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:

The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.

In 2017, Google trained an RL model to do this work (you can see the paper here), and here is what the resulting placement looked like for the encoder and decoder. The RL model, which placed the work on the two CPUs and four GPUs in the system under test, ended up with 19.3 percent lower runtime for the training runs compared to the manually placed neural networks done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does kind of non-intuitive things to achieve that result, which is what seems to be the case with a lot of machine learning applications that, nonetheless, work as well as or better than humans doing the same tasks. The issue is that it can't take a lot of RL compute oomph to place the work on the devices to run the neural networks that are being trained themselves. In 2018, Google did research to show how to scale computational graphs to over 80,000 operations (nodes), and last year, Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).
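Google's actual placement models are not spelled out in the article, but the feedback loop it describes (propose a placement with a policy, measure the resulting runtime, nudge the policy toward faster placements) can be sketched in a few dozen lines. Everything below is a toy assumption: the per-op costs, the runtime model that ignores communication, and the tabular softmax policy standing in for a real neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_OPS, NUM_DEVICES = 12, 4                      # hypothetical graph: 12 ops, 4 devices
compute_cost = rng.uniform(1.0, 5.0, NUM_OPS)     # stand-in for per-op profiling data

# Policy: one softmax over devices per op (the real system uses a learned network).
logits = np.zeros((NUM_OPS, NUM_DEVICES))

def sample_placement(logits):
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    placement = np.array([rng.choice(NUM_DEVICES, p=p) for p in probs])
    return placement, probs

def runtime(placement):
    # Toy runtime model: the most loaded device sets the step time
    # (a real objective would also account for cross-device communication).
    per_device = np.zeros(NUM_DEVICES)
    for op, dev in enumerate(placement):
        per_device[dev] += compute_cost[op]
    return per_device.max()

baseline = None
for step in range(500):
    placement, probs = sample_placement(logits)
    r = runtime(placement)
    baseline = r if baseline is None else 0.9 * baseline + 0.1 * r
    advantage = baseline - r                      # faster than baseline => positive advantage
    for op, dev in enumerate(placement):          # REINFORCE-style policy update
        grad = -probs[op]
        grad[dev] += 1.0
        logits[op] += 0.1 * advantage * grad

print("runtime after training:", runtime(sample_placement(logits)[0]))
```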

"Then we started to think about using this: instead of using it to place software computation on different computational devices, could we use this to do placement and routing in ASIC chip design, because the problems, if you squint at them, sort of look similar," says Dean. "Reinforcement learning works really well for hard problems with clear rules like Chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?"

There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as this chart below shows:

Finally, the true reward function that drives the placement of IP blocks, which runs in EDA tools, takes many hours to run.

"And so we have an architecture (I'm not going to get into a lot of detail) but essentially it tries to take a bunch of things that make up a chip design and then try to place them on the wafer," explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. "We have had a team of human experts place this IP block, and they had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."
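Dean's mention of proxy reward functions that evaluate in seconds rather than hours is the key trick that makes RL iteration feasible here. The exact proxies are not public; the sketch below only shows the general shape of such a function, combining a standard half-perimeter wirelength estimate with a crude congestion estimate on a coarse grid. The weights and grid size are illustrative assumptions, not Google's values.

```python
import numpy as np

def proxy_reward(block_xy, nets, grid=(64, 64), w_wire=1.0, w_congest=0.5):
    """Cheap stand-in for the true (hours-long EDA) reward: negative weighted cost.

    block_xy : (num_blocks, 2) array of block positions, normalized to [0, 1)
    nets     : list of tuples of block indices that must be wired together
    """
    # Wirelength proxy: half-perimeter of each net's bounding box (HPWL).
    wirelength = 0.0
    for net in nets:
        pts = block_xy[list(net)]
        wirelength += np.ptp(pts[:, 0]) + np.ptp(pts[:, 1])

    # Congestion proxy: how heavily blocks crowd the busiest cell of a coarse grid.
    gx = np.clip((block_xy[:, 0] * grid[0]).astype(int), 0, grid[0] - 1)
    gy = np.clip((block_xy[:, 1] * grid[1]).astype(int), 0, grid[1] - 1)
    density = np.zeros(grid)
    np.add.at(density, (gx, gy), 1)
    congestion = density.max()

    return -(w_wire * wirelength + w_congest * congestion)

# Usage: higher (less negative) reward means shorter wires and less crowding.
rng = np.random.default_rng(1)
blocks = rng.random((10, 2))
print(proxy_reward(blocks, nets=[(0, 1, 2), (3, 4), (5, 6, 7, 8, 9)]))
```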

Note: I am not sure we want to call AI algorithms superhuman. At least, not if you don't want to have it banned.

Anyway, here is how that low-powered machine learning accelerator turned out with the RL network versus people doing the IP block placement:

And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:

And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:

Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.

Now having done this, Google then asked this question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? That is precisely the point when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools against the model at various levels of tuning:

And here is some more data on the placement and routing done on the Ariane RISC-V chips:

"You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar," Dean says, referring to the first chart above, and then continues with the second chart above: "And this graph shows the wireline costs, where we see that if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and is able to significantly drop the wiring cost, whereas the pretrained policy has some general intuitions about chip design from seeing other designs and gets to that level very quickly."

Just like we do ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP blocks in chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could re-do a layout quickly, not taking months to do it.

And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially, and according to data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million, shifting to 10 nanometers pushed that up to $174.4 million, and the move to 7 nanometers costs $297.8 million, with projections for 5 nanometer chips to be on the order of $542.2 million. Nearly half of that cost has been and continues to be for software. So we know where to target some of those costs, and machine learning can help.

The question is: will the chip design software makers embed AI and foster an explosion in chip designs that can be truly called Cambrian, and then make it up in volume like the rest of us have to do in our work? It will be interesting to see what happens here, and how research like that being done by Google will help.

See the rest here:
Google Teaches AI To Play The Game Of Chip Design - The Next Platform

How to Train Your AI Soldier Robots (and the Humans Who Command Them) – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part a.), which asks how institutions, organizational structures, and infrastructure will affect AI development, and whether artificial intelligence will require the development of new institutions or changes to existing institutions.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (2001: A Space Odyssey), reason with it (Wargames), blow it up (Star Wars: The Phantom Menace), or be defeated by it (Dr. Strangelove). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in Star Wars).

These science fiction tropes are legitimate models for military discussion and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really artificial if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.

As the capabilities of AI-enabled robots increase, and in particular as behaviors emerge that are both complex and outside past human experience, how will we organize, train, and command them and the humans who will supervise and maintain them? Existing methods and structures, such as military ranks and doctrine, that have evolved over millennia to manage the complexity of human behavior will likely be necessary. But because robots will evolve new behaviors we cannot yet imagine, they are unlikely to be sufficient. Instead, the military and its partners will need to learn new types of organization and new approaches to training. It is impossible to predict what these will be but very possible they will differ greatly from approaches that have worked in the past. Ongoing experimentation will be essential.

How to Respond to AI Advances

The development of AI, especially machine learning, will lead to unpredictable new types of robots. Advances in AI suggest that humans will have the ability to create many types of robots, of different shapes, sizes, or degrees of independence or autonomy. It is conceivable that humans may one day be able to design tiny AI bullets to pierce only designated targets, automated aircraft to fly as loyal wingmen alongside human pilots, or thousands of AI fish to swim up an enemy's river. Or we could design AI not as a device but as a global grid that analyzes vast amounts of diverse data. Multiple programs funded by the Department of Defense are on their way to developing robots with varying degrees of autonomy.

In science fiction, robots are often depicted as behaving in groups (like the robot dogs in Metalhead). Researchers inspired by animal behaviors have developed AI concepts such as swarms, in which relatively simple rules for each robot can result in complex emergent phenomena on a larger scale. This is a legitimate and important area of investigation. Nevertheless, simply imitating the known behaviors of animals has its limits. After observing the genocidal nature of military operations among ants, biologists Bert Holldobler and E. O. Wilson wrote, "If ants had nuclear weapons, they would probably end the world in a week." Nor would we want to limit AI to imitating human behavior. In any case, a major point of machine learning is the possibility of uncovering new behaviors or strategies. Some of these will be very different from all past experience: human, animal, and automated. We will likely encounter behaviors that, although not human, are so complex that some human language, such as "personality," may seem appropriately descriptive. Robots with new, sophisticated patterns of behavior may require new forms of organization.
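To make the swarm idea concrete, here is a minimal sketch of the kind of simple local rules the paragraph above alludes to, loosely following the classic boids model (cohesion, separation, alignment). It is a generic illustration of emergent group behavior, not a model of any actual military system; the weights and radii are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
pos = rng.random((N, 2)) * 100           # positions of 50 simulated robots
vel = rng.standard_normal((N, 2))        # their velocities

def step(pos, vel, neighbor_radius=15.0, dt=0.1):
    """One update of three simple local rules; flocking emerges at the group level."""
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist < neighbor_radius) & (dist > 0)
        if not nbrs.any():
            continue
        cohesion   = pos[nbrs].mean(axis=0) - pos[i]      # steer toward neighbors' center
        separation = (pos[i] - pos[nbrs]).sum(axis=0)     # steer away from crowding
        alignment  = vel[nbrs].mean(axis=0) - vel[i]      # match neighbors' heading
        new_vel[i] += 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
    return pos + dt * new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("group spread after 200 steps:", pos.std(axis=0))
```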

Military structure and scheme of maneuver are key to victory. Groups often fight best when they don't simply swarm but execute sophisticated maneuvers in hierarchical structures. Modern military tactics were honed over centuries of experimentation and testing. This was a lengthy, expensive, and bloody process.

The development of appropriate organizations and tactics for AI systems will also likely be expensive, although one can hope that through the use of simulation it will not be bloody. But it may happen quickly. The competitive international environment creates pressure to use machine learning to develop AI organizational structure and tactics, techniques, and procedures as fast as possible.

Despite our considerable experience organizing humans, when dealing with robots with new, unfamiliar, and likely rapidly-evolving personalities we confront something of a blank slate. But we must think beyond established paradigms, beyond the computer as all-powerful or the computer as loyal sidekick.

Humans fight in a hierarchy of groups, each soldier in a squad or each battalion in a brigade exercising a combination of obedience and autonomy. Decisions are constantly made at all levels of the organization. Deciding what decisions can be made at what levels is itself an important decision. In an effective organization, decision-makers at all levels have a good idea of how others will act, even when direct communication is not possible.

Imagine an operation in which several hundred underwater robots are swimming up a river to accomplish a mission. They are spotted and attacked. A decision must be made: Should they retreat? Who decides? Communications will likely be imperfect. Some mid-level commander, likely one of the robot swimmers, will decide based on limited information. The decision will likely be difficult and depend on the intelligence, experience, and judgment of the robot commander. It is essential that the swimmers know who or what is issuing legitimate orders. That is, there will have to be some structure, some hierarchy.

The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.

Training Robot Warriors

Robots with AI-enabled technologies will have to be exercised regularly, partly to test them and understand their capabilities and partly to provide them with the opportunity to learn from recreating combat. This doesn't mean that each individual hardware item has to be trained, but that the software has to develop by learning from its mistakes in virtual testbeds and, to the extent that they are feasible, realistic field tests. People learn best from the most realistic training possible. There is no reason to expect machines to be any different in that regard. Furthermore, as capabilities, threats, and missions evolve, robots will need to be continuously trained and tested to maintain effectiveness.

Training may seem a strange word for machine learning in a simulated operational environment. But then, conventional training is human learning in a controlled environment. Robots, like humans, will need to learn what to expect from their comrades. And as they train and learn highly complex patterns, it may make sense to think of such patterns as personalities and memories. At least, the patterns may appear that way to the humans interacting with them. The point of such anthropomorphic language is not that the machines have become human, but that their complexity is such that it is helpful to think in these terms.

One big difference between people and machines is that, in theory at least, the products of machine learning, the code for these memories or personalities, can be uploaded directly from one very experienced robot to any number of others. If all robots are given identical training and the same coded memories, we might end up with a uniformity among a units members that, in the aggregate, is less than optimal for the unit as a whole.

Diversity of perspective is accepted as a valuable aid to human teamwork. Groupthink is widely understood to be a threat. It's reasonable to assume that diversity will also be beneficial to teams of robots. It may be desirable to create a library of many different personalities or memories that could be assigned to different robots for particular missions. Different personalities could be deliberately created by using somewhat different sets of training testbeds to develop software for the same mission.

If AI can create autonomous robots with human-like characteristics, what is the ideal personality mix for each mission? Again, we are using the anthropomorphic term personality for the details of the robot's behavior patterns. One could call it a robot's programming if that did not suggest the existence of an intentional programmer. The robots' personalities have evolved from their participation in a very large number of simulations. It is unlikely that any human will fully understand a given personality or be able to fully predict all aspects of a robot's behavior.

In a simple case, there may be one optimum personality for all the robots of one type. In more complicated situations, where robots will interact with each other, having robots that respond differently to the same stimuli could make a unit more robust. These are things that military planners can hope to learn through testing and training. Of course, attributes of personality that may have evolved for one set of situations may be less than optimal, or positively dangerous, in another. We talk a lot about artificial intelligence. We don't discuss artificial mental illness. But there is no reason to rule it out.

Of course, humans will need to be trained to interact with the machines. Machine learning systems already often exhibit sophisticated behaviors that are difficult to describe. It's unclear how future AI-enabled robots will behave in combat. Humans, and other robots, will need experience to know what to expect and to deal with any unexpected behaviors that may emerge. Planners need experience to know which plans might work.

But the human-robot relationship might turn out to be something completely different. For all of human history, generals have had to learn their soldiers' capabilities. They knew best exactly what their troops could do. They could judge the psychological state of their subordinates. They might even know when they were being lied to. But today's commanders do not know, yet, what their AI might prove capable of. In a sense, it is the AI troops that will have to train their commanders.

In traditional military services, the primary peacetime occupation of the combat unit is training. Every single servicemember has to be trained up to the standard necessary for wartime proficiency. This is a huge task. In a robot unit, planners, maintainers, and logisticians will have to be trained to train and maintain the machines but may spend little time working on their hardware except during deployment.

What would the units look like? What is the optimal unit rank structure? How does the human rank structure relate to the robot rank structure? There are a million questions as we enter uncharted territory. The way to find out is to put robot units out onto test ranges where they can operate continuously, test software, and improve machine learning. AI units working together can learn and teach each other and humans.

Conclusion

AI-enabled robots will need to be organized, trained, and maintained. While these systems will have human-like characteristics, they will likely develop distinct personalities. The military will need an extensive training program to inform new doctrines and concepts to manage this powerful, but unprecedented, capability.

It's unclear what structures will prove effective to manage AI robots. Only by continuous experimentation can people, including computer scientists and military operators, understand the developing world of multi-unit human and robot forces. We must hope that experiments lead to correct solutions. There is no guarantee that we will get it right. But there is every reason to believe that as technology enables the development of new and more complex patterns of robot behavior, new types of military organizations will emerge.

Thomas Hamilton is a Senior Physical Scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

Image: Wikicommons (U.S. Air Force photo by Kevin L. Moses Sr.)

Here is the original post:
How to Train Your AI Soldier Robots (and the Humans Who Command Them) - War on the Rocks

Google's Machine Learning Is Making You More Effective In 2020 – Forbes

The collection of web-based software that Google offers to businesses and consumers is officially known as G Suite. Most people are familiar with Gmail and Google Docs, but quite a few do not realize that they offer a whole range of productivity and collaboration tools via your computer or mobile device.

Photo caption: A woman uses Google G Suite on a MacBook Pro, November 27, 2017, in Hong Kong. (Photo by studioEAST/Getty Images)

I have been working on another post about consumer-level uses of artificial intelligence (AI), not the media-hyped creepiness, but the practical, useful ways that AI is helping us do more and be more. Google started me thinking about this as I have watched it add various smart functions (think AI) to email as well as increasing ways to help me complete or enhance a document, spreadsheet, or presentation with the Explore function. It keeps learning from you and adjusting to you with these features.

Draft and send email responses quicker: Two relatively new, intelligent features include Smart Compose and Smart Reply. Gmail will suggest ways to complete your sentences while drafting an email and suggest responses to incoming messages as one-click buttons (at the bottom of the newly received message). This works in relatively simple messages that call for short, straightforward answers.

Enable Smart Compose and Smart Reply by going to Settings (that little gear icon in the upper right of your email inbox). Smart Reply is automatically enabled when users switch to the new Gmail.

On mobile and desktop or web, Smart Reply utilizes machine learning to give you better responses the more you use it. So if you're more of a "thanks!" than a "thanks." person, it'll suggest the response that is more authentic to you. Subtle difference, for sure, but I have noticed with certain people I interact with, the punctuation does change to show more emotion. I have not seen any emojis popping up, however. That may be a good thing.

For some of the newest features, you must go to Settings, then click Experimental Access. Features that are under test have a special little chemistry bottle icon or emoji. Most of the features in this post have already been fully tested and released to the general public.

Auto-reminders to respond: Gmail's new Nudging function reportedly will now automatically bump both incoming and outgoing messages to the top of your inbox after a few days if neither party has responded. You can turn this feature on/off in Settings. However, I have not had this work properly, but maybe I am simply too efficient. Not. Either way, I have not noticed these reminders yet.

Machine Learning in Google Docs, Google Sheets, and Google Slides

The Explore button in the lower right corner of Docs, Sheets, or Slides is machine learning (ML) in action. You can visualize data in Sheets without using a formula. The Explore button is a compass-looking star, and as you hover over it, it expands. Once clicked, it serves as a search tool within these products.

Explore in Sheets helps you decipher data easily by letting you ask questions in words, not formulas, to get answers about your data. You can ask a question like "how many units were sold on Black Friday?", "what is my best-selling product?", or "how much was spent on payroll last month?" directly instead of creating formulas to get an answer. Explore in Sheets is available on the web, Android, and iOS. On Android, you click the three vertical dots to get to the menu, and Explore is listed there. When you first click it, it offers a "try an example" option and creates a new spreadsheet showcasing various examples.

Explore in Docs gives you a way to stay focused in the same tab. Using Explore, you get a little sidebar with Web, Images, and Drive results. It provides instant suggestions based on the content in your document including related topics to learn about, images to insert, or more content to check out in Docs. You can also find a related document from Drive or search Google right within Explore. Explore in Docs is available in a web browser, but I did not find it on my mobile apps for Android or iOS.

Explore in Slides makes designing a presentation simple. I think there's some AI/ML going on here, as Explore dynamically generates design suggestions based on the content of your slides. Then, pick a recommendation and apply it with a single click, without having to crop, resize, or reformat.

Are all of these features going to single-handedly make you the most productive person on the planet? No, but they are definitely small and constant improvements that point the way to a more customized and helpful use of artificial intelligence and machine learning.

If you are looking for other creative ways that people and organizations are using G Suite, there are tons of great customer stories that Google shares about how big and small organizations and companies use its free and enterprise-level products that may give you ideas for how you can leverage their cloud software. I find many of these case studies inspiring, but that is based on how organizations are responding to community needs.

Check out this one from Eagle County, Colorado during a wildfire there and this one from the City of Los Angeles with a real-time sheet to show police officers which homeless shelters have available beds.

More:
Google's Machine Learning Is Making You More Effective In 2020 - Forbes