Archive for the ‘Machine Learning’ Category

ManpowerGroup Returns to Viva Technology as HR Partner, Showcasing New AI, Machine-Learning and Data-Driven Predictive Performance Tools – PRNewswire

PARIS, June 15, 2021 /PRNewswire/ -- ManpowerGroup (NYSE: MAN) joins the biggest names in tech as HR partner of the world-famous Viva Technology (VivaTech) conference, held in Paris and online this week. ManpowerGroup will share innovation that improves people's lives and addresses one of the world's most pressing social issues: how to provide meaningful, sustainable employment for all. The hybrid event will attract more than 8,000 attendees, and ManpowerGroup has partnered with it since its launch five years ago to support start-ups and accelerate tech for good.

"Our innovations are driven by impact - upskilling people at speed and scale and matching people to jobs with better accuracy than either humans or machines could achieve on their own," said Jonas Prising, ManpowerGroup Chairman & CEO. "We're excited to return to VivaTech to showcase how we're using AI, people analytics and human expertise to create a more resilient, future-ready workforce. Building a better, brighter future of work requires bold, disruptive ideas and collaboration across business, government and education; this is how we will create sustainable skills, resilient communities, and greater prosperity for all."

ManpowerGroup will host 30 game-changing start-ups and showcase innovation and digital workforce transformation in its #FutureofWork lab, including:

ManpowerGroup will host its Talent Center for the fifth year, an online and in-person space where in-demand tech workers can experience coaching, assessment and skills development and match with open positions in the world's leading tech companies.

Human Expertise: On Wednesday, June 16, ManpowerGroup Chairman & CEO Jonas Prising will be joined by Tomas Chamorro-Premuzic, ManpowerGroup's Chief Talent Scientist, for Human Age Reconnected, a discussion moderated by CNN's Margot Haddad on CEO takeaways from the crisis and on AI, bias and ethics in recruitment.

Follow @ManpowerGroup at Viva Tech on Twitter and join the conversation using #sustainableskills #FutureofWork #VivaTech. https://vivatechnology.com/partners/manpower-group

To find out more about ManpowerGroup's Future for Workers insight series, read The Skills Revolution Reboot, on the impact of COVID-19 on digitization and skills, and The Future for Workers, By Workers.

ABOUT MANPOWERGROUP

ManpowerGroup (NYSE: MAN), the leading global workforce solutions company, helps organizations transform in a fast-changing world of work by sourcing, assessing, developing and managing the talent that enables them to win. We develop innovative solutions for hundreds of thousands of organizations every year, providing them with skilled talent while finding meaningful, sustainable employment for millions of people across a wide range of industries and skills. Our expert family of brands - Manpower, Experis and Talent Solutions - creates substantially more value for candidates and clients across more than 75 countries and territories and has done so for over 70 years. We are recognized consistently for our diversity - as a best place to work for Women, Inclusion, Equality and Disability - and in 2021 ManpowerGroup was named one of the World's Most Ethical Companies for the 12th year, all confirming our position as the brand of choice for in-demand talent.

SOURCE ManpowerGroup

http://www.manpowergroup.com


Discover the theory of human decision-making using extensive experimentation and machine learning – Illinoisnewstoday.com

Discover a better theory

In recent years, theories of human decision-making have proliferated. However, these theories are often difficult to distinguish from one another, and they offer only marginal improvements over earlier theories in explaining patterns of decision-making. Peterson et al. leverage machine learning to evaluate classical decision theories, improve their predictive power, and generate new theories of decision-making (see the Perspective by Bhatia and He). This method has implications for theory generation in other fields as well.

Science, abe2629, this issue p. 1209; see also the Perspective, abi7668, p. 1150.

Predicting and understanding how people make decisions is a long-standing goal in many fields, and a quantitative model of human decision-making would inform research in both the social sciences and engineering. The authors show how large datasets can be used to accelerate progress towards this goal by training machine learning algorithms that are constrained to generate interpretable psychological theories. They conducted the largest experiment on risky choice to date and analyzed the results using gradient-based optimization of differentiable decision theories implemented via artificial neural networks. The outcome is a new, more accurate model of human decision-making that summarizes historical discoveries, confirms that existing theories leave room for improvement, and preserves insights from centuries of research.
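A toy illustration of the approach described above, fitting a differentiable decision theory to risky-choice data by gradient descent, might look like the sketch below. The power-utility model, the logistic choice rule, the synthetic data, and all parameter values are illustrative assumptions, not details from the paper:

```python
import math
import random

random.seed(0)

def utility(x, alpha):
    # Power utility u(x) = x^alpha: a classic parametric (and here hypothetical) choice
    return x ** alpha

def p_choose_gamble(gamble, sure, alpha, beta=3.0):
    """Probability of choosing the risky gamble over the sure amount (logistic choice rule)."""
    p, payoff = gamble
    diff = p * utility(payoff, alpha) - utility(sure, alpha)
    return 1.0 / (1.0 + math.exp(-beta * diff))

# Synthetic "experiment": choices generated by a decision-maker with alpha = 0.7
TRUE_ALPHA = 0.7
trials = [((0.5, random.uniform(1.0, 10.0)), random.uniform(0.5, 5.0)) for _ in range(500)]
choices = [random.random() < p_choose_gamble(g, s, TRUE_ALPHA) for g, s in trials]

def mean_neg_log_lik(alpha):
    total = 0.0
    for (g, s), chose in zip(trials, choices):
        p = min(max(p_choose_gamble(g, s, alpha), 1e-9), 1 - 1e-9)
        total -= math.log(p if chose else 1 - p)
    return total / len(trials)

# Gradient-based optimization of the model parameter (finite-difference gradient)
alpha, lr, eps = 1.0, 0.05, 1e-4
for _ in range(1000):
    grad = (mean_neg_log_lik(alpha + eps) - mean_neg_log_lik(alpha - eps)) / (2 * eps)
    alpha -= lr * grad

print(round(alpha, 2))  # the estimate should land near the generating value of 0.7
```

The paper's actual models are neural networks trained on far larger datasets; the point of the sketch is only that a decision theory written as a differentiable function of its parameters can be fit to choice data with gradient descent.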


How to avoid the ethical pitfalls of artificial intelligence and machine learning – UNSW Newsroom

The modern business world is littered with examples where organisations hastily rolled out artificial intelligence (AI) and machine learning (ML) solutions without due consideration of ethical issues, leading to very costly and painful lessons. Internationally, for example, IBM is being sued after allegedly misappropriating data from an app, while Goldman Sachs is under investigation for using an allegedly discriminatory AI algorithm. A closer homegrown example was the Robodebt debacle, in which the federal government deployed ill-thought-through algorithmic automation to send out letters to recipients demanding repayment of social security payments dating back to 2010. The government settled a class action against it late last year at an eye-watering cost of $1.2 billion after the automated mailout system targeted many legitimate social security recipients.

That targeting of legitimate recipients was clearly illegal, says UNSW Business School's Peter Leonard, a Professor of Practice for the School of Information Systems & Technology Management and the School of Management and Governance at UNSW Business School. Government decision-makers are required by law to take into account all relevant considerations and only relevant considerations, and authorising automated demands to be made of legitimate recipients was not a proper application of discretion by an administrative decision-maker.

Prof. Leonard says Robodebt is an important example of what can go wrong with algorithms when due care and consideration is not factored in. When automation goes wrong, it usually does so quickly and at scale. And when things go wrong at scale, you don't need each payout to be much for it to be a very large amount when added together across a cohort.

Robodebt is an important example of what can go wrong with systems that have both humans and machines in a decision-making chain. Photo: Shutterstock

Technological developments are very often ahead of both government laws and regulations and organisational policies around ethics and governance. AI and ML are classic examples of this, and Prof. Leonard explains there is major translational work to be done in order to bolster companies' ethical frameworks.

There's still a very large gap between government policymakers, regulators, business, and academia. I don't think there are many people today bridging that gap, he observes. It requires translational work, with translation between those different spheres of activity and ways of thinking. Academics, for example, need to think outside their particular discipline, department or school. And they have to think about how businesses and other organisations actually make decisions, in order to adapt their view of what needs to be done to suit the dynamic and unpredictable nature of business activity nowadays. So it isn't easy, but it never was.

Prof. Leonard says organisations are feeling their way to better behaviour in this space. He thinks that many organisations now care about the adverse societal impacts of their business practices, but don't yet know how to build the governance and assurance needed to mitigate the risks associated with data and technology-driven innovation. They don't know how to translate what are often pretty high-level statements about corporate social responsibility, good behaviour or ethics (call it what you will) into consistently reliable action, to give practical effect to those principles in how they make their business decisions every day. That gap creates real vulnerabilities for many corporations, he says.

Data privacy serves as an example of what should be done in this space. Organisations have become quite good at working out how to evaluate whether a particular form of corporate behaviour is appropriately protective of the data privacy rights of individuals. This is achieved through privacy impact assessments, which are overseen by privacy officers, lawyers and other professionals trained to understand whether or not a particular practice in the collection and handling of personal information about individuals may cause them harm.

There's an example of how what can be a pretty amorphous concept (a breach of privacy) is reduced to something concrete and given effect through a process that leads to an outcome with recommendations about what the business should do, Prof. Leonard says.

When things go wrong with data, algorithms and inferences, they usually go wrong at scale. Photo: Shutterstock

Disconnects also exist between the key functional stakeholders required to make sound, holistic judgements around ethics in AI and ML. There is a gap between the bit that is the data analytics and AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result this leads to really poor outcomes, says Prof. Leonard. So you have to look not only at what the technology and the AI is doing, but at how that is integrated into the making of the decision by an organisation.

This problem exists in many fields. One field in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that depend on advertising technology, which is in turn managed by a technology team. Separate to this is data privacy, which is managed by a different team, and Prof. Leonard says these teams don't speak the same language as each other, making it hard to arrive at a strategically cohesive decision.

Some organisations are addressing this issue by creating new roles, such as a chief data officer or customer experience officer, who is responsible for bridging functional disconnects in applied ethics. Such individuals will often have a background in or experience with technology, data science and marketing, in addition to a broader understanding of the business than is often the case with the CIO.

We're at a transitional point in time where the traditional view of IT and information systems management doesn't work anymore, because many of the issues arise out of analysis and uses of data, says Prof. Leonard. And those uses involve the making of decisions by people outside the technology team, many of whom don't understand the limitations of the technology and the data.

Why regulators need teeth

Prof. Leonard was recently appointed to the inaugural NSW AI Government Committee (the first of its kind for any federal, state or territory government in Australia) to advise the NSW Minister for Digital, Victor Dominello, on how to deliver on key commitments in the state's AI strategy. One focus for the committee is how to reliably embed ethics in how, when and why NSW government departments and agencies use AI and other automation in their decision-making.

Prof. Leonard said governments and other organisations that publish aspirational statements and guidance on the ethical principles of AI but fail to go further need to do better. For example, the Federal Government's ethics principles for uses of artificial intelligence by public and private sector entities were published over 18 months ago, but there is little evidence of adoption across the Australian economy, or that these principles are being embedded into consistently reliable and verifiable business practices, he said.

What good is this? It is like the 10 commandments. They are a great thing. But are people actually going to follow them? And what are we going to do if they don't? Prof. Leonard said it is not worth publishing statements of principles unless they are supplemented with processes and methodologies for assurance and governance of all automation-assisted decision-making. It is not enough to ensure that the AI component is fair, accountable and transparent: the end-to-end decision-making process must be reviewed.

Technological developments and analytics capabilities usually outpace laws, regulatory policy, audit processes and oversight frameworks. Photo: Shutterstock

While some regulation will also be needed to build the right incentives, Prof. Leonard said organisations first need to know how to assure good outcomes before they are legally sanctioned and penalised for bad ones. The problem for the public sector is more immediate than for the business and not-for-profit sectors, because poor algorithmic inferences leading to incorrect administrative decisions can directly contravene state and federal administrative law, he said.

In the business and not-for-profit sectors, the legal constraints are more limited in scope (principally anti-discrimination and consumer protection law). Because the legal constraints are limited, Prof. Leonard observed, reporting of the Robodebt debacle has not created the same urgency in the business sector as it has in the federal government sector.

Organisations need to be empowered to think methodically across and through possible harms, while there also needs to be adequate transparency in the system, and government policy and regulators should not lag too far behind. A combination of these elements will help reduce the reliance on internal ethics alone, as organisations are provided with a strong framework for sound decision-making. And then you come behind with a big stick if they're not using the tools or they're not using the tools properly. Carrots alone and sticks alone never work; you need the combination of the two, said Prof. Leonard.

The Australian Human Rights Commission's report on human rights and technology was recently tabled in Federal Parliament. Human Rights Commissioner Ed Santow stated that the combination of learnings from Robodebt and the report's findings provides a once-in-a-generation challenge and opportunity to develop proper regulations around emerging technologies, to mitigate the risks around them and ensure they benefit all members of the community. Prof. Leonard observed that the challenge is as much about how we govern automation-aided decision-making within organisations (the human element) as it is about how we assure that technology and data analytics are fair, accountable and transparent.

Many organisations don't have the capabilities to anticipate when the outcomes of automation-assisted decision-making will be unfair or inappropriate. Photo: Shutterstock

A good example of the need for this can be seen in the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry. It noted that key individuals who assess and make recommendations in relation to prudential risk within banks are relatively powerless compared to those who control profit centres. So, almost by definition, if you regard ethics and the policing of ethics as a cost within an organisation, and not as an integral part of the making of profits by an organisation, you will end up with bad results, because you don't value highly enough the management of prudential, ethical or corporate social responsibility risks, says Prof. Leonard. You name me a sector, and I'll give you an example of it.

While he notes that larger organisations will often fumble their way through to a reasonably good decision, another key risk exists among smaller organisations. They don't have processes for checks and balances and haven't thought about corporate social responsibility yet, because they're not required to, says Prof. Leonard. Small organisations often work on the mantra of moving fast and breaking things, and this approach can have a very big impact within a very short period of time, thanks to the potentially rapid growth rate of businesses in a digital economy.

They're the really dangerous ones, generally. This means the tools that you deliver have to be sufficiently simple and straightforward that they are readily applied, in such a way that an agile 'move fast and break things' type of business will actually apply them and give effect to them before it breaks things that really can cause harm, he says.


Can Humans Ever Understand What Sperm Whales say? This Research Has Roadmap Towards It – Gadgets 360

A new paper, titled 'Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales', explains how scientists are going to try to decode whale vocalisations. The researchers are using machine learning techniques to try to translate the clicking and other noises made by sperm whales, to see if we can understand what the giant creatures are saying.

Whether known non-human communication systems exhibit similarly rich structure (either of the same kind as human languages, or completely new) remains an open question, reads the concluding sentence of the introduction of the paper, posted to the preprint server arXiv.org. The paper has been authored by 16 scientific members of the Project CETI collaboration.

It was only in the 1950s that humans observed that sperm whales made sounds, and it took another two decades to understand that they were using those sounds to communicate, according to the new research posted by CETI.

Researchers say that the past decade witnessed a ground-breaking rise of machine learning for human language analysis, and recent research has shown the promise that such tools may also be used for analysing acoustic communication in nonhuman species.

"We posit that machine learning will be the cornerstone of the future collection, processing, and analysis of multimodal streams of data in animal communication studies," reads the abstract of the paper.

To further this understanding, scientists have picked sperm whales for their highly developed neuroanatomical features, cognitive abilities, social structures, and discrete click-based encoding, which make them an excellent starting point for advanced machine learning tools that can later be applied to other animals.

The paper is basically a roadmap towards this goal, they add. Scientists have outlined key elements needed for the collection and processing of massive bioacoustics data of sperm whales, detecting their basic communication units and language-like higher-level structures, and validating these models through interactive playback experiments.

They further say that the technological advancements achieved during this effort are expected to benefit broader communities investigating non-human communication and animal behavioural research.

Researchers explain that the clicking sounds sperm whales make appear to serve a dual purpose: echolocation at the depths to which the whales dive, and social vocalisation. The communication clicks are more tightly packed, according to the CETI paper.
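That difference in spacing suggests a simple first-pass heuristic for separating the two kinds of clicks: group clicks into bursts by their inter-click interval. The timestamps and the 0.2-second threshold below are illustrative assumptions for the sketch, not values from the CETI paper:

```python
# Hypothetical click timestamps in seconds, in ascending order
clicks = [0.0, 1.1, 2.3, 3.4,                   # widely spaced (echolocation-like)
          10.0, 10.08, 10.17, 10.25, 10.34]     # tightly packed (coda-like)

CODA_GAP = 0.2  # assumed max inter-click interval within one communicative burst

def group_bursts(times, max_gap):
    """Split a sorted list of click times into bursts of tightly packed clicks."""
    bursts = [[times[0]]]
    for prev, cur in zip(times, times[1:]):
        if cur - prev <= max_gap:
            bursts[-1].append(cur)   # continue the current burst
        else:
            bursts.append([cur])     # gap too large: start a new burst
    return bursts

bursts = group_bursts(clicks, CODA_GAP)
coda_candidates = [b for b in bursts if len(b) >= 3]  # bursts of 3+ clicks
print(len(bursts), len(coda_candidates))  # 5 bursts, 1 coda-like burst
```

Real pipelines would of course work from audio rather than clean timestamps, but the same segmentation idea underlies detecting the "basic communication units" the roadmap describes.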

It is not hard to see that a project as large as this one comes with complexities and challenges.

David Gruber, a marine biologist and CETI project leader, said that making sense of what has been discovered thus far has been challenging, adding that sperm whales have "been so hard for humans to study for so many years." But now, "we actually do have the tools to be able to look at this more in-depth in a way that we haven't been able to before," he said, adding that those tools include AI, robotics, and drones.

A report in Live Science said that the CETI project has a massive stash of recordings of about 1 lakh (100,000) sperm whale clicks, painstakingly gathered by marine biologists over many years. However, it said that the machine-learning algorithms might need somewhere close to 4 billion clicks before they can start drawing any conclusions.

To that end, CETI is setting up numerous automated channels to collect recordings from sperm whales. The tools CETI is using include underwater microphones placed in waters frequented by sperm whales, microphones that can be dropped by eagle-eyed airborne drones as soon as they spot a pod of sperm whales gathering at the surface, and even robotic fish that can follow and listen to whales from a distance, the report said.

If you think collecting these sounds is the only challenge, then wait. According to 2016 research in the journal Royal Society Open Science, sperm whales are known to have dialects as well. Finding answers to questions like these is what CETI is dedicated to.


Machine Learning And Intelligent Process Automation; Interview with Bikram Singh, Co-Founder and CEO of EZOPS – TechBullion


With Artificial Intelligence, EZOPS can maximize data confidence, integrity, and control. This machine learning and intelligent process automation platform is one innovation to look out for; CEO Bikram Singh shares more insights into the platform in this interview with TechBullion.

I am Bikram Singh and I am the CEO and Co-Founder of EZOPS.

I have built and managed operational services and technology solutions for banks, hedge funds, asset managers, fund administrators, and custodians.

From my experience in the financial industry, I know firsthand the pain points that plague data management teams. As a result, it has become my mission to develop an end-to-end platform that addresses the challenges teams face across the entire lifecycle of data. Through EZOPS, I am able to achieve my goal of providing financial institutions with a solution that drives operational efficiency and delivers quality data.

Prior to founding EZOPS, I had over 20 years of experience managing financial services operations and technology while working at McKinsey & Company, Lehman Brothers, Lava Trading, Goldman Sachs, and Citi.

EZOPS is AI-enabled software that harnesses the power of machine learning and intelligent process automation to revolutionize data control and drive transformative efficiency gains at some of the world's largest financial services institutions.

Through my years of experience in financial services, I, along with Co-Founders Sarva Srinivasan and Dutt Chintalapati, realized that we could develop and implement automated workflows to solve many of the challenges our clients faced every day. We combined our industry experience with our knowledge of machine learning and automation to develop EZOPS, in an effort to eliminate the longstanding redundancies and inefficiencies that have plagued the industry for decades and to help transform how data is controlled at large financial institutions today.

EZOPS is the leader in cutting-edge innovation for the financial services sector, including: Global Banks, Regional Banks, Custodians, Asset Service Providers, Asset Management, Operations Outsourcers, Fintech, Corporate Treasury.

Our solutions help our clients transform their business operations and cover crucial areas such as Operations, Finance, Governance, Regulations, Compliance, and Audit to enhance quality & control for post-trade operations.

EZOPS offers the comprehensive functionality that businesses of large scale and complexity need in order to manage the four pillars of operational data control (reconciliation, research, remediation, and reporting), all powered by Machine Learning and smart workflow management.
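In the abstract, the reconciliation pillar amounts to matching records between two systems and flagging breaks for research and remediation. A minimal sketch of that idea follows; the ledgers, field names, and tolerance are hypothetical and do not reflect EZOPS's actual data model:

```python
# Two hypothetical trade ledgers keyed by trade ID; values are settled amounts.
internal = {"T1": 100.00, "T2": 250.50, "T3": 75.00}
custodian = {"T1": 100.00, "T2": 250.00, "T4": 30.00}

def reconcile(a, b, tolerance=0.01):
    """Classify records as matched, amount breaks, or one-sided misses."""
    matched, breaks = [], []
    for key in a.keys() & b.keys():  # IDs present in both ledgers
        (matched if abs(a[key] - b[key]) <= tolerance else breaks).append(key)
    return {
        "matched": sorted(matched),
        "breaks": sorted(breaks),                   # in both, but amounts differ
        "only_internal": sorted(a.keys() - b.keys()),
        "only_custodian": sorted(b.keys() - a.keys()),
    }

result = reconcile(internal, custodian)
print(result)
# T1 matches; T2 is an amount break; T3 and T4 are one-sided misses
```

A production reconciliation engine adds fuzzy matching, many-to-one matching, and learned break classification on top of this basic set logic, but the break categories are the same.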

EZOPS intelligently automates repeatable actions, checks for errors, and offers insights that users might miss on their own. The goal is to streamline parts of the process that software can do better.

The EZOPS platform combines machine learning with smart workflow management functionality for comprehensive end-to-end automation.

It integrates siloed data and processes across the enterprise for cohesive exception-management processing, and EZOPS ARO improves transparency and communication via alerts, notifications, messages, and emails.

It facilitates source system remediation to OMS, PMS, accounting systems and sources for reference data, corporate actions and market data.

Since the financial crisis, the landscape across the institutional financial sector has changed. This has accelerated further with the global pandemic and the drive for digital transformation.

The business of financial intermediation is entering the post-internet era and the next decade will see business models on the institutional side being disrupted as large financial institutions start taking a hard look at the collection of businesses they have and the associated fit with their respective business model and strategy.

As digitalization, shedding, restructuring and realignment take place, they will present an opportunity for a variety of players, many of whom will likely be unregulated, technologically savvier, and much more nimble than the institutions of the past.

Transactional volumes have increased during the pandemic, in conjunction with an increased focus on regulatory reporting and compliance. At the same time, markets and companies have become more fragmented.

As a result, operational and technical infrastructures that were primarily built to support pre-crisis business complexity, volumes and regulatory reporting are proving costly to maintain and are yielding less business value than desired.

EZOPS can be easily integrated into a client's current operating systems via cloud or on-premise installations. Clients are up and running in a matter of days, depending on the complexity of their ecosystem and tech stack. Amazon Web Services (AWS) users can access EZOPS ARO capabilities via the Amazon Marketplace in a matter of hours. EZOPS' multiple partner and channel integrations allow clients to switch on new capabilities seamlessly and in a frictionless manner.

Yes, we have a strategic partner ecosystem consisting of technology providers, consulting organizations, and financial software firms. Our partners complement our software solution and support our clients globally. Solutions partners include BNY Mellon, Riskfocus, Orchestrade, and Access Fintech. Technology partners include Snowflake, Oracle, and AWS.

Website: https://www.ezops.com

LinkedIn: https://www.linkedin.com/company/ezopsinc/about/

Twitter: @ezopsinc

Facebook: @ezopsinc
