Archive for the ‘Machine Learning’ Category

How machine learning is changing the face of financial services – Techerati

As the financial services industry continues to leverage machine learning and predictive analytics, the volume of data being generated is ballooning. This has massive implications for data security.

Artificial intelligence (AI) has become integrated into our everyday lives. It powers what we see in our social media newsfeeds, activates facial recognition (to unlock our smartphones), and even suggests music for us to listen to. Machine learning, a subset of AI, is progressively integrating into our everyday lives and changing how we live and make decisions.

Business changes all the time, but advances in today's technologies have accelerated the pace of change. Machine learning analyses historical data and behaviours to predict patterns and make decisions. It has proved hugely successful in retail for its ability to tailor products and services to customers.

Unsurprisingly, retail banking and machine learning are a perfect combination. Thanks to machine learning, functions such as fraud detection and credit scoring are now automated. Banks also leverage machine learning and predictive analytics to offer their customers a far more personalised user experience, recommend new products, and animate chatbots that help with routine transactions such as account checking and paying bills.
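To make the fraud-detection and credit-scoring point concrete, here is a minimal sketch of the kind of classifier such systems are built on, trained on historical labelled examples. The features, data and threshold logic below are entirely synthetic placeholders, not any bank's actual model.

```python
# A toy fraud/credit-risk classifier: learn from labelled historical
# transactions, then score new ones. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: normalised amount, hour-of-day signal, distance from home
X = rng.normal(size=(5000, 3))
# Synthetic labels: larger, late-night, far-from-home transactions are more often fraud
logits = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2] - 2.0
y = (rng.random(5000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))

# Score a new transaction; a bank would review anything above its risk threshold
new_txn = np.array([[2.1, 1.5, 3.0]])
print("estimated fraud probability:", round(model.predict_proba(new_txn)[0, 1], 3))
```

In practice the same pattern, a model fitted to labelled history and applied to new cases, underlies both automated credit scoring and fraud alerts; only the features and thresholds differ.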

Machine learning is also disrupting the insurance sector. As more connected devices provide deeper insights into customer behaviours, insurers can set premiums and make payout decisions based on data. Insurtech firms are shaking things up by harnessing new technologies to develop enhanced solutions for customers. The potential for change is huge and, according to McKinsey, "the [insurance] industry is on the verge of a seismic, tech-driven shift".

Few industries have as much historical and structured data as the financial services industry, making it the perfect playing field for machine learning technologies.

Read this article:
How machine learning is changing the face of financial services - Techerati

‘Technology is never neutral’: why we should remain wary of machine learning in children’s social care – Communitycare.co.uk


On 1 February 2020, YouTuber Simon Weekert posted a video claiming to have redirected traffic by faking traffic jams on Google Maps. The video shows Weekert walking slowly along traffic-free streets in Berlin, pulling a cart of second-hand mobile phones behind him, while Google Maps generates traffic jam alerts because the phones had their location services turned on.

Weekert's performance piece demonstrates the fragility and vulnerability of our systems and their difficulty in interpreting outliers, and highlights "a kind of decisional blindness when we think of data as objective, unambiguous and interpretation free", as he put it. There are many other examples of decisional blindness relating to drivers following Google Maps and falling off cliffs or driving into rivers.

Google has the resources, expertise and technology to rapidly learn from this experience and make changes to avoid similar situations. But the same vulnerability to hacking or outliers applies to the use of machine learning in children's social care (CSC), and this raises the question of whether the sector has the means to identify and rectify issues in a timely manner and without adverse effects for service users.

Have you ever had the experience of asking the wrong question in Google search and getting the right answer? That's because of contextual computing, which makes use of AI and machine learning.

At its heart, machine learning is the application of statistical techniques to identify patterns and enable computers to use data to progressively learn and improve their performance.
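As a toy illustration of that definition (not drawn from the article itself), the snippet below fits the same model to progressively larger slices of historical data and shows its performance on held-out examples improving as it sees more:

```python
# "Learning from data to improve performance": the same classifier,
# given more historical examples, generally generalises better.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 3000):
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    clf.fit(X_train[:n], y_train[:n])          # train on the first n examples only
    print(f"trained on {n:>4} examples -> test accuracy {clf.score(X_test, y_test):.2f}")
```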

From Google search and Alexa to online shopping, and from games and health apps to WhatsApp and online dating, most online interactions are mediated by AI and machine learning. Like electricity, AI and machine learning will power every piece of software and every digital device, and will transform and mediate every aspect of human experience, mostly without end users giving them a thought.

But there are particular concerns about their applications in CSC and, therefore, a corresponding need for national standards for machine learning in social care and for greater transparency and scrutiny around the purpose, design, development, use, operation and ethics of machine learning in CSC. This was set out in What Works for Children's Social Care's ethics review into machine learning, published at the end of January.

The quality of machine learning systems' predictive analysis depends on the quality, completeness and representativeness of the dataset they draw on. But people's lives are complex, and case notes often do not capture this complexity, being complemented instead by practitioners' intuition and practice wisdom. Such data lacks the quality and structure needed for machine learning applications, making high levels of accuracy harder to achieve.

Inaccuracy in identifying children and families can result in either false positives, which infringe on people's rights and privacy, cause stress and waste time and resources, or false negatives, which miss children and families in need of support and protection.
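That trade-off can be made concrete with a small, purely synthetic example: the same imperfect risk scores, cut at different thresholds, exchange wrongly flagged families (false positives) for missed ones (false negatives). Nothing below reflects a real CSC system.

```python
# Thresholding imperfect risk scores: lower thresholds flag more families
# wrongly, higher thresholds miss more families in need. Synthetic data only.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)                         # 1 = genuinely in need of support
scores = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)  # imperfect predicted risk

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")
```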

Advocates of machine learning often point out that systems only provide assistance and recommendations, and that it remains the professionals who make actual decisions. Yet decisional blindness can undermine critical thinking, and false positives and negatives can result in poor practice and stigmatisation, and can further exclusion, harm and inequality.

It's true that AI and machine learning can be used in empowering ways to support services or to challenge discrimination and bias. The use of Amazon's Alexa to support service users in adult social care is, while not completely free of concerns, one example of a positive application of AI in practice.

Another is Essex council's use of machine learning to produce anonymised aggregate data at community level on children who may not be ready for school by their fifth birthday. This data is then shared with parents and services who are part of the project to inform their funding allocation or changes to practice as need be. This is a case of predictive analytics being used in a way that is supportive of children and empowering for parents and professionals.

The Principal Children and Families Social Worker (PCFSW) Network is conducting a survey of practitioners to understand their current use of technology and challenges and the skills, capabilities and support that they need.

It only takes 10 minutes to complete the survey on digital professionalism and online safeguarding. Your responses will inform best practice and better support for social workers and social care practitioners, to help ensure practitioners lead the changes in technology rather than technology driving practice and shaping practitioners' professional identity.

But it's more difficult to make such an assessment in relation to applications that use hundreds of thousands of people's data, without their consent, to predict child abuse. While there are obvious practical challenges around seeking the permission of huge numbers of people, failing to do so shifts the boundaries of individual rights and privacy vis-à-vis surveillance and the power of public authorities. Unfortunately, though, ethical concerns do not always influence the direction or speed of change.

Another controversial recent application of technology is the use of live facial recognition cameras in London. An independent report by Essex University last year raised concerns about inaccuracies in the use of live facial recognition, while the Met Police's senior technologist, Johanna Morley, said millions of pounds would need to be invested in purging police suspect lists and aligning front- and back-office systems to ensure the legality of facial recognition cameras. Despite these concerns, the Met will begin using facial recognition cameras on London streets, with the aim of tackling serious crime, including child sexual exploitation.

Research published in November 2015, meanwhile, showed that a flock of trained pigeons can spot cancer in images of biopsied tissue with 99% accuracy; that is comparable to what would be expected of a pathologist. At the time, one of the co-authors of the report suggested that the birds might be able to assess the quality of new imaging techniques or methods of processing and displaying images without forcing humans to spend hours or days doing detailed comparisons.

Although there are obvious cost efficiencies in recruiting pigeons instead of humans, I am sure most of us will not be too comfortable having a flock of pigeons as our pathologist or radiologist.

Many people would also argue more broadly that fiscal policy should not undermine people's health and wellbeing. Yet the past decade of austerity, with £16bn in cuts in core government funding for local authorities by this year and a continued emphasis on doing more with less, has led to resource-led practices that are far from the aspirations of the Children Act 1989 and of every child having the opportunity to achieve their potential.

Technology is never neutral and there are winners and losers in every change. Given the profound implications of AI and machine learning for CSC, it is essential such systems are accompanied by appropriate safeguards and processes that prevent and mitigate false positives and negatives and their adverse impact and repercussions. But in an environment of severe cost constraints, positive aspirations might not be matched with adequate funding to ensure effective prevention and adequate support for those negatively impacted by such technologies.

In spite of the recent ethics review's laudable aspirations, there is also a real risk that many of the applications of machine learning pursued to date in CSC may cement current practice challenges by hard-coding austerity and current thresholds into systems and the future of services.

The US constitution was written and ratified by middle-aged white men, and it took over 130 years for women to gain the right of suffrage and 176 years to recognise and outlaw discrimination based on race, sex, religion and national origin. Learning from history suggests we must be cautious about building children's social care's operating context into systems that are designed, developed and implemented by experts and programmers who may not represent the diversity of the people who will be most affected by such systems.

Dr Peter Buzzi (@MHChat) is the director of the Research and Management Consultancy Centre and the Safeguarding Research Institute. He is also the national research lead for the Principal Children and Families Social Worker (PCFSW) Network's online safeguarding research and practice development project.

The rest is here:
'Technology is never neutral': why we should remain wary of machine learning in children's social care - Communitycare.co.uk

Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens – TechCrunch

The worlds of artificial intelligence and cybersecurity have become deeply entwined in recent years, as organizations work to keep up with and ideally block increasingly sophisticated malicious hackers. Today, a startup that's built a deep learning solution that it claims can both identify and stop even viruses that have yet to be identified has raised a large round of funding from some big strategic partners.

Deep Instinct, which uses deep learning both to identify and stop known viruses and other hacking techniques, and to spot completely new approaches that have not been seen before, has raised $43 million in a Series C.

The funding is being led by Millennium New Horizons, with Unbound (a London-based investment firm founded by Shravin Mittal), LG and Nvidia all participating. The investment brings the total raised by Deep Instinct to $100 million, with HP and Samsung among its previous backers. The tech companies are all strategics, in that (as in the case of HP) they bundle and resell Deep Instinct's solutions, or use them directly in their own services.

The Israel-based company is not disclosing its valuation but, notably, it is already profitable.

Targeting as-yet unknown viruses is becoming a more important priority as cybercrime grows. CEO and founder Guy Caspi notes that more than 350,000 new machine-generated malware samples are currently created every day, with increasingly sophisticated evasion techniques such as zero-days and APTs (Advanced Persistent Threats). "Nearly two-thirds of enterprises have been compromised in the past year by new and unknown malware attacks originating at endpoints, representing a 20% increase from the previous year," he added. And zero-day attacks are now four times more likely to compromise organizations. Most cyber solutions on the market can't protect against these new types of attacks and have therefore shifted to a detect-response approach, he said, which by design means that they assume a breach will happen.

While there is already a large profusion of AI-based cybersecurity tools on the market today, Caspi notes that Deep Instinct takes a critically different approach because of its use of deep neural network algorithms, which essentially are set up to mimic how a human brain thinks.

"Deep Instinct is the first and currently the only company to apply end-to-end deep learning to cybersecurity," he said in an interview. In his view, this provides a more advanced form of threat protection than the common traditional machine learning solutions available in the market, which rely on feature extraction determined by humans; that means they are limited by the knowledge and experience of the security expert, and can only analyze a very small part of the available data (less than 2%, he says). As a result, traditional machine learning-based solutions and other forms of AI have low detection rates of new, unseen malware and generate high false-positive rates. There's been a growing body of research that supports this idea, although we've not seen many deep learning cybersecurity solutions emerge as a result (not yet, anyway).

He adds that deep learning is the only AI-based autonomous system that can learn from any raw data, as it's not limited by an expert's technological knowledge. In other words, it's not based just on what a human inputs into the algorithm, but on huge swathes of big data, sourced from servers, mobile devices and other endpoints, that are fed in and automatically read by the system.
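As a rough illustration of what end-to-end learning on raw data means in practice, the sketch below classifies fixed-length raw byte sequences with a small 1-D convolutional network, with no hand-crafted feature extraction step. It is a generic toy in PyTorch, not Deep Instinct's actual architecture, and the input here is random bytes standing in for real file contents.

```python
# End-to-end classification of raw bytes: embed byte values, convolve over
# positions, pool, and output a single "malicious" logit. Illustrative only.
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    def __init__(self, embed_dim=8):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)            # one vector per byte value
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                         # global max-pool over positions
        )
        self.head = nn.Linear(64, 1)                         # logit for "malicious"

    def forward(self, byte_ids):                             # byte_ids: (batch, seq_len) ints in [0, 255]
        x = self.embed(byte_ids).transpose(1, 2)             # -> (batch, embed_dim, seq_len)
        x = self.conv(x).squeeze(-1)                         # -> (batch, 64)
        return self.head(x).squeeze(-1)                      # -> (batch,) logits

model = RawByteClassifier()
fake_files = torch.randint(0, 256, (4, 4096))                # four random 4 KB byte sequences
print(torch.sigmoid(model(fake_files)))                      # untrained "malware" probabilities
```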

This also means that the system can in turn be used across a number of different endpoints. Many machine learning-based cybersecurity solutions, he notes, are geared at Windows environments. That is somewhat logical, given that Windows and Android account for the vast majority of attacks these days, but cross-OS attacks are now on the rise.

While Deep Instinct specializes in preventing first-seen, unknown cyberattacks like APTs and zero-day attacks, Caspi notes that in the past year there has been a rise in both the volume and the impact of cyberattacks covering other areas. In 2019, Deep Instinct saw an increase in spyware and ransomware, alongside a rise in the sophistication of the attacks being used, specifically more fileless attacks using scripts and PowerShell, "living off the land" attacks and the use of weaponized documents like Microsoft Office files and PDFs. These sit alongside big malware attacks like Emotet, Trickbot, New ServeHelper and Legion Loader.

Today the company sells services both directly and via partners (like HP), and it's mainly focused on enterprise users. But since there is very little in the way of technical implementation ("Our solution is mostly autonomous and all processes are automated [and] deep learning brain is handling most of the security," Caspi said), the longer-term plan is to build a version of the product that consumers could adopt, too.

With a large part of antivirus software often proving futile in protecting users against attacks these days, that could come as a welcome addition to the market, despite how crowded it already is.

"There is no shortage of cybersecurity software providers, yet no company aside from Deep Instinct has figured out how to apply deep learning to automate malware analysis," said Ray Cheng, partner at Millennium New Horizons, in a statement. "What excites us most about Deep Instinct is its proven ability to use its proprietary neural network to effectively detect viruses and malware no other software can catch. That genuine protection in an age of escalating threats, without the need for exorbitantly expensive or complicated systems, is a paradigm change."

See original here:
Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens - TechCrunch

How Machine Learning Will Reshape The Future Of Investment Management – Forbes India

The 2020 outlook for Asset Management re-affirms the impact of globalization and the outperformance of private equity. While the developed world's economy has sent mixed signals, all eyes are now on Asia, and especially India, to drive the next phase of growth. The goal is to provide investment solutions for its mix of young and senior populations. Its diversity (cultural, economic, regional and regulatory) will pose the next challenge.

The application of Data Science and Machine Learning has delivered value for portfolio managers through quick and uniform decision-making. Strategic Beta funds, which have consistently generated added value, rely heavily on the robustness of their portfolio creation models, which are rigorously data driven. Deploying machine learning algorithms helps assess the creditworthiness of firms and individuals for lending and borrowing. Data science and machine learning solutions eliminate human bias and calculation errors when evaluating investments within an optimal time frame.
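As a hedged illustration of what data-driven portfolio construction can look like in its simplest form (the tickers and factor scores below are invented placeholders, not a real strategic beta model or a recommendation), a factor tilt can be as simple as weighting stocks by a composite score instead of by market capitalisation:

```python
# A rank-based factor tilt: stocks with stronger (hypothetical) factor scores
# receive proportionally larger portfolio weights.
import numpy as np

tickers = ["AAA", "BBB", "CCC", "DDD", "EEE"]                 # placeholder names
factor_score = np.array([0.8, 0.1, 0.5, 0.9, 0.3])            # hypothetical value/quality composite

ranks = factor_score.argsort().argsort() + 1                  # 1 = weakest, 5 = strongest
weights = ranks / ranks.sum()                                 # rank-proportional weights

for ticker, weight in zip(tickers, weights):
    print(f"{ticker}: {weight:.1%}")
```

Real portfolio-creation models add constraints such as turnover, sector caps and risk targets, but the core idea of mapping data-driven scores to weights is the same.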

Investment management is justified as an industry only to the extent that it can demonstrate a capacity to add value through the design of dedicated investor-centric investment solutions, as opposed to one-size-fits-all manager-centric investment products. After several decades of relative inertia, the much needed move towards investment solutions has been greatly facilitated by a true industrial revolution taking place in investment management, triggered by profound paradigm changes with the emergence of novel approaches such as factor investing, liability-driven and goal-based investing, as well as sustainable investing. Data science is expected to play an increasing role in these transformations.

This trend poses a critical challenge to global academic institutions: educating a new breed of young professionals and equipping them with the right skills to address the situation and seize the fast-developing new job opportunities in this field. Continuous education provides the opportunity to meet the new challenges of this ever-changing world, especially in the investment industry.

As recently emphasized by our colleague Vijay Vaidyanathan, CEO of Optimal Asset Management, a former EDHEC Business School PhD student and online course instructor at EDHEC Business School, our financial well-being is second only to our physical well-being, and one of the key challenges we face is to enhance financial expertise. To achieve this, we cannot limit ourselves to the relatively small subset of the population who can afford to invest the significant time and expense of attending a formal, full-time degree programme on a university campus. Therefore, we must find ways to elevate the quality of professional financial education to ensure that all asset managers and asset owners are fully equipped to make intelligent and well-informed investment decisions.

Data science applied to asset management, and education in the field, is expected to affect not only investment professionals but also individuals. On this topic, we would like to share insights from Professor John Mulvey of Princeton University, who is also one of EDHEC's online course instructors. John believes that machine learning applied to investment management is a real opportunity to assist individuals with their financial affairs in an integrated manner. Most people are faced with long-term critical decisions about saving, spending, and investing to achieve a wide variety of goals.

These decisions are often made without much professional guidance (except for wealthier clients), and without much technical training. Current personalized advisors are reasonable initial steps. Much more can be done in this area with modern data science and decision-making tools. Plus, younger people are more willing to trust fully automated computational systems. This domain is one of the most relevant and significant areas of development for future investment management.

By Nilesh Gaikwad, EDHEC Business School country manager in India, and Professor Lionel Martellini, EDHEC-Risk Institute Director.

Follow this link:
How Machine Learning Will Reshape The Future Of Investment Management - Forbes India

Manchester Digital unveils 72% growth for digital businesses in the region – Education Technology

Three quarters of Greater Manchester's digital tech businesses have experienced significant growth in the last 12 months

New figures from Manchester Digital, the independent trade body for digital and tech businesses in Greater Manchester, have revealed that 72% of businesses in the region have experienced growth in the last year, up from 54% in 2018.

Despite such prosperous results, companies are still calling out for talent, with developer roles standing out as the most in-demand for the seventh consecutive year. The other most sought-after skills in the next three years include data science (15%), UX (15%), and AI and machine learning (11%).

In the race to acquire top talent, almost 25% of Manchester vacancies advertised in the last 12 months remained unfilled, largely due to a lack of suitable candidates and inflated salary demands.

Unveiled at Manchester Digital's annual Skills Festival last week, the Annual Skills Audit, which evaluates data from 250 digital and tech companies and employees across the region, also analysed the various professional pathways into the sector.

The majority (77%) of candidates entering the sector hold a degree of some sort; however, of the respondents who possessed a degree, almost a quarter claimed it was not relevant to tech, while a further 22% reported moving into the sector from another career.


On top of this, almost one in five respondents said they had self-taught or upskilled their way into the sector, a positive step towards boosting diversity in terms of both the people and the experience pools entering the sector.

"It's positive to see a higher number of businesses reporting growth this year, particularly from SMEs. While the political and economic landscape is by no means settled, it seems that businesses have strategies in place to help them navigate through this uncertainty," said Katie Gallagher, managing director of Manchester Digital.

"What's particularly interesting in this year's audit are the data sets around pathways into the tech sector," added Gallagher. "While a lot of people still do report having degrees, and we'd like to see more variation here in terms of more people taking up apprenticeships, work experience placements and so on, it's interesting to see that a fair percentage are retraining, self-training or moving to the sector with a degree that's not directly related. Only by creating a talent pool from a wide and diverse range of people and backgrounds can we ensure that the sector continues to grow and thrive sustainably."

When asked what they liked about working for their current employer, employees across the region mentioned flexible work as the number one perk they value (40%). Career progression was also a crucial factor to those aged 18-21, with these respondents also identifying brand prestige as a reason to choose a particular employer.

"For the first time this year, we've expanded the Skills Audit to include opinions from employees as well as businesses. With the battle for talent still one of the biggest challenges employers face, we're hoping that this part of the data set provides some valuable insights into why people choose employers and what they value most, and consequently helps businesses set successful recruitment and retention strategies," Gallagher concluded.

Read more from the original source:
Manchester Digital unveils 72% growth for digital businesses in the region - Education Technology