Archive for the ‘Artificial Intelligence’ Category

Who will really dominate artificial intelligence capabilities in the future? – Tech Wire Asia

The US is far ahead of everyone else but China is keen on taking the lead, soon. Source: Shutterstock

IN THE digital age, countries around the world are racing to excel in artificial intelligence (AI) technology.

The phenomenon is no surprise considering that AI is undeniably a powerful technology with wide-ranging enterprise uses across industries, from medical algorithms to autonomous vehicles.

For a while now, the US has dominated the global race in AI development and capabilities, but according to the Global AI Index, China looks set to dominate the field in the near future.

As the first runner-up, China is expected to overtake the US in about five to 10 years, based on the country's impressive growth record.

Based on seven key indicators (research, infrastructure, talent, development, operating environment, commercial ventures, and government strategy) measured over the course of 12 months, China is promoting growth unlike any other country.

Although the US is in the lead by a wide margin, China has already moved to establish greater influence through its Next Generation Artificial Intelligence Development Plan, launched in 2017.

Not only that, China alone has reportedly promised to spend up to US$22 billion, a mammoth figure compared with global government AI spending, estimated at US$35 billion over the next decade or so.

Nevertheless, China must recognize some areas that it needs to improve in order to successfully lead with AI.

Scoring 58.3 percent on the index, China lags in talent, commercial ventures, research quality, and private funding.

However, the country has still shown significant growth in various other areas, especially in contributions of AI code. According to GitHub, the world's biggest open-source development platform, Chinese developers have made 13,000 AI code contributions to date.

This is a big jump from the initial count of 150 in 2015. The US, however, is still in the lead with a record 42,000 contributions.

The need to dominate the AI market seems to be the motivation for countries around the world as the technology is a defining asset that can shift the dynamics of the global economy.

Other prominent countries to watch out for are the UK, Canada, and Germany, ranking 3rd, 4th, and 5th, respectively.

Another Asian country making its mark, in 7th place, is Singapore, which posts a high score in talent but has room for improvement in its operating environment.

Despite the quick progress, experts hope that all countries looking to excel in AI will do so with ethical considerations and strategic leadership in mind.

More here:

Who will really dominate artificial intelligence capabilities in the future? - Tech Wire Asia

Fels backs calls to use artificial intelligence as wage-theft detector – The Age

"The amount of underpayment occurring now is so large that there is an effect on wages generally and on making life difficult for law-abiding employers."

Senator Sheldon said artificial intelligence could be used to detect discrepancies in payment data held by the Australian Taxation Office on employers in industries such as retail, hospitality, agriculture and construction.

"You could do it for wages and superannuation, with an algorithm used as a first flag for human intervention," he said.
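The idea of an algorithmic "first flag" can be illustrated with a toy sketch. Everything below is hypothetical (the employer records, field names, and threshold are invented for illustration, not drawn from the ATO or any real proposal): it simply flags employers whose reported hourly pay falls well below their industry's average, leaving the final judgment to a human reviewer.

```python
# Hypothetical records: (employer, industry, total_wages_paid, total_hours_reported)
records = [
    ("Cafe A", "hospitality", 30000.0, 1600),
    ("Cafe B", "hospitality", 52000.0, 1600),
    ("Shop C", "retail", 61000.0, 2000),
    ("Shop D", "retail", 33000.0, 2000),
]

def flag_outliers(records, threshold=0.8):
    """Flag employers paying below `threshold` of their industry's mean hourly rate."""
    # Group implied hourly rates by industry.
    by_industry = {}
    for employer, industry, wages, hours in records:
        by_industry.setdefault(industry, []).append(wages / hours)
    means = {ind: sum(rates) / len(rates) for ind, rates in by_industry.items()}
    flagged = []
    for employer, industry, wages, hours in records:
        if wages / hours < threshold * means[industry]:
            flagged.append(employer)  # a first flag only; a human reviews next
    return flagged

print(flag_outliers(records))  # → ['Cafe A', 'Shop D']
```

A production system would of course use far richer features and models than an industry mean, but the shape is the same: the algorithm narrows the field, and people investigate what it surfaces.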

The problems of underpayment are systemic and not readily resolvable just by strong law enforcement - even though that's vital.

Alistair Muir, chief executive of Sydney-based consultancy Vanteum, said it was possible to "train artificial intelligence algorithms across multiple data sets to detect wage theft as described by Senator Sheldon, without ever needing to move, un-encrypt or disclose the data itself".

Melbourne University associate professor of computing Vanessa Teague said a "simple computer program" could be designed to detect evidence of wage underpayment using the rules laid out in the award system, but that any such project should safeguard workers' privacy by requiring informed consent.

Industrial Relations Minister Christian Porter did not rule out introducing data matching as part of his wage theft crackdown and said workplace exploitation "will not be tolerated by this government".

Mr Porter said the government accepted "in principle" the recommendations of the migrant worker taskforce which included taking a "whole of government" approach and giving the Fair Work Ombudsman expanded information gathering powers.

The taskforce report said inter-governmental information sharing was "an important avenue" for identifying wage underpayment and could be used to "support successful prosecutions".

In the latest case of alleged wage underpayment in the hospitality industry, the company behind the Crown casino eatery fronted by celebrity chef Heston Blumenthal, Dinner by Heston, this week applied to be wound up after failing to comply with a statutory notice requiring it to back pay staff for unpaid overtime.

It follows revelations of underpayments totalling hundreds of millions of dollars by employers including restaurateur George Calombaris' Made Establishment, Qantas, Coles, Commonwealth Bank, Bunnings, Super Retail Group and the Australian Broadcasting Corporation.

Professional services firm PwC has estimated that employers are underpaying Australian workers by $1.4 billion a year, affecting 13 per cent of the nation's workforce.

AI Group chief executive Innes Willox said the employer peak body did not "see a need" for increased governmental data collection powers.

Australian Retail Association president Russell Zimmerman said retailers were not inherently opposed to data matching as employers who paid workers correctly had "nothing to fear" but was unsure how effective or accurate the approach would be.

"We don't support wage theft," Mr Zimmerman said.

He blamed the significant underpayments self-reported in recent months on difficulties navigating the "complex" retail award.

Senator Sheldon rejected this argument, saying the system was "only complicated if you don't want to pay".

"You get paid for eight hours, then after that you get overtime and you get weekend penalty rates," he said.
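The rule Senator Sheldon describes (ordinary pay for the first eight hours, overtime after that, and penalty rates on weekends) is simple enough to encode directly, which is the point of Professor Teague's "simple computer program". A minimal sketch, with rates and multipliers that are hypothetical rather than taken from any actual award:

```python
def owed_for_shift(hours, base_rate, weekend=False,
                   overtime_mult=1.5, weekend_mult=1.25):
    """Return the minimum pay owed for a single shift under the toy rules:
    ordinary rate for the first 8 hours, overtime after that, and a
    penalty loading applied on weekends."""
    rate = base_rate * (weekend_mult if weekend else 1.0)
    ordinary = min(hours, 8) * rate
    overtime = max(hours - 8, 0) * rate * overtime_mult
    return ordinary + overtime

# A 10-hour weekday shift at a $25/hour base rate:
paid = 250.0                    # what the payslip shows
owed = owed_for_shift(10, 25.0)  # 8*25 + 2*25*1.5 = 275.0
if paid < owed:
    print(f"possible underpayment: owed {owed}, paid {paid}")
```

Real awards layer on many more conditions (allowances, breaks, casual loadings), which is where the "complex award" defence comes from, but each condition is still a rule a program can check.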

Australian Council of Trade Unions assistant secretary Liam O'Brien said the workplace law system was "failing workers who are suffering from systemic wage theft".

The minister, who is consulting unions and business leaders on the detail of his wage theft bill, including what penalty should apply if employers fail to prevent accidental underpayment, said the draft legislation should be released "early in the new year".

Dana is health and industrial relations reporter for The Sydney Morning Herald and The Age.

Go here to see the original:

Fels backs calls to use artificial intelligence as wage-theft detector - The Age

China should step up regulation of artificial intelligence in finance, think tank says – msnNOW

A Chinese flag flutters in front of the Great Hall of the People in Beijing, China, May 27, 2019. REUTERS/Jason Lee

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance, referring to banking, securities and other financial products that employ technologies such as facial recognition and big-data analysis to improve sales and investment returns, has largely lagged behind the sector's development, a report from the China Finance 40 Forum showed.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risks and act quickly as problems arise.

(Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing)

See the rest here:

China should step up regulation of artificial intelligence in finance, think tank says - msnNOW

In 2020, let's stop AI ethics-washing and actually do something – MIT Technology Review

Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018, from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.

A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?


But talk is just that; it's not enough. For all the lip service paid to these issues, many organizations' AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We're falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers: content moderators, data labelers, and transcribers, who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field's runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve.

So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn't lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.

AI, in other words, is meant to help humanity prosper. Let's not forget.


Visit link:

In 2020, let's stop AI ethics-washing and actually do something - MIT Technology Review

AI-based health app: Putting patients first – ETHealthworld.com

Doxtro's AI mission is to deliver personalised healthcare better, faster and more economically for every individual. It has been designed around a doctor's brain to understand and recognize the unique way that humans express their symptoms.

How has Doxtro brought a change in Artificial Intelligence (AI) in the field of medicine?
Our AI feature asks questions of the user so that doctors can better understand patients' health concerns. The feature provides valuable insights to the doctor through inputs gathered from patients before a consultation. The primary insights are based on how patients express their symptoms, their medical history and current symptoms, and machine learning on demography-based health issues; the AI does not prescribe medicines or offer medical advice.

How will this app help a patient who is unable to read or write?
The app's user flow is designed so that patients can connect to a doctor through a voice call, with basic chat ability by simply typing their health concern in a free text box. Users can continue to chat or choose to connect through a voice call. The languages supported at the moment are Hindi and English; with basic knowledge of either, a user can operate the app through voice mode and consult a doctor.

Is there a feedback system in your app?
Yes, we give the highest priority to feedback from users and doctors alike. Users can rate and review the doctor in the app itself once the consultation is completed. We also follow a proactive feedback process: our customer engagement executives collate regular user feedback, document it, and pass it to the respective functional teams internally. This is done because, in general, not all users will come forward to write a review, whether the experience was good or bad. We take this feedback seriously to improve our quality of care.

How frequently can a patient contact the doctor through your app?
There are no restrictions on access to doctors in the app. Users can also add their family members, facilitate consultations with doctors, and store their respective health records in the app. Currently, we offer 12 specialisations: general physicians, dermatologists, cardiologists, gynaecologists, paediatricians, sexologists, diabetologists, psychologists, psychiatrists, nutritionists, dentists and gastroenterologists.

Users may have various health issues and may need to connect with different specialists at different times. Based on their needs, they can contact any available specialist any number of times. After the consultation, a window stays open for 48 hours for free follow-up questions with the same doctor, so users can clarify any doubts.

How is Doxtro different from other healthcare apps that use AI?
What distinguishes our technology is that it has been designed around a doctor's brain to understand and recognize the unique way that humans express their symptoms. Doxtro AI plays two major roles in the system: the data aspect, which drives the ability to do self-diagnosis, and the machine learning (ML) aspect, which assists with triage. Doxtro puts patients at the centre of care; AI-assisted conversations help the patient describe symptoms, understand their condition, and connect with the right specialist.

Doxtro AI asks smart questions about patients' symptoms while also considering their age, gender, and medical history. The AI in our app helps users understand their health issues and choose the right doctor. All this is accomplished with the ML and natural language processing technologies that we use.
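As a rough illustration of how free-text symptoms might be matched to a specialisation, here is a minimal keyword-overlap sketch. This is not Doxtro's actual engine; the keyword lists, specialisations, and fallback are illustrative assumptions only.

```python
# Hypothetical keyword lists per specialisation (illustrative only).
SPECIALIST_KEYWORDS = {
    "dermatologist": {"rash", "itchy", "acne", "skin"},
    "cardiologist": {"chest", "palpitations", "breathless"},
    "gastroenterologist": {"stomach", "nausea", "heartburn"},
}

def suggest_specialist(free_text):
    """Score each specialisation by keyword overlap with the user's text;
    fall back to a general physician when nothing matches."""
    words = set(free_text.lower().split())
    scores = {spec: len(words & kws) for spec, kws in SPECIALIST_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general physician"

print(suggest_specialist("I have an itchy rash on my skin"))  # → dermatologist
```

A real triage engine would use trained language models rather than fixed keyword sets, but the input and output are the same shape: free text in, a ranked specialisation out, with a safe default when confidence is low.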

How do doctors benefit from this app?
Our AI engine gives physicians insights to better understand patients' health issues, saving valuable time and ensuring doctors focus on doctoring. Doxtro AI puts together a patient's response history so that the doctor has context. Alongside this, augmented diagnostics help translate symptoms into potential conditions based on the patient's conversation with the AI, saving doctors time and supporting a better diagnosis of the patient's condition.

This helps doctors reach more people in need, especially given the shortage of qualified doctors in India. Our app enhances their practice with smart tools like AI, an excellent workflow, and ease of use.

How long has the app been available, and what exactly is your user base?
The Doxtro app has been in the market for more than 18 months, and we have a registered user base of more than 200,000 (2 lakh) as of now.

What kind of patterns have you noticed in patients?
We see a lot of people adapting to online consultation, especially those who want qualified, verified doctors. More people are turning to proactive wellness rather than waiting for illness. Doxtro's main focus is wellness and having qualified, verified doctors on board, so we see an increasing trend of people using the Doxtro mobile app.

As per our Security and Data Privacy policy, we do not have access to any patient's data. All voice and chat interactions are fully encrypted, and the entire application is hosted in the cloud. Hence, we are not able to identify any such patterns.

View original post here:

AI-based health app: Putting patients first - ETHealthworld.com