Archive for the ‘Artificial Intelligence’ Category

Engineering Professor Helps Head Up Successful ‘Frontier’ Artificial … – University of Arkansas Newswire

Courtesy of Khoa Luu

One of the presentations at CVPR 2023

The annual conference on Computer Vision and Pattern Recognition, CVPR 2023, held in Vancouver, Canada, attracted global attention to various uses of artificial intelligence, drawing over 1,000 attendees from more than 75 countries.

Khoa Luu, assistant professor in the Department of Electrical Engineering and Computer Science and an area chair of the main conference, said, "This is one of the frontier AI conferences. This is why it attracts a lot of tech companies like Google, Amazon, Facebook, Microsoft, et cetera. These companies see emerging technologies and next-generation AI products within this conference, and it attracts a lot of researchers and scientists."

The conference received over 9,000 submissions, with 2,359 papers ultimately accepted for presentation. The U of A had a notable presence at CVPR 2023. Its doctoral students showcased their work at the main conference and in several workshops.

Naga VS Raviteja Chappa, a Ph.D. student, took third place in the best-paper competition at the ninth International Workshop on Computer Vision in Sports. Xuan-Bac Nguyen, also a Ph.D. student, received the prestigious best reviewer award at the main conference. Both students also served as reviewers for other submissions to the main conference.

Luu expressed his commitment to encouraging student participation in conferences.

"I do my best to secure grants and encourage graduate students to attend this conference," Luu said. "I do this so they can learn how to become professional researchers in the future. This conference gives them the opportunity to meet corporate attendees, communicate and interact with professionals in the industry. Our students often just stay in the computer lab creating code during their academic careers. They need to reach out and, you know, see and feel the beautiful insights and desires within the industry. It is more than just coding or being a robot machine. They need to see how things are applied and practiced. They need to learn how to present in a professional way and how to communicate with other people. I think this is a critical skill for a graduate student. They can learn this from the conference, and they must learn that they cannot just stay in the lab."

To facilitate student participation, various grants and sponsorships played a pivotal role. Luu expressed his gratitude to the sponsors, including the U of A, for supporting the students' research and enabling their enriching experiences at CVPR 2023.

Not only did Luu serve as an area chair for the main conference; he was also a co-organizer of the CVPR 2023 Precognition Workshop, in collaboration with Aurora Innovation Inc., Google Research, Carnegie Mellon University, the University of Houston and HKUST (Guangzhou).

Luu also extended special thanks to everyone who collaborated on the event and to its sponsors.

Computer Vision and Pattern Recognition 2024 will be held June 17-21, 2024, at the Seattle Convention Center.

Read the rest here:
Engineering Professor Helps Head Up Successful 'Frontier' Artificial ... - University of Arkansas Newswire

Scientific discovery in the age of artificial intelligence – Nature.com

LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015). This survey summarizes key elements of deep learning and its development in speech recognition, computer vision and natural language processing.

de Regt, H. W. Understanding, values, and the aims of science. Phil. Sci. 87, 921–932 (2020).

Pickstone, J. V. Ways of Knowing: A New History of Science, Technology, and Medicine (Univ. Chicago Press, 2001).

Han, J. et al. Deep potential: a general representation of a many-body potential energy surface. Commun. Comput. Phys. 23, 629–639 (2018). This paper introduced a deep neural network architecture that learns the potential energy surface of many-body systems while respecting the underlying symmetries of the system by incorporating group theory.

Akiyama, K. et al. First M87 Event Horizon Telescope results. IV. Imaging the central supermassive black hole. Astrophys. J. Lett. 875, L4 (2019).

Wagner, A. Z. Constructions in combinatorics via neural networks. Preprint at https://arxiv.org/abs/2104.14516 (2021).

Coley, C. W. et al. A robotic platform for flow synthesis of organic compounds informed by AI planning. Science 365, eaax1566 (2019).

Bommasani, R. et al. On the opportunities and risks of foundation models. Preprint at https://arxiv.org/abs/2108.07258 (2021).

Davies, A. et al. Advancing mathematics by guiding human intuition with AI. Nature 600, 70–74 (2021). This paper explores how AI can aid the development of pure mathematics by guiding mathematical intuition.

Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). This study was the first to demonstrate the ability to predict protein folding structures using AI methods with a high degree of accuracy, achieving results that are at or near the experimental resolution. This accomplishment is particularly noteworthy, as predicting protein folding has been a grand challenge in the field of molecular biology for over 50 years.

Stokes, J. M. et al. A deep learning approach to antibiotic discovery. Cell 180, 688–702 (2020).

Bohacek, R. S., McMartin, C. & Guida, W. C. The art and practice of structure-based drug design: a molecular modeling perspective. Med. Res. Rev. 16, 3–50 (1996).

Bileschi, M. L. et al. Using deep learning to annotate the protein universe. Nat. Biotechnol. 40, 932–937 (2022).

Bellemare, M. G. et al. Autonomous navigation of stratospheric balloons using reinforcement learning. Nature 588, 77–82 (2020). This paper describes a reinforcement-learning algorithm for navigating a super-pressure balloon in the stratosphere, making real-time decisions in the changing environment.

Tshitoyan, V. et al. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature 571, 95–98 (2019).

Zhang, L. et al. Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. Phys. Rev. Lett. 120, 143001 (2018).

Deiana, A. M. et al. Applications and techniques for fast machine learning in science. Front. Big Data 5, 787421 (2022).

Karagiorgi, G. et al. Machine learning in the search for new fundamental physics. Nat. Rev. Phys. 4, 399–412 (2022).

Zhou, C. & Paffenroth, R. C. Anomaly detection with robust deep autoencoders. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 665–674 (2017).

Hinton, G. E. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006).

Kasieczka, G. et al. The LHC Olympics 2020: a community challenge for anomaly detection in high energy physics. Rep. Prog. Phys. 84, 124201 (2021).

Govorkova, E. et al. Autoencoders on field-programmable gate arrays for real-time, unsupervised new physics detection at 40 MHz at the Large Hadron Collider. Nat. Mach. Intell. 4, 154–161 (2022).

Chamberland, M. et al. Detecting microstructural deviations in individuals with deep diffusion MRI tractometry. Nat. Comput. Sci. 1, 598–606 (2021).

Rafique, M. et al. Delegated regressor, a robust approach for automated anomaly detection in the soil radon time series data. Sci. Rep. 10, 3004 (2020).

Pastore, V. P. et al. Annotation-free learning of plankton for classification and anomaly detection. Sci. Rep. 10, 12142 (2020).

Naul, B. et al. A recurrent neural network for classification of unevenly sampled variable stars. Nat. Astron. 2, 151–155 (2018).

Lee, D.-H. et al. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning (2013).

Zhou, D. et al. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 321–328 (2003).

Radivojac, P. et al. A large-scale evaluation of computational protein function prediction. Nat. Methods 10, 221–227 (2013).

Barkas, N. et al. Joint analysis of heterogeneous single-cell RNA-seq dataset collections. Nat. Methods 16, 695–698 (2019).

Tran, K. & Ulissi, Z. W. Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution. Nat. Catal. 1, 696–703 (2018).

Jablonka, K. M. et al. Bias free multiobjective active learning for materials design and discovery. Nat. Commun. 12, 2312 (2021).

Roussel, R. et al. Turn-key constrained parameter space exploration for particle accelerators using Bayesian active learning. Nat. Commun. 12, 5612 (2021).

Ratner, A. J. et al. Data programming: creating large training sets, quickly. In Advances in Neural Information Processing Systems 29, 3567–3575 (2016).

Ratner, A. et al. Snorkel: rapid training data creation with weak supervision. In International Conference on Very Large Data Bases 11, 269–282 (2017). This paper presents a weakly supervised AI system designed to annotate massive amounts of data using labeling functions.

Butter, A. et al. GANplifying event samples. SciPost Phys. 10, 139 (2021).

Brown, T. et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33, 1877–1901 (2020).

Ramesh, A. et al. Zero-shot text-to-image generation. In International Conference on Machine Learning 139, 8821–8831 (2021).

Littman, M. L. Reinforcement learning improves behaviour from evaluative feedback. Nature 521, 445–451 (2015).

Cubuk, E. D. et al. AutoAugment: learning augmentation strategies from data. In IEEE Conference on Computer Vision and Pattern Recognition 113–123 (2019).

Reed, C. J. et al. SelfAugment: automatic augmentation policies for self-supervised learning. In IEEE Conference on Computer Vision and Pattern Recognition 2674–2683 (2021).

ATLAS Collaboration et al. Deep generative models for fast photon shower simulation in ATLAS. Preprint at https://arxiv.org/abs/2210.06204 (2022).

Mahmood, F. et al. Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE Trans. Med. Imaging 39, 3257–3267 (2019).

Teixeira, B. et al. Generating synthetic X-ray images of a person from the surface geometry. In IEEE Conference on Computer Vision and Pattern Recognition 9059–9067 (2018).

Lee, D., Moon, W.-J. & Ye, J. C. Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks. Nat. Mach. Intell. 2, 34–42 (2020).

Kench, S. & Cooper, S. J. Generating three-dimensional structures from a two-dimensional slice with generative adversarial network-based dimensionality expansion. Nat. Mach. Intell. 3, 299–305 (2021).

Wan, C. & Jones, D. T. Protein function prediction is improved by creating synthetic feature samples with generative adversarial networks. Nat. Mach. Intell. 2, 540–550 (2020).

Repecka, D. et al. Expanding functional protein sequence spaces using generative adversarial networks. Nat. Mach. Intell. 3, 324–333 (2021).

Marouf, M. et al. Realistic in silico generation and augmentation of single-cell RNA-seq data using generative adversarial networks. Nat. Commun. 11, 166 (2020).

Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015). This survey provides an introduction to probabilistic machine learning, which involves the representation and manipulation of uncertainty in models and predictions, playing a central role in scientific data analysis.

Cogan, J. et al. Jet-images: computer vision inspired techniques for jet tagging. J. High Energy Phys. 2015, 118 (2015).

Zhao, W. et al. Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy. Nat. Biotechnol. 40, 606–617 (2022).

Brbić, M. et al. MARS: discovering novel cell types across heterogeneous single-cell experiments. Nat. Methods 17, 1200–1206 (2020).

Qiao, C. et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194–202 (2021).

Andreassen, A. et al. OmniFold: a method to simultaneously unfold all observables. Phys. Rev. Lett. 124, 182001 (2020).

Bergenstråhle, L. et al. Super-resolved spatial transcriptomics by deep data fusion. Nat. Biotechnol. 40, 476–479 (2021).

Vincent, P. et al. Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning 1096–1103 (2008).

Kingma, D. P. & Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations (2014).

Eraslan, G. et al. Single-cell RNA-seq denoising using a deep count autoencoder. Nat. Commun. 10, 390 (2019).

Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

Olshausen, B. A. & Field, D. J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996).

View original post here:
Scientific discovery in the age of artificial intelligence - Nature.com

The Role of Artificial Intelligence in Baseball – Fagen wasanni

In a recent Blue Jays baseball game, I observed a concerning trend: multiple obvious balls were being called as strikes. This had a detrimental effect on the game, as confused batters began swinging at pitches clearly outside the strike zone, racking up additional strikes and quickly striking out.

This issue highlights the need for accurate ball-and-strike determination, which goes to the crux of the game. Fortunately, artificial intelligence (AI) has the potential to fulfill this role. Surprisingly, the technology already exists but is currently unused in this context in baseball.

By harnessing the power of AI, the game could benefit from precise and unbiased assessments of whether a pitch is a ball or a strike. AI algorithms could analyze the trajectory and location of each pitch, taking into account the individual batter's strike zone. This would eliminate the subjective human element and ensure consistency in the game.
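As a rough illustration of the kind of rule such a system would apply, here is a minimal sketch in Python. It assumes we already have the ball's position as it crosses the front plane of home plate, plus the batter's personal strike-zone limits; every name and number here is illustrative, not the API of any real pitch-tracking system.

```python
import math

# Half of the regulation 17-inch-wide home plate, in metres (an assumption
# used only for this sketch).
PLATE_HALF_WIDTH_M = 0.216

def call_pitch(cross_x_m, cross_z_m, zone_bottom_m, zone_top_m):
    """Call a pitch from its position at the front plane of the plate.

    cross_x_m: horizontal offset from the plate centre at the crossing plane
    cross_z_m: height of the ball at the crossing plane
    zone_bottom_m / zone_top_m: this batter's vertical strike-zone limits
    """
    in_width = abs(cross_x_m) <= PLATE_HALF_WIDTH_M
    in_height = zone_bottom_m <= cross_z_m <= zone_top_m
    return "strike" if (in_width and in_height) else "ball"

# A pitch down the middle at belt height:
print(call_pitch(0.0, 0.9, zone_bottom_m=0.5, zone_top_m=1.1))  # strike
# A pitch well off the outside edge:
print(call_pitch(0.4, 0.9, zone_bottom_m=0.5, zone_top_m=1.1))  # ball
```

In practice the hard part is upstream of this function: estimating the crossing position from camera or radar trajectory data, and setting the per-batter zone limits, which is where the AI would actually earn its keep.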

Moreover, AI could enhance other aspects of baseball as well. It could be used to accurately determine if a runner is safe or out, reducing the uncertainty and controversy surrounding close calls. Additionally, AI could aid in the analysis of player performance, providing valuable insights for coaches and strategists.

Implementing AI technology in baseball would require a collaborative effort from baseball organizations, technology developers, and governing bodies. Embracing AI could revolutionize the sport, making it fairer and more objective.

In conclusion, the use of artificial intelligence in baseball has the potential to address the issue of incorrect calls and improve the overall fairness and accuracy of the game. With the existing technology just waiting to be utilized, it is time for baseball to tap into the power of AI and embrace its benefits.

Read the original post:
The Role of Artificial Intelligence in Baseball - Fagen wasanni

The Role of Artificial Intelligence in Everyday Life and Business – Fagen wasanni

Artificial Intelligence (AI) is a rapidly evolving technology that has the potential to disrupt and enhance various industries. It is changing the way we work, learn, and operate businesses. Mobile applications are utilizing AI to intelligently search for solutions and provide expanded possibilities.

Simply put, AI combines computer science and robust datasets to enable problem-solving. It includes sub-fields such as machine learning and deep learning. AI has already been applied in various ways, from voice or language recognition in mobile apps to facial recognition software used by law enforcement agencies.
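To make "combining computer science and robust datasets" concrete, here is a minimal, self-contained sketch of the machine-learning sub-field mentioned above: a nearest-neighbour classifier that labels a new data point from previously seen examples. The dataset and labels are invented purely for illustration.

```python
import math

# Toy labelled dataset: (height_cm, weight_kg) -> label (invented numbers).
examples = [
    ((150, 50), "small"),
    ((160, 60), "small"),
    ((180, 90), "large"),
    ((190, 100), "large"),
]

def nearest_neighbor(point):
    """Label a new point with the label of its closest training example."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    _, label = min(examples, key=lambda ex: dist(ex[0], point))
    return label

print(nearest_neighbor((155, 55)))  # small
print(nearest_neighbor((185, 95)))  # large
```

Nothing here was hand-coded as a rule; the "knowledge" lives entirely in the dataset, which is the essential shift that machine learning (and, at much larger scale, deep learning) represents.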

According to Nicole Alexander, the Head of Global Marketing at Meta and a Professor of Marketing and Technology at NYU, technology permeates every element of our society. It is AI technology that is becoming increasingly integrated into our everyday lives. For example, mobile apps can predict how individuals or their children may look in the future, and tasks previously performed by humans are now computerized with AI.

While advocating for the caution and protections required in this revolutionary era of AI, Alexander emphasizes the importance of governance, responsibility, and diverse training sets to prevent harm and maximize AI's benefits. As an ecosystem develops around AI and rules are worked out, businesses should explore what AI can do for them. Companies small and large need to develop responsible and ethical AI systems that take marginalized communities into account.

AI is not only a playground for tech entrepreneurs but also a growing conversation in government. It has positive implications for healthcare systems, urban planning, and the needs of communities. Alexander prepares graduate students to understand the positive effects of AI and urges executives to embrace their role and responsibility in decision-making. By understanding the underlying effects of AI, leaders can develop AI systems that align with their organization's values and communicate effectively with new employees.

In conclusion, AI is transforming various aspects of our lives and businesses. As it continues to evolve, it is crucial to prioritize responsible and ethical AI development, while considering marginalized communities and the impact on society as a whole.

Visit link:
The Role of Artificial Intelligence in Everyday Life and Business - Fagen wasanni

Which tasks shouldn’t we delegate to artificial Intelligence? | theHRD – The HR Director Magazine

Contributor: Sergio Vasquez Bronfman, Associate Professor of Digital Transformation - ESCP Business School. | Published: 7 August 2023

Since the early years of artificial intelligence (AI), several examples have shown the risks of an inappropriate use of it.

First, there is ELIZA, the first conversational robot, developed by Professor Joseph Weizenbaum at MIT in the late 1960s. This artificial intelligence program simulated a session with a psychiatrist. Weizenbaum introduced the program to some psychiatrists and psychoanalysts in order to show that a machine could not really imitate a human being. He was surprised to see many of them delighted by ELIZA working as if it were a real psychiatrist, and even promoting its use to deliver psychiatry and psychoanalysis on a large scale and at low cost. Weizenbaum reacted by challenging the psychiatrists and psychoanalysts: "How can you imagine for a moment delegating something as intimate as a session with one of you to a machine?"

A second example is the Soviet false nuclear alarm of September 1983, when their computerized missile warning system reported four nuclear missile launches from the USA. As the number of missiles detected was very small, the Soviet officer on duty at the time disobeyed procedure and told his superiors that he thought it was a false alarm (normally, a nuclear attack would involve dozens or even hundreds of nuclear missiles). Fortunately, his advice was followed, preventing a Soviet retaliation that could have been the start of a nuclear war between the Communist countries and the free world. It was later established that the false alarm had been created by a misinterpretation of the data by the Soviet artificial intelligence software.

Finally, we can refer to the case of Eric Loomis, a repeat offender with a criminal record, sentenced to six years in prison in the US state of Wisconsin; the Wisconsin Supreme Court upheld the sentence in 2016. The conviction was based at least in part on the recommendation of an AI-based software program called COMPAS, which is marketed and sold to the courts. The program is one incarnation of a new trend in artificial intelligence: tools that aim to help judges make better decisions. Loomis claimed that his right to a fair trial had been violated because neither he nor his lawyers had been able to examine or challenge the algorithm behind the recommendation.

These examples (and many others) have given rise, since at least the 1970s, to important political and ethical debates about which tasks we should delegate to AI and which we should not, even when delegation is technologically possible. Already important then, these issues have returned even more strongly with the new wave of AI based on neural networks and deep learning, which has produced astonishing results, the latest being ChatGPT and other generative AI products. There are essentially two main approaches: on the one hand, the ethics that should be built into AI programs themselves; on the other, the ethics of the use of artificial intelligence, i.e. which tasks may legitimately be delegated to it.

As for the first approach, several examples show that AI systems can produce biased results because the data on which they are trained are biased. It might seem that correcting these biases would be enough for AI to work properly. But the problem is much more complex than that, because data do not capture everything about most real problems. Data are a proxy for a reality that is usually far more complex; in particular, data cannot capture the current and future context. The limitations of teaching an algorithm to understand right and wrong should warn us against overconfidence in our ability to train machines to behave ethically. We can go even further and say that machines, because they are machines, will never behave ethically, because they cannot imagine what a good life would be or what it would take to live it. They will never be able to behave morally per se because they cannot distinguish between good and evil.

In a seminal book, Computer Power and Human Reason, Joseph Weizenbaum poses an essential question: are there ideas that will never be understood by a machine because they are related to goals that are inappropriate for machines? This question is essential because it goes to the core of whether there is a fundamental difference between human beings and machines. Weizenbaum argues that the comprehension of humans and that of machines are of a different nature. Human comprehension rests on having a brain, but also a body and a nervous system, and on being social animals, something a machine will never be (even if social robotics is undergoing significant development nowadays, something Weizenbaum anticipated nearly 50 years ago). The basis on which humans make decisions is totally different from that of AI. The key point is not whether computers will be able to make decisions on justice, or high-level political and military decisions, because they probably will. The point is that computers should not be entrusted with these tasks, because their decisions would necessarily be made on a basis that no human being could accept, i.e. on calculation alone. These issues therefore cannot be addressed by questions that begin with "Can we?" The limits we must place on the use of computers can only be stated in terms of "Should we?"

The fundamental ethical issue of AI thus seems to us to be the transfer of responsibility from the human being to the machine ("I didn't kill her, it was the autonomous car!", "I didn't press the nuclear button, it was the artificial intelligence!"). Even if in the European Union the GDPR (General Data Protection Regulation) prevents decisions about humans from being made solely by a computer, we know how things go in justice administrations and HR departments: people are always overwhelmed, and they will not take the time to question the advice given by AI ("Nothing personal, Bob; we just asked the AI and it said that you should be fired. But we made the decision!"). The facts described and analysed here show that since we currently don't know how to make computers wise, we should not delegate to them tasks that require wisdom. Rather than trying to teach algorithms to behave ethically, the real question is: who is responsible here?

Visit link:
Which tasks shouldn't we delegate to artificial Intelligence? | theHRD - The HR Director Magazine