Archive for the ‘Machine Learning’ Category

Humans in the Loop: AI & Machine Learning in the Bloomberg Terminal – Yahoo Finance

Originally published on bloomberg.com

NORTHAMPTON, MA / ACCESSWIRE / May 12, 2023 / The Bloomberg Terminal provides access to more than 35 million financial instruments across all asset classes. That's a lot of data, and to make it useful, AI and machine learning (ML) are playing an increasingly central role in the Terminal's ongoing evolution.

Machine learning is about scouring data at speed and scale that is far beyond what human analysts can do. Then, the patterns or anomalies that are discovered can be used to derive powerful insights and guide the automation of all kinds of arduous or tedious tasks that humans used to have to perform manually.

While AI continues to fall short of human intelligence in many applications, there are areas where it vastly outshines the performance of human agents. Machines can identify trends and patterns hidden across millions of documents, and this ability improves over time. Machines also behave consistently, in an unbiased fashion, without committing the kinds of mistakes that humans inevitably make.

"Humans are good at doing things deliberately, but when we make a decision, we start from whole cloth," says Gideon Mann, Head of ML Product & Research in Bloomberg's CTO Office. "Machines execute the same way every time, so even if they make a mistake, they do so with the same error characteristic."

The Bloomberg Terminal currently employs AI and ML techniques in several exciting ways, and we can expect this practice to expand rapidly in the coming years. The story begins some 20 years ago.

Keeping Humans in the Loop

When we started in the '80s, data extraction was a manual process. Today, our engineers and data analysts build, train, and use AI to process unstructured data at massive speed and scale - so our customers are in the know faster.

The rise of the machines

Prior to the 2000s, all tasks related to data collection, analysis, and distribution at Bloomberg were performed manually, because the technology did not yet exist to automate them. The new millennium brought some low-level automation to the company's workflows, with the emergence of primitive models operating by a series of if-then rules coded by humans. As the decade came to a close, true ML took flight within the company. Under this new approach, humans annotate data in order to train a machine to make various associations based on their labels. The machine "learns" how to make decisions, guided by this training data, and produces ever more accurate results over time. This approach can scale dramatically beyond traditional rules-based programming.


In the last decade, there has been explosive growth in the use of ML applications within Bloomberg. According to James Hook, Head of the company's Data department, there are a number of broad applications for AI/ML and data science within Bloomberg.

One is information extraction, where computer vision and/or natural language processing (NLP) algorithms are used to read unstructured documents - data that's arranged in a format that's typically difficult for machines to read - in order to extract semantic meaning from them. With these techniques, the Terminal can present insights to users that are drawn from video, audio, blog posts, tweets, and more.

Anju Kambadur, Head of Bloomberg's AI Engineering group, explains how this works:

"It typically starts by asking questions of every document. Let's say we have a press release. What are the entities mentioned in the document? Who are the executives involved? Who are the other companies they're doing business with? Are there any supply chain relationships exposed in the document? Then, once you've determined the entities, you need to measure the salience of the relationships between them and associate the content with specific topics. A document might be about electric vehicles, it might be about oil, it might be relevant to the U.S., it might be relevant to the APAC region - all of these are called topic codes' and they're assigned using machine learning."

All of this information, and much more, can be extracted from unstructured documents using natural language processing models.
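Bloomberg's production pipeline is proprietary, but a minimal sketch of this kind of extraction, using the open-source spaCy library on an invented press release, might look like the following (the topic-keyword map is a toy stand-in for a trained classifier):

```python
# A minimal sketch of entity extraction and topic tagging with the
# open-source spaCy library. Illustrative only: Bloomberg's actual
# pipeline is proprietary, and the press release below is invented.
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

press_release = (
    "Acme Motors announced a partnership with VoltCell to supply "
    "batteries for its electric vehicles across the APAC region."
)
doc = nlp(press_release)

# "What are the entities mentioned in the document?"
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. 'Acme Motors' ORG

# Toy stand-in for topic-code assignment: keyword rules where a real
# system would use a trained classifier.
TOPIC_KEYWORDS = {"EV": ["electric vehicle", "battery"], "OIL": ["crude", "barrel"]}
topics = [code for code, kws in TOPIC_KEYWORDS.items()
          if any(kw in press_release.lower() for kw in kws)]
print(topics)  # ['EV']
```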

Another area is quality control, where techniques like anomaly detection are used to spot problems with dataset accuracy, among other things. Using anomaly detection methods, the Terminal can spot a potential hidden investment opportunity, or flag suspicious market activity. For example, if a financial analyst were to change their rating of a particular stock following the company's quarterly earnings announcement, anomaly detection could provide context around whether this is typical behavior, or whether the action is worth presenting to Bloomberg clients as a data point to consider in an investment decision.
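The Terminal's quality-control models are not public, but a hedged sketch of anomaly detection on analyst rating changes, using scikit-learn's IsolationForest with invented features, could look like this:

```python
# A hedged sketch of anomaly detection for quality control, using
# scikit-learn's IsolationForest. The features (size of a rating change,
# days since the earnings announcement) and the data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Typical behavior: modest rating changes shortly after earnings.
typical = rng.normal(loc=[1.0, 2.0], scale=[0.3, 1.0], size=(500, 2))
unusual = np.array([[4.0, 45.0]])  # a large change long after earnings

model = IsolationForest(contamination=0.01, random_state=0).fit(typical)

# -1 flags an anomaly worth surfacing to a human reviewer; 1 is typical.
print(model.predict(unusual))  # [-1]
```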

And then there's insight generation, where AI/ML is used to analyze large datasets and unlock investment signals that might not otherwise be observed. One example of this is using highly correlated data like credit card transactions to gain visibility into recent company performance and consumer trends. Another is analyzing and summarizing the millions of news stories that are ingested into the Bloomberg Terminal each day to understand the key questions and themes that are driving specific markets or economic sectors or trading volume in a specific company's securities.

Humans in the loop

When we think of machine intelligence, we imagine an unfeeling autonomous machine, cold and impartial. In reality, however, the practice of ML is very much a team effort between humans and machines. Humans, for now at least, still define ontologies and methodologies, and perform annotation and quality assurance tasks. Bloomberg has moved quickly to increase staff capacity to perform these tasks at scale. In this scenario, the machines aren't replacing human workers; they are simply shifting human workflows away from tedious, repetitive tasks toward higher-level strategic oversight.

"It's really a transfer of human skill from manually extracting data points to thinking about defining and creating workflows," says Mann.

Ketevan Tsereteli, a Senior Researcher in Bloomberg Engineering's Artificial Intelligence (AI) group, explains how this transfer works in practice.

"Previously, in the manual workflow, you might have a team of data analysts that would be trained to find mergers and acquisition news in press releases and to extract the relevant information. They would have a lot of domain expertise on how this information is reported across different regions. Today, these same people are instrumental in collecting and labeling this information, and providing feedback on an ML model's performance, pointing out where it made correct and incorrect assumptions. In this way, that domain expertise is gradually transferred from human to machine."

Humans are required at every step to ensure the models are performing optimally and improving over time. It's a collaborative effort involving ML engineers who build the learning systems and underlying infrastructure, AI researchers and data scientists who design and implement workflows, and annotators - journalists and other subject matter experts - who collect and label training data and perform quality assurance.

"We have thousands of analysts in our Data department who have deep subject matter expertise in areas that matter most to our clients, like finance, law, and government," explains ML/AI Data Strategist Tina Tseng. "They not only understand the data in these areas, but also how the data is used by our customers. They work very closely with our engineers and data scientists to develop our automation solutions."

Annotation is critical, not just for training models, but also for evaluating their performance.

"We'll annotate data as a truth set - what they call a "golden" copy of the data," says Tseng. "The model's outputs can be automatically compared to that evaluation set so that we can calculate statistics to quantify how well the model is performing. Evaluation sets are used in both supervised and unsupervised learning."

Check out "Best Practices for Managing Data Annotation Projects," a practical guide published by Bloomberg's CTO Office and Data department about planning and implementing data annotation initiatives.



Study Finds Four Predictive Lupus Disease Profiles Using Machine … – Lupus Foundation of America

A new study using machine learning (ML) identified four distinct lupus disease profiles, or autoantibody clusters, that are predictive of long-term disease course, treatment requirements, organ involvement, and risk of death. Machine learning refers to the process by which a machine or computer can imitate intelligent human behavior to learn and optimize complicated tasks, such as statistical analysis and predictive modeling, using large datasets. Autoantibodies are antibodies produced by the immune system and directed against the body's own proteins; they are often a cause of, or marker for, many autoimmune diseases, including lupus.

Researchers observed 805 people with lupus, looking at demographic, clinical, and laboratory data within 15 months of their diagnosis, and again at 3 and 5 years with the disease. After analyzing the data, the researchers applied predictive ML, which revealed four distinct clusters, or lupus disease profiles, associated with important lupus outcomes:

Further studies are needed to determine other lupus biomarkers and understand disease pathogenesis through ML approaches. The researchers suggest ML studies can also help to inform diagnosis and treatment strategies for people with lupus. Learn more about lupus research.
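The paper details the study's actual clustering method; purely as an illustration of the technique, a minimal sketch of autoantibody clustering with scikit-learn's KMeans on synthetic data might look like:

```python
# Illustrative only: clustering synthetic "autoantibody" data into four
# profiles with scikit-learn's KMeans. The study's real features and
# method are described in the paper itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
titers = rng.lognormal(mean=0.0, sigma=1.0, size=(805, 6))  # invented titers

X = StandardScaler().fit_transform(titers)
clusters = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

# Each of the 805 patients lands in one of four profiles (0-3), which
# could then be compared against long-term outcomes.
print(np.bincount(clusters))
```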



Application of supervised machine learning algorithms for … – Nature.com

Study area and period

This study was conducted at public hospitals of Afar regional state, Northeastern Ethiopia, from February to June 2021. Afar regional state is one of the nine federal states of Ethiopia, located in the northeastern part of the country, 588 km from Addis Ababa. The altitude of the region ranges from 1,500 m above mean sea level (m.a.s.l.) in the western highlands to about 120 m below sea level in the Danakil/Dallol Depression. The total geographical area of the region is about 270,000 km², and it lies between 39°34′ and 42°28′ East longitude and 8°49′ and 14°30′ North latitude (CSA, 2008). It has an estimated population of 1.2 million, of which 90% are pastoralists (56% male and 44% female) and 10% are agro-pastoralists.

A retrospective cross-sectional study design using a medical database and medical chart record review was used.

All hospital clients who had ever been diagnosed with, would be diagnosed with, and/or were suspected of type-2 diabetes in public hospitals of Afar regional state.

All clients who had ever been diagnosed for diabetes disease status and been confirmed as free from type-2 diabetes (normal) or as type-2 diabetic patients in public hospitals of the Afar region, from the year the electronic health information record database became fully functional up to the date of sample collection (2012 G.C. to April 22, 2020), were considered part of the study population.

All clients who had ever been diagnosed for diabetes disease status and been confirmed as free from type-2 diabetes (normal) or as type-2 diabetic patients in public hospitals of the Afar region from 2012 G.C. to April 22, 2020 G.C.

Patients whose required information could not be obtained because of incomplete or entirely missing records (records that could not be found in the registration book), diabetic patients under follow-up with an unknown start date, and diabetic patients referred from other hospitals were excluded from the study population, to avoid misleading the machine learning algorithms.

All patients who had been diagnosed for diabetes and confirmed as type-2 diabetic or normal after standard diagnostic procedures from 2012 G.C. up to April 22, 2020 G.C. were used as the sample. The whole dataset was used because data mining needs a considerable amount of data for effective prediction and classification, reducing the probability of error [9]. On this basis, the study was conducted on a total population of 2,239.

From this record, the clients who had ever been diagnosed for diabetes disease status were extracted along with their required variables. Variables that were not available in the database, as well as the parameters for normal clients, were retrieved from the medical record book using the medical registration number (MRN), because the DHIS database of public hospitals has no place to record normal individuals after a diagnosis.

Data on clients confirmed as type-2 diabetic or normal at public hospitals of Afar regional state, from the date the database became fully functional (2012) up to the date the data were obtained (April 22, 2020 G.C.), were collected from the hospitals' DHIS database. Clients who were positive for type-2 diabetes were obtained from the database, while those who were normal were collected from the medical registration book by medical chart review and used for comparison. The parameters for these samples were collected from the first date of diagnosis, cross-checked against the diagnosis start date, to avoid misleading the machine learning algorithms. The essential variables collected in this way were used to classify and predict type-2 diabetes disease status among clients of public hospitals of all ages and both sexes who were diagnosed for diabetes in the region.

The dependent variable is type-2 diabetes disease status, a dichotomous response to whether the client was tested and confirmed as a type-2 diabetic patient, coded 1 if yes and 0 if no.

The above dependent variable was then modeled against historical predictor variables selected on the basis of existing evidence. The independent variables assessed in this research were:

Diastolic Blood Pressure (DBP)

Systolic Blood Pressure (SBP)

Fasting Blood Sugar Level (FBS)

Random Blood Sugar Level (RBS)

Body Mass Index (BMI)

Age

Sex

Accuracy: The ability of a classifier to predict the class label correctly; it also refers to how well a given predictor can guess the value of the predicted attribute for new data [10].

Classification: A data mining function that assigns items in a collection to target categories or classes. The goal of classification is to accurately predict the target class (a categorical variable) for each case in the data [11].

Confusion matrix: A simple performance analysis tool typically used in supervised machine learning to represent the test results of a prediction model. Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class [12].

Kappa statistic: A metric that compares the observed accuracy with the expected accuracy (random chance) [13].

Receiver operating characteristic (ROC) curve: A plot of the true positive rate against the false positive rate for the different possible cut points of a diagnostic test. The area under the curve (AUC) is a measure of diagnostic accuracy [8].

Prediction: A data mining approach that aims at building a model (for continuous target variables) for classifying new instances into one of two class values (yes or no) [14].

Sensitivity: The proportion of positive cases that were correctly identified, which is the most important measure in disease diagnosis [15].

Specificity: The proportion of negative cases that were classified correctly [16].

Data related to the outcome variable of type-2 diabetes disease status were selected and extracted from the DHIS database and from the medical registration books of the region's public hospitals. Furthermore, data cleaning, labeling, and coding were done for all selected variables. In the data preprocessing phase, data manipulation and transformation for incomplete-data handling and missing-value management were conducted to achieve the best data quality for prediction and classification of type-2 diabetes disease status.

The raw (preprocessed) data contained 13 independent variables and one dependent variable, a categorical variable, with 2,239 samples (instances).

Moreover, in this phase, the variable that had very few records (only 2) because of its expense, glycated hemoglobin level; the variables with no role in predicting type-2 diabetes (MRN and start date); and the variables that duplicate one another (height and weight, which are already captured by BMI) were removed from the final list of predictor variables.

The processed variables then proceeded to the data mining activity for prediction and classification of type-2 diabetes disease status, and the missing values in each record were replaced with the mean value of the corresponding variable. Finally, after the data were processed, a total of 2,239 clients diagnosed for diabetes disease, with their 7 major explanatory variables (attributes) chosen by the variable selection method, were included for classification and prediction of type-2 diabetes disease status using supervised machine learning algorithms (Table 1).
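As a sketch of this imputation step, assuming the records were exported to a CSV file (the file name is hypothetical, and Sex is assumed to be coded numerically so a mean can be computed), the mean replacement could be done with scikit-learn's SimpleImputer:

```python
# A sketch of mean-value imputation on the study's seven predictors.
# The CSV path and column names are hypothetical; Sex is assumed to be
# coded 0/1.
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.read_csv("diabetes_dhis.csv")  # 2,239 records

predictors = ["DBP", "SBP", "FBS", "RBS", "BMI", "Age", "Sex"]
df[predictors] = SimpleImputer(strategy="mean").fit_transform(df[predictors])
```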

Data on clients' diabetes disease status were collected from the DHIS database of the region's public hospitals and checked for completeness prior to analysis to increase data quality. Data preprocessing was carried out before the final analysis to treat missing values and incomplete records. The processed data were converted into comma-separated value (CSV) and attribute-relation file format (ARFF) files to be loaded into the WEKA 3.7 tool, which was used for data analysis. The major classification and prediction supervised machine learning algorithms were then applied.

For the descriptive part, appropriate descriptive statistical methods such as frequencies, percentages, tables, and graphs were used to summarize and present the findings. For the inferential part, the major supervised machine learning techniques for prediction of type-2 diabetes (logistic regression, ANN, RF, K-NN, SVM, pruned J48 decision tree, and Naïve Bayes) were used, and their performance for effective prediction and classification of type-2 diabetes disease status was evaluated and compared.

The main objective of this study was to apply supervised machine learning algorithms to classify and predict whether new clients have type-2 diabetes, using the information extracted from the diabetes dataset. The model building phase of the data mining process was therefore carried out under the classification and prediction approach. These classification and prediction tasks were conducted using the major supervised machine learning algorithms (pruned J48 decision tree, artificial neural network, k-nearest neighbor, random forest, Naïve Bayes, support vector machine, and logistic regression) to classify and predict the presence or absence of type-2 diabetes on 70% of the full dataset, which served as the training set. To increase classification accuracy, it is better to use most of the dataset for training and a smaller portion for testing, on a percentage split of 70:30 [12].

After that, the task was repeated on the testing dataset, the remaining 30% of the full dataset, for which the target class of type-2 diabetes disease status was withheld. Finally, the performance of these supervised machine learning algorithms was evaluated based on their capacity for classification and prediction on the testing dataset [15]. All of these tasks were implemented using the WEKA 3.7 data mining tool (Fig. 1).

Figure 1: Schematic representation of the data mining approach applied, using supervised machine learning algorithms for classification and prediction of type-2 diabetes disease status at public hospitals of Afar regional state in 2021.
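The study used WEKA 3.7; a rough open-source analogue of the 70:30 split and the listed classifiers, continuing the hypothetical DataFrame from the sketch above (the target column name is invented), might look like:

```python
# A rough scikit-learn analogue of the WEKA workflow: 70:30 split plus
# the classifiers named above. Continues the hypothetical df/predictors
# from the earlier sketch; the "t2dm_status" column name is invented.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X_train, X_test, y_train, y_test = train_test_split(
    df[predictors], df["t2dm_status"], test_size=0.30, random_state=50)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "ANN (multilayer perceptron)": MLPClassifier(max_iter=1000),
    "Random forest": RandomForestClassifier(random_state=50),
    "K-NN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "Decision tree (akin to pruned J48)": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, clf in models.items():
    acc = clf.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: {acc:.3f}")  # test-set accuracy for each algorithm
```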

Frequencies and percentages were used to report categorical variables, and means with standard deviations were used for continuous explanatory variables. In addition, confusion matrices were produced to show the proportion of different categories of each characteristic with respect to the outcome variable (type-2 diabetes disease status).

Commonly used variable selection methods available in software packages such as the WEKA tool are wrapper subset forward selection, BestFirst -D 1 -N 5 variable selection, and linear forward selection. For this study, the BestFirst -D 1 -N 5 variable selection method was used with tenfold cross-validation and seed number 50, which is better suited for binary outcome studies [17].
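WEKA's BestFirst search (-D 1 sets the forward direction; -N 5 stops after five non-improving nodes) has no exact scikit-learn counterpart; a loose analogue of forward selection with tenfold cross-validation, continuing the earlier hypothetical data, is sketched below:

```python
# Loose analogue of forward variable selection with tenfold
# cross-validation; not an exact reimplementation of WEKA's BestFirst.
# Continues the hypothetical df/predictors from the earlier sketches.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    direction="forward", n_features_to_select="auto", cv=10)
selector.fit(df[predictors], df["t2dm_status"])

# Names of the predictors retained by forward selection.
print([p for p, kept in zip(predictors, selector.get_support()) if kept])
```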

The receiver operating characteristic (ROC) curve was used to assess the general accuracy of the model on the dataset, using the area under the receiver operating characteristic curve (AUC). The ROC curve is a commonly used measure for summarizing the discriminatory ability of a binary prediction model.

The ROC curve describes the relationship between the sensitivity and specificity of the classifier. Since the ROC curve alone cannot quantitatively evaluate classifiers, the AUC is usually adopted as the evaluation index. The AUC value is the area under the ROC curve: an ideal classification model has an AUC of 1, values in practice fall between 0.5 and 1.0, and a larger AUC indicates better classification performance [18].
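Continuing the earlier sketch, the AUC for one of the fitted models could be computed like this (again assuming the hypothetical split):

```python
# AUC for one fitted model, continuing the hypothetical split above.
from sklearn.metrics import roc_auc_score

rf = models["Random forest"]
probs = rf.predict_proba(X_test)[:, 1]  # probability of class 1 (diabetic)
print(roc_auc_score(y_test, probs))     # 1.0 ideal, 0.5 chance level
```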

The performance evaluation (model diagnosis) of the different supervised machine learning techniques for prediction and classification of type-2 diabetes was carried out using cross-validation and different confusion matrices.

Cross-validation is a technique used to evaluate the performance of classification algorithms by estimating the error rate of learning techniques. The dataset is partitioned into n folds; each fold in turn is used for testing while the remainder is used for training, and the procedure is repeated n times. In tenfold cross-validation, the data are divided into 10 approximately equal parts. Each part is held out in turn while the learning scheme is trained on the remaining nine-tenths, and the error rate is calculated on the holdout set. The learning procedure thus executes 10 times on training sets, and the 10 error rates are finally averaged to yield an overall error rate [10]. A confusion matrix is used to present the accuracy of the classifiers obtained through classification; it shows the relationship between actual outcomes and predicted classes (Table 2).
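A minimal sketch of this tenfold cross-validation on the hypothetical data from the earlier sketches, using scikit-learn:

```python
# Tenfold cross-validation on the hypothetical data from earlier sketches.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

scores = cross_val_score(RandomForestClassifier(random_state=50),
                         df[predictors], df["t2dm_status"], cv=10)
print(scores.mean(), scores.std())  # performance averaged over 10 folds
```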

In addition to the confusion matrices, several parameters are used to compare the performance of supervised machine learning algorithms on their classification and prediction capacity. The table below contains the performance comparison metrics with their respective formulas (Table 3).
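Table 3 itself is not reproduced here, but under the conventional formulas these comparison metrics can be derived from a 2x2 confusion matrix; a sketch continuing the earlier hypothetical split:

```python
# Comparison metrics derived from a 2x2 confusion matrix, using the
# conventional formulas; continues the earlier hypothetical split.
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_pred = rf.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
kappa       = cohen_kappa_score(y_test, y_pred)
print(accuracy, sensitivity, specificity, kappa)
```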

Each model used for classification and prediction was diagnosed based on its accuracy. Since this study was a medical database record review design with a known class (confirmed as a type-2 diabetic patient or not), supervised data mining techniques were used. The machine learning algorithms and model specifications for prediction and classification of diabetes disease focused on high-performance algorithms applied to a high-dimensional medical dataset. These model comparisons were conducted according to the following table format (Table 4).

Classification models were used to determine categorical class labels, while prediction models were used to predict continuous functions. This was done in the following steps: data cleaning, relevance analysis, and data transformation. The obtained dataset was preprocessed and split into two sets: training data (70% of the total dataset) and test data (30% of the dataset) [13, 15]. The following models were specified for data mining in the classification and prediction of type-2 diabetes disease status.

The study was approved by the Institutional Review Board of Samara University, and a letter of support was obtained from Samara University. All results of this research were based on the use of secondary data, and data collection was performed retrospectively. Informed written consent was therefore obtained from the public hospital DHIS database coordinator, and the study was conducted in accordance with the ethical standards of the institutional and national research committee.


Financial Leaders Investing in Analytics, AI and Machine Learning … – CPAPracticeAdvisor.com

A new survey shows that continued inflation and economic disruptions are the top concerns for more than half of organizations in 2023. Despite this, most organizations expect their revenues to either increase or stay the same this year. As a result, three quarters of organizations plan to resume business travel in 2023 and half of organizations surveyed plan to invest in analytic technologies that can help navigate uncertain economic conditions.

The Enterprise Financial Decision-Makers Outlook April 2023 semi-annual survey was published by OneStream Software, a leader in corporate performance management (CPM) solutions for the world's leading enterprises. Conducted by Hanover Research, the survey targeted finance leaders across North America to identify trends and investment priorities in response to economic challenges and other forces in the upcoming year.

When asked about current business drivers and plans for 2023, financial leaders are focused on the following factors:

COVID is still prevalent, but the business impact is shrinking

As the world returns to some form of normal following the pandemic, organizations are planning to reintroduce business travel but remain wary of supply chains. More than half of financial leaders expect COVID-related supply chain disruptions to continue into 2024 (54%) or beyond, down 18% from the Spring 2022 survey. Business travel is poised for a comeback this year, as 75% of organizations plan to resume the practice in 2023. In the Spring 2022 survey, most organizations (80%) had planned to resume business travel, but the study shows very few had actually implemented the plan (10%), citing the costs of flights, hotels, and food, and the lack of necessity.

Analytic technology is gaining focus to help navigate uncertainty

Trends in the survey foreshadow increased usage of analytic technology that improves productivity and supports more agile decision-making across the enterprise. Cloud-based planning and reporting solutions remain the most used data analysis tool (91%); however, most organizations also use predictive analytics (85%), business intelligence (84%), and ML/AI (75%) tools at least intermittently. About half of organizations are planning to invest more in each of these tools this year compared to 2022.

Adoption momentum for these tools started during the pandemic and shows no sign of slowing down. According to the Spring 2021 survey, organizations said that, in comparison to pre-pandemic levels, they were increasing investments in artificial intelligence (59%), predictive analytics (58%), cloud-based planning and reporting solutions (57%), and machine learning (54%).

Organizations are realizing the value of AI

According to the survey, two-thirds of organizations (68%) have adopted an automated machine learning (AutoML) solution to supplement some of their workforce needs, a significant uptick compared to Spring 2022 (56%). In the Fall 2022 survey, 48% of respondents planned to investigate an AutoML solution in the future, which suggests respondents stayed true to their word and dove into the technology in the last six months.

Finance leaders see opportunities for improvement in many areas with the help of AI/ML technologies, including ChatGPT. The tasks and processes they believe these technologies will be most useful for include financial reporting, financial planning & analysis, sales/revenue forecasting, sales & marketing and customer service.

Along with investing in new technology, almost all organizations (91%) are investing or planning to invest in new solutions that specifically support finance functions. The most common solutions are cloud-based applications (52%), AI/ML (43%), advanced predictive analytics (42%) and budgeting/planning systems (42%).

"The current economic headwinds have finance leaders acutely aware of their investment decisions and weighing the benefits vs. the costs," said Bill Koefoed, Chief Financial Officer, OneStream. "With revenue growth through economic uncertainty in mind, financial leaders are looking to invest in solutions that can support more agile decision-making while delivering a fast return on investment. AutoAI and other AI innovations coming to light in the last couple of years have the potential to improve the speed and accuracy of forecasting and support more informed, confident decision-making. OneStream is a proud innovator in this space and partners with organizations around the globe to help them navigate these challenging times."


The Surprising Synergy Between Acupuncture and AI – WIRED

I used to fall asleep at night with needles in my face. One needle shallowly planted in the inner corner of each eyebrow, one per temple, one in the middle of each eyebrow above the pupil, a few by my nose and mouth. I'd wake up hours later, the hair-thin, stainless steel pins having been surreptitiously removed by a parent. Sometimes they'd forget about the treatment, and in the morning we'd search my pillow for needles. My very farsighted left eye gradually became only somewhat farsighted, and my mildly nearsighted right eye eventually achieved a perfect score at the optometrist's. By the time I was six, my glasses had disappeared from the picture albums.

The story of my recovered eyesight was the first thing I'd think to mention when people found out that my parents are specialists in traditional Chinese medicine (TCM) and asked me what I thought of the practice. It was a concrete and rather miraculous firsthand experience, and I knew what it meant to begin to see the world more clearly while under my mother and father's care.

Otherwise, I rarely knew what to say. I would recall hearing TCM mentioned in relation to poor evidence or badly designed studies and feel challenged to provide some defense for a line of work seen as illegitimate. I would feel a pull of obligation to defend Chinese medicine as a way to protect my parents, their care and toils, but also an urge to resist shouldering that obligation for the sake of someone else's fleeting curiosity and perhaps entertainment.

Mostly, I wished I had a better understanding of TCM, even just for myself. Now that I work in machine learning (ML), I'm often struck by the parallels between this cutting-edge technology and the ancient practice of TCM. For one, I can't quite explain either satisfactorily.

It's not that there aren't explanations for how the field of Chinese medicine works. I, and many others, just find the theories dubious. According to both classical and modern theory, blood and qi (pronounced "chee," and variously interpreted to mean something like "vapor") move around and regulate the body, which itself is not considered separate from the mind.

Qi flows through channels called meridians. The anatomical charts hanging on the walls of my parents' clinics feature meridians scoring the body in neat, straight lines (from chest to finger, or from the waist to the inner thigh) overlaid on diagrams of the bones and organs. At various points along these meridians, needles can be inserted to remove blockages, improving the flow of qi. All TCM treatments ultimately revolve around qi: acupuncture banishes unhealthy qi and circulates healthy qi from the outside; herbal medicines do so from the inside.

On my parents' charts, the meridians and acupuncture points are depicted like a subway map and seem to float slightly upward, tethered only loosely to the recognizable shapes of intestines and joints underneath. This lack of visual correspondence is reflected in the science; little evidence has been found for the physical existence of meridians, or of qi. Studies have investigated whether meridians are special conduits for electrical signals (though these experiments were badly designed) or whether they are related to fascia, the thin, stretchy tissue that surrounds almost all internal body parts. All of this work is recent, and results have been inconclusive.

In contrast, the effectiveness of acupuncture, particularly for ailments like neck disorders and low back pain, is well supported in modern scientific journals. Insurance companies are convinced; most of my mother's patients come to her for acupuncture because it's covered by New Zealand's national insurance plan.
