Gamifying Fear: VR Exposure Therapy Shown To Be Effective At Treating Severe Phobias

Girl using virtual reality goggles watching spider. Photo: Donald Iain Smith/Getty Images

By Cassidy Ward, SyFy

In the 2007 horror film House of Fears (now streaming on Peacock!), a group of teenagers enters the titular haunted house the night before it is set to open. Once inside, they encounter a grisly set of horrors leaving some of them dead and others terrified. For many, haunted houses are a fun way to intentionally trigger a fear response. For others, fear is something they live with on a daily basis and it’s anything but fun.

Roughly 8% of adults report a severe fear of flying; between 3 and 15% endure a fear of spiders; and between 3 and 6% have a fear of heights. Taken together, along with folks who have a fear of needles, dogs, or any number of other life-altering phobias, there’s a good chance you know someone who is living with a fear serious enough to impact their lives. You might even have such a phobia yourself.

There are, thankfully, a number of treatments a person can undergo in order to cope with a debilitating phobia. However, those treatments often require traveling someplace else and having access to medical care, something which isn’t always available or possible. With that in mind, scientists from the Department of Psychological Medicine at the University of Otago have investigated the use of virtual reality to remotely treat severe phobias with digital exposure therapy. Their findings were published in the Australian and New Zealand Journal of Psychiatry.

Prior studies into the efficacy of virtual reality for the treatment of phobias relied on high-end VR rigs, which can be expensive and difficult for the average patient to acquire. They also each tended to focus on a single specific phobia. The team at the University of Otago wanted something that could reach a higher number of patients, both in terms of content and access to equipment.

They used oVRcome, a widely available smartphone app anyone can download from their phone’s app store. The app has virtual reality content related to a number of common phobias in addition to the five listed above. Moreover, because it runs on your smartphone, it can be experienced using any number of affordable VR headsets which your phone slides into.

Participants enter their phobias and rate their severity on a scale, and are presented with a series of virtual experiences designed to gently and progressively expose them to their fear. The study involved 129 people between the ages of 18 and 64, each of whom reported at least one of the five target phobias. They used oVRcome over the course of six weeks, with weekly emailed questionnaires measuring their progress. Participants also had access to a clinical psychologist in the event that they experienced any adverse effects from the study.

Participants were given a baseline score measuring the severity of their phobia and were measured again at a follow-up 12 weeks after the start of the program. At baseline, participants averaged a score of 28 out of 40, indicating moderate to severe symptoms. By the end of the trial, the average score was down to 7, indicating minimal symptoms. Some participants even indicated they had overcome their phobia to the extent that they felt comfortable booking a flight, scheduling a medical procedure involving needles, or capturing and releasing a spider from their home, something they weren’t comfortable doing at the start.

Part of what makes the software so effective is the diversity of programming available and the ability for individuals to tailor the experience to their own circumstances. The exposure therapy is also coupled with other virtual modules, including relaxation, mindfulness, cognitive techniques, and psychoeducation.

Click here to read the full article on SyFy.

The latest video game controller isn’t plastic. It’s your face.
Dunn playing “Minecraft” using voice commands on the Enabled Play controller, face expression controls via a phone and virtual buttons on Xbox's adaptive controller. (Courtesy of Enabled Play Game Controller)

By Amanda Florian, The Washington Post

Over decades, input devices in the video game industry have evolved from simple joysticks to sophisticated controllers that emit haptic feedback. But with Enabled Play, a new piece of assistive tech created by self-taught developer Alex Dunn, users are embracing a different kind of input: facial expressions.

While companies like Microsoft have sought to expand accessibility through adaptive controllers and accessories, Dunn’s new device takes those efforts even further, translating users’ head movements, facial expressions, real-time speech and other nontraditional input methods into mouse clicks, key strokes and thumbstick movements. The device has users raising eyebrows — quite literally.

“Enabled Play is a device that learns to work with you — not a device you have to learn to work with,” Dunn, who lives in Boston, said via Zoom.

Dunn, 26, created Enabled Play so that everyone — including his younger brother with a disability — can interface with technology in a natural and intuitive way. At the beginning of the pandemic, the only thing he and his New Hampshire-based brother could do together, while approximately 70 miles apart, was game.

“And that’s when I started to see firsthand some of the challenges that he had and the limitations that games had for people with really any type of disability,” he added.

At 17, Dunn dropped out of Worcester Polytechnic Institute to become a full-time software engineer. He began researching and developing Enabled Play two and a half years ago; the work initially proved challenging, as most speech-recognition programs lagged in response time.

“I built some prototypes with voice commands, and then I started talking to people who were deaf and had a range of disabilities, and I found that voice commands didn’t cut it,” Dunn said.

That’s when he started thinking outside the box.

Having already built Suave Keys, a voice-powered program for gamers with disabilities, Dunn created Snap Keys — an extension that turns a user’s Snapchat lens into a controller when playing games like “Call of Duty,” “Fall Guys,” and “Dark Souls.” In 2020, he won two awards for his work at Snap Inc.’s Snap Kit Developer Challenge, a competition among third-party app creators to innovate Snapchat’s developer tool kit.

With Enabled Play, Dunn takes accessibility to the next level. With a wider variety of inputs, users can connect the assistive device — equipped with a robust CPU and 8 GB of RAM — to a computer, game console or other device to play games in whatever way works best for them.

Dunn also spent time making sure Enabled Play was accessible to people who are deaf, as well as people who want to use nonverbal audio input, like “ooh” or “aah,” to perform an action. Enabled Play’s vowel sound detection model is based on “The Vocal Joystick,” which engineers and linguistics experts at the University of Washington developed in 2006.

“Essentially, it looks to predict the word you are going to say based on what is in the profile, rather than trying to assume it could be any word in the dictionary,” Dunn said. “This helps cut through machine learning bias by learning more about how the individual speaks and applies it to their desired commands.”

Dunn’s AI-enabled controller takes into account a person’s natural tendencies. If a gamer wants to set up a jump command every time they open their mouth, Enabled Play would identify that person’s individual resting mouth position and set that as the baseline.
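The calibration idea described above can be illustrated with a short sketch. Enabled Play's actual implementation is not public, so everything here is an assumption: the "mouth openness" metric, the sample values, and the threshold margin are all hypothetical, chosen only to show how a per-user resting baseline turns a facial reading into a command.

```python
# Illustrative sketch only; not Enabled Play's real code or API.
# Idea: average a few resting-face readings to find the user's personal
# baseline for a facial metric (a hypothetical "mouth openness" in [0, 1]),
# then fire a command only when the live value rises well above it.

def calibrate_baseline(samples):
    """Average several resting-face readings to get the user's baseline."""
    return sum(samples) / len(samples)

def make_trigger(baseline, margin=0.25):
    """Return a function that maps a live reading to a command (or None)."""
    threshold = baseline + margin
    def check(mouth_openness):
        return "JUMP" if mouth_openness >= threshold else None
    return check

# Example: a user whose mouth rests slightly open (baseline around 0.2).
trigger = make_trigger(calibrate_baseline([0.18, 0.22, 0.20]))
print(trigger(0.21))  # None: close to the resting position, no command
print(trigger(0.60))  # "JUMP": clearly above the personal baseline
```

Because the threshold is relative to each user's own resting position rather than a fixed value, the same gesture works for people with very different resting expressions.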

In January, Enabled Play officially launched in six countries — its user base extending from the U.S. to the U.K., Ghana and Austria. For Dunn, one of his primary goals was to fill a gap in accessibility and pricing compared to other assistive gaming devices.

“There are things like the Xbox Adaptive Controller. There are things like the HORI Flex [for Nintendo Switch]. There are things like Tobii, which does eye-tracking and stuff like that. But it still seemed like it wasn’t enough,” he said.

Compared to some devices that are only compatible with one gaming system or computer at a time, Dunn’s AI-enabled controller — priced at $249.99 — supports a combination of inputs and outputs. Speech therapists say that compared to augmentative and alternative communication (AAC) devices, which are medically essential for some with disabilities, Dunn’s device offers simplicity.

“This is just the start,” said Julia Franklin, a speech language pathologist at Community School of Davidson in Davidson, N.C. Franklin introduced students to Enabled Play this summer and feels it’s a better alternative to other AAC devices on the market that are often “expensive, bulky and limited” in usability. Many sophisticated AAC systems can range from $6,000 to $11,500 for high-tech devices, with low-end eye-trackers running in the thousands. A person may also download AAC apps on their mobile devices, which range from $49.99 to $299.99 for the app alone.

“For many people who have physical and cognitive differences, they often exhaust themselves to learn a complex AAC system that has limits,” she said. “The Enabled Play device allows individuals to leverage their strengths and movements that are already present.”

Internet users have applauded Dunn for his work, noting that asking for accessibility should not equate to asking for an “easy mode” — a misconception often cited by critics of making games more accessible.

“This is how you make gaming accessible,” one Reddit user wrote about Enabled Play. “Not by dumbing it down, but by creating mechanical solutions that allow users to have the same experience and accomplish the same feats as [people without disabilities].” Another user who said they regularly worked with young patients with cerebral palsy speculated that Enabled Play “would quite literally change their lives.”

Click here to read the full article on The Washington Post.

Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation

By Unite

Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.

The new approach can not only distinguish between unaffected and affected subjects, but can also correctly distinguish depression from schizophrenia, as well as the degree to which the patient is currently affected by the disease.

The researchers have provided a composite image that represents the control group for their tests (on the left in the image below) and the patients who are suffering from mental disorders (right). The identities of multiple people are blended in the representations, and neither image depicts a particular individual:

Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.

Until now, facial affect recognition has been primarily used as a potential tool for basic diagnosis. The new approach, instead, offers a possible method to evaluate patient progress throughout treatment, or else (potentially, though the paper does not suggest it) in their own domestic environment for outpatient monitoring.

The paper states:

‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’

The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.

The authors conclude that OEG could potentially be not just a mere secondary aide to diagnosis and treatment, but, in the long term, a potential replacement for certain evaluative parts of the treatment pipeline, and one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:

‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’

However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.

The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.

Data

(The new paper deals mostly with the various theories and methods currently popular in the diagnosis of mental disorders, with less attention than usual paid to the actual technologies and processes used in the tests and experiments.)

Data-gathering took place at University Hospital Aachen, with 100 gender-balanced patients and a control group of 50 unaffected people. The patients comprised 35 with schizophrenia and 65 with depression.

For the patient portion of the test group, initial measurements were taken at the time of first hospitalization, and the second prior to their discharge from hospital, spanning an average interval of 12 weeks. The control group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.

In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.

However, the data-gathering stage obtained additional data more suited for machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech c270 consumer webcam running at 25fps.

The first session comprised a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions and asked to mimic each of them, while stating their own estimation of their mental condition at the time, including emotional state and intensity. This phase lasted around ten minutes.

In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.

Click here to read the full article on Unite.

Your favourite Instagram face might not be a human. How AI is taking over influencer roles
South Korean influencer Rozy has over 130,000 followers on Instagram.

By Mint

South Korean influencer Rozy has over 130,000 followers on Instagram. She posts photos of globetrotting adventures; she sings, dances and models. The difference is that, unlike most popular faces on the platform, Rozy is not a real human. Yet this digitally rendered being looks so real that she is often mistaken for flesh and blood.

How was Rozy designed?
Sidus Studio X, the Seoul-based company that created Rozy, describes her as a blended personality – part human, part AI, and part robot. She is “able to do everything that humans cannot … in the most human-like form,” the company says on its website.

Sidus Studio X explains that it sometimes creates an image of Rozy from head to toe, while other times it superimposes her head onto the body of a human model.

Rozy was launched in 2020, and since then she has landed several brand deals and sponsorships, participated in several virtual fashion shows, and released two singles.

And a CNN report claims that Rozy is not alone; there are several others like her. Facebook and Instagram together have more than 200 virtual influencers on their platforms.

The CGI (computer-generated imagery) technology behind Rozy isn’t new. It is ubiquitous in today’s entertainment industry, where artists use it to craft realistic nonhuman characters in movies, computer games and music videos. But it has only recently been used to make influencers, the report reads.

South Korean retail brand Lotte Home Shopping created its virtual influencer — Lucy, who now has 78,000 Instagram followers.

Lee Bo-hyun, a Lotte representative, said that Lucy’s image is more than a pretty face. She studied industrial design and works in car design. She posts about her job and her interests, such as her love for animals and kimbap — rice rolls wrapped in seaweed.

There is a risk attached
Facebook and Instagram’s parent company, Meta, has acknowledged the risks.

In a blog post, the company said: “Like any disruptive technology, synthetic media has the potential for both good and harm. Issues of representation, cultural appropriation and expressive liberty are already a growing concern.”

“To help brands navigate the ethical quandaries of this emerging medium and avoid potential hazards, (Meta) is working with partners to develop an ethical framework to guide the use of (virtual influencers).”

However, even though older generations remain skeptical, younger users are comfortable communicating with virtual influencers.

Lee Na-kyoung, a 23-year-old living in Incheon, began following Rozy about two years ago thinking she was a real person. Rozy followed her back, sometimes commenting on her posts, and a virtual friendship blossomed — one that has endured even after Lee found out the truth, a CNN report said.

“We communicated like friends and I felt comfortable with her — so I don’t think of her as an AI but a real friend,” Lee said.

Click here to read the full article on Mint.

GM just secured enough cathode material for 5 million electric vehicles
GM garage filled with white vans

By Andrew J. Hawkins, The Verge

General Motors needs a lot of cathode active materials (CAM) if it’s to reach its goal of making enough electric vehicles to become a completely carbon neutral company by 2040. How much is enough? How about 950,000 tons of the stuff.

GM now says it’s reached a deal with LG Chem, one of South Korea’s premier battery making firms, to lock down a supply of CAM starting later this year. CAM is basically what makes a battery a battery, consisting of components like processed nickel, lithium and other materials, and representing about 40 percent of the total cost of a battery cell.

The majority of EV battery cathodes are made with NCM — nickel, cobalt, and manganese. Cobalt is a key component in this mix, but it’s also the most expensive material in the battery and mined under conditions that often violate human rights, leading it to be called the “blood diamond of batteries.” As a result, GM and other companies, like Tesla, are rushing to create a cobalt-free battery. GM’s Ultium batteries, for example, will add aluminum — making the mix NCMA — and reduce the cobalt content by 70 percent.

LG Chem will begin supplying CAM to the automaker starting in the latter half of 2022 and lasting until 2030. GM says this will be enough battery material to power approximately 5 million electric vehicles, which should help the company in its quest to catch up to Tesla.

GM has said it plans to spend $30 billion by 2025 on the creation of 30 new plug-in models in its bid to overtake Elon Musk’s company as the leading EV company in the world. Tesla still dominates the relatively small EV market in the US, with around 66 percent market share, while GM only has around 6 percent. This year, the company was even outsold by legacy auto rivals like Ford and Hyundai, according to CNBC.

In a furious bid to catch up and become more vertically integrated, GM is trying to get a stronger grasp on its supply chain, which includes battery manufacturing. The company has said it will spend over $4 billion on the construction of two battery factories in North America in partnership with South Korea’s LG Chem.

GM said today that it will also explore localizing a CAM production facility with LG Chem by the end of 2025. Previously, the company announced that it will construct a new cathode factory in North America in a joint venture with South Korea’s Posco Chemical.

Click here to read the full article on The Verge.

Terrence Howard Claims He Invented ‘New Hydrogen Technology’ To Defend Uganda
Terrence Howard on the red carpet for

By BET

Former Empire actor and red carpet scientist Terrence Howard is currently visiting Uganda as part of a government effort to draw investors from the African diaspora to the nation, and he is claiming he has what it takes to change the world.

According to Vice, Howard made a lofty presentation on Wednesday, July 13, addressing officials and claiming to have developed a “new hydrogen technology.”

Famously, Howard argued in Rolling Stone that one times one equals two, and now he says his new system, The Lynchpin, would be able to clean the ocean and defend Uganda from exploitation via cutting-edge drone technology. The proprietary technology he announced in a 2021 press release is said to hold 86 patents.

“I was able to identify the grand unified field equation they’ve been looking for and put it into geometry,” he shared in front of an audience of Ugandan dignitaries. “We’re talking about unlimited bonding, unlimited predictable structures, supersymmetry.”

“The Lynchpins are now able to behave as a swarm, as a colony, that can defend a nation, that can harvest food, that can remove plastics from the ocean, that can give the children of Uganda and the people of Uganda an opportunity to spread this and sell these products throughout the world,” he added.

Howard, who briefly quit acting in 2019 only to come out of retirement in 2020, has seemingly made rewriting history a personal side hustle. According to Vice, he made nebulous claims that rapidly went viral on social media, saying, “I’ve made some discoveries in my own personal life with the science that, y’know, Pythagoras was searching for. I was able to open up the flower of life properly and find the real wave conjugations we’ve been looking for 10,000 years.”

While his latest claims have yet to be clarified, Howard was invited to speak by Frank Tumwebaze, the minister of agriculture, animal industries, and fishery.

Click here to read the full article on BET.

Can Virtual Reality Help Autistic Children Navigate the Real World?
Mr. Ravindran adjusts his son’s VR headset between lessons. “It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said of the time when his son used Google Street View through a headset, then went into his playroom and acted out what he had experienced in VR. “It ended up being a light bulb moment.”

By Gautham Nagesh, New York Times

This article is part of Upstart, a series on young companies harnessing new science and technology.

Vijay Ravindran has always been fascinated with technology. At Amazon, he oversaw the team that built and started Amazon Prime. Later, he joined the Washington Post as chief digital officer, where he advised Donald E. Graham on the sale of the newspaper to his former boss, Jeff Bezos, in 2013.

By late 2015, Mr. Ravindran was winding down his time at the renamed Graham Holdings Company. But his primary focus was his son, who was then 6 years old and undergoing therapy for autism.

“Then an amazing thing happened,” Mr. Ravindran said.

Mr. Ravindran was noodling around with a virtual reality headset when his son asked to try it out. After spending 30 minutes using the headset in Google Street View, the child went to his playroom and started acting out what he had done in virtual reality.

“It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said. “It ended up being a light bulb moment.”

Like many autistic children, Mr. Ravindran’s son struggled with pretend play and other social skills. His son’s ability to translate his virtual reality experience to the real world sparked an idea. A year later, Mr. Ravindran started a company called Floreo, which is developing virtual reality lessons designed to help behavioral therapists, speech therapists, special educators and parents who work with autistic children.

The idea of using virtual reality to help autistic people has been around for some time, but Mr. Ravindran said the widespread availability of commercial virtual reality headsets since 2015 had enabled research and commercial deployment at much larger scale. Floreo has developed almost 200 virtual reality lessons that are designed to help children build social skills and train for real world experiences like crossing the street or choosing where to sit in the school cafeteria.

Last year, as the pandemic exploded demand for telehealth and remote learning services, the company delivered 17,000 lessons to customers in the United States. Experts in autism believe the company’s flexible platform could go global in the near future.

That’s because the demand for behavioral and speech therapy as well as other forms of intervention to address autism is so vast. Getting a diagnosis for autism can take months — crucial time in a child’s development when therapeutic intervention can be vital. And such therapy can be costly and require enormous investments of time and resources by parents.

The Floreo system requires an iPhone (version 7 or later) and a V.R. headset (a low-end model costs as little as $15 to $30), as well as an iPad, which can be used by a parent, teacher or coach in-person or remotely. The cost of the program is roughly $50 per month. (Floreo is currently working to enable insurance reimbursement, and has received Medicaid approval in four states.)

A child dons the headset and navigates the virtual reality lesson, while the coach — who can be a parent, teacher, therapist, counselor or personal aide — monitors and interacts with the child through the iPad.

The lessons cover a wide range of situations, such as visiting the aquarium or going to the grocery store. Many of the lessons involve teaching autistic children, who may struggle to interpret nonverbal cues, to interpret body language.

Autistic self-advocates note that behavioral therapy for autism is controversial within the autistic community, many of whose members argue that autism is not a disease to be cured and that therapy is often imposed on autistic children by their non-autistic parents or guardians. Behavioral therapy, they say, can harm or punish children for behaviors such as fidgeting. They argue that rather than conditioning autistic people to act like neurotypical individuals, society should be more welcoming of them and their different manner of experiencing the world.

“A lot of the mismatch between autistic people and society is not the fault of autistic people, but the fault of society,” said Zoe Gross, the director of advocacy at the Autistic Self Advocacy Network. “People should be taught to interact with people who have different kinds of disabilities.”

Mr. Ravindran said Floreo respected all voices in the autistic community, where needs are diverse. He noted that while Floreo was used by many behavioral health providers, it had been deployed in a variety of contexts, including at schools and in the home.

“The Floreo system is designed to be positive and fun, while creating positive reinforcement to help build skills that help acclimate to the real world,” Mr. Ravindran said.

In 2017, Floreo secured a $2 million fast track grant from the National Institutes of Health. The company is first testing whether autistic children will tolerate headsets, then conducting a randomized control trial to test the method’s usefulness in helping autistic people interact with the police.

Early results have been promising: According to a study published in the Autism Research journal (Mr. Ravindran was one of the authors), 98 percent of the children completed their lessons, quelling concerns about autistic children with sensory sensitivities being resistant to the headsets.

Ms. Gross said she saw potential in virtual reality lessons that helped people rehearse unfamiliar situations, such as Floreo’s lesson on crossing the street. “There are parts of Floreo to get really excited about: the airport walk through, or trick or treating — a social story for something that doesn’t happen as frequently in someone’s life,” she said, adding that she would like to see a lesson for medical procedures.

However, she questioned a general emphasis by the behavioral therapy industry on using emerging technologies to teach autistic people social skills.

A second randomized control trial using telehealth, conducted by Floreo using another N.I.H. grant, is underway, in hopes of showing that Floreo’s approach is as effective as in-person coaching.

But it was those early successes that convinced Mr. Ravindran to commit fully to the project.

“There were just a lot of really excited people,” he said. “When I started showing families what we had developed, people would just give me a big hug. They would start crying that there was someone working on such a high-tech solution for their kids.”

Clinicians who have used the Floreo system say the virtual reality environment makes it easier for children to focus on the skill being taught in the lessons, unlike in the real world where they might be overwhelmed by sensory stimuli.

Celebrate the Children, a nonprofit private school in Denville, N.J., for children with autism and related challenges, hosted one of the early pilots for Floreo; Monica Osgood, the school’s co-founder and executive director, said the school had continued to use the system.

Click here to read the full article on New York Times.

Doctors using AI catch breast cancer more often than either does alone
scan of breast tissue with cancer

By MIT Technology Review

Radiologists assisted by an AI screen for breast cancer more successfully than they do when they work alone, according to new research. That same AI also produces more accurate results in the hands of a radiologist than it does when operating solo.

The large-scale study, published this month in The Lancet Digital Health, is the first to directly compare an AI’s performance in breast cancer screening according to whether it’s used alone or to assist a human expert. The hope is that such AI systems could save lives by detecting cancers doctors miss, free up radiologists to see more patients, and ease the burden in places where there is a dire lack of specialists.

The software being tested comes from Vara, a startup based in Germany that also led the study. The company’s AI is already used in over a fourth of Germany’s breast cancer screening centers and was introduced earlier this year to a hospital in Mexico and another in Greece.

The Vara team, with help from radiologists at the Essen University Hospital in Germany and the Memorial Sloan Kettering Cancer Center in New York, tested two approaches. In the first, the AI works alone to analyze mammograms. In the other, the AI automatically distinguishes between scans it thinks look normal and those that raise a concern, referring the latter to a radiologist, who reviews them before seeing the AI’s assessment. The AI then issues a warning if it detects cancer when the doctor did not.

To train the neural network, Vara fed the AI data from over 367,000 mammograms—including radiologists’ notes, original assessments, and information on whether the patient ultimately had cancer—to learn how to place these scans into one of three buckets: “confident normal,” “not confident” (in which no prediction is given), and “confident cancer.” The conclusions from both approaches were then compared with the decisions real radiologists originally made on 82,851 mammograms sourced from screening centers that didn’t contribute scans used to train the AI.
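The three-bucket triage described above amounts to a simple decision rule wrapped around the model’s output. Here is a minimal sketch, assuming a model that emits a cancer probability; the function name and the threshold values (0.1 and 0.9) are hypothetical stand-ins for illustration, not Vara’s actual system:

```python
# Hedged sketch of the three-bucket triage described above. The real model
# is a neural network; the thresholds here are illustrative placeholders.

def triage(cancer_probability: float,
           low: float = 0.1, high: float = 0.9) -> str:
    """Map a model's cancer probability to one of three buckets.

    Scans the model is confident about are handled automatically;
    everything in between is deferred to a radiologist with no
    prediction shown.
    """
    if cancer_probability <= low:
        return "confident normal"   # report pre-filled for the radiologist
    if cancer_probability >= high:
        return "confident cancer"   # flagged for radiologist review
    return "not confident"          # referred with no prediction given

print(triage(0.03))  # -> confident normal
print(triage(0.50))  # -> not confident
print(triage(0.97))  # -> confident cancer
```

The key design point is the middle bucket: by refusing to predict on ambiguous scans, the system concentrates human attention where the model is least reliable.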

The second approach—doctor and AI working together—was 3.6% better at detecting breast cancer than a doctor working alone, and raised fewer false alarms. It accomplished this while automatically setting aside scans it classified as confidently normal, which amounted to 63% of all mammograms. This intense streamlining could slash radiologists’ workloads.

After breast cancer screenings, patients with a normal scan are sent on their way, while an abnormal or unclear scan triggers follow-up testing. But radiologists examining mammograms miss 1 in 8 cancers. Fatigue, overwork, and even the time of day all affect how well radiologists can identify tumors as they view thousands of scans. Signs that are visually subtle are also generally less likely to set off alarms, and dense breast tissue—found mostly in younger patients—makes signs of cancer harder to see.

Radiologists using the AI in the real world are required by German law to look at every mammogram, at least glancing at those the AI calls fine. The AI still lends them a hand by pre-filling reports on scans labeled normal, though the radiologist can always reject the AI’s call.

Thilo Töllner, a radiologist who heads a German breast cancer screening center, has used the program for two years. He’s sometimes disagreed when the AI classified scans as confident normal and manually filled out reports to reflect a different conclusion, but he says “normals are almost always normal.” Mostly, “you just have to press enter.”

Mammograms the AI has labeled as ambiguous or “confident cancer” are referred to a radiologist—but only after the doctor has offered an initial, independent assessment.

Radiologists classify mammograms on a 0 to 6 scale known as BI-RADS, where higher numbers indicate greater suspicion of cancer. A score of 3 indicates that something is probably benign, but worth checking up on. If Vara has assigned a BI-RADS score of 3 or higher to a mammogram the radiologist labels normal, a warning appears.
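That safety-net rule can be stated in a few lines. A minimal sketch, with illustrative names and the assumption that a “normal” read corresponds to BI-RADS 1 (negative) or 2 (benign); this is not Vara’s actual code:

```python
# Hedged sketch of the disagreement warning described above: the AI warns
# only when it scores the scan BI-RADS 3 or higher while the radiologist's
# independent read was normal. Names and the "normal" set are assumptions.

NORMAL_SCORES = {1, 2}  # BI-RADS 1 (negative) and 2 (benign) count as normal

def should_warn(ai_birads: int, radiologist_birads: int) -> bool:
    """Return True when the AI disagrees strongly enough to raise a warning."""
    return ai_birads >= 3 and radiologist_birads in NORMAL_SCORES

print(should_warn(ai_birads=3, radiologist_birads=1))  # True: warning shown
print(should_warn(ai_birads=2, radiologist_birads=1))  # False: both agree
print(should_warn(ai_birads=5, radiologist_birads=4))  # False: doctor already flagged it
```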

AI generally excels at image classification. So why did Vara’s AI on its own underperform a lone doctor? Part of the problem is that a mammogram alone can’t determine whether someone has cancer—that requires removing and testing the abnormal-looking tissue. Instead, the AI examines mammograms for hints.

Christian Leibig, lead author on the study and director of machine learning at Vara, says that mammograms of healthy and cancerous breasts can look very similar, and both types of scans can present a wide range of visual results. This complicates AI training. So does the low prevalence of cancer in breast screenings (according to Leibig, “in Germany, it’s roughly six in 1,000”). Because AIs trained to catch cancer are mostly trained on healthy breast scans, they can be prone to false positives.

The study tested the AI only on past mammogram decisions and assumed that radiologists would agree with the AI each time it issued a decision of “confident normal” or “confident cancer.” When the AI was unsure, the study defaulted to the original radiologist’s reading. That means it couldn’t test how using AI affects radiologists’ decisions—and whether any such changes may create new risks. Töllner admits he spends less time scrutinizing scans Vara labels normal than those it deems suspicious. “You get quicker with the normals because you get confident with the system,” he says.

Click here to read the full article on MIT Technology Review.

At 17, she was her family’s breadwinner on a McDonald’s salary. Now she’s gone into space

By Jackie Wattles, CNN

A rocket built by Jeff Bezos’ Blue Origin carried its fifth group of passengers to the edge of space, including the first-ever Mexican-born woman to make such a journey.

The 60-foot-tall suborbital rocket took off from Blue Origin’s facilities in West Texas at 9:26 am ET, vaulting a group of six people to more than 62 miles above the Earth’s surface, an altitude widely considered the boundary of outer space, and giving them a few minutes of weightlessness before they parachuted to a landing.

Most of the passengers paid an undisclosed sum for their seats. But Katya Echazarreta, an engineer and science communicator from Guadalajara, Mexico, was selected by a nonprofit called Space for Humanity to join this mission from a pool of thousands of applicants. The organization’s goal is to send “exceptional leaders” to space and allow them to experience the overview effect, a phenomenon frequently reported by astronauts who say that viewing the Earth from space gives them a profound shift in perspective.

Echazarreta told CNN Business that she experienced that overview effect “in my own way.”

“Looking down and seeing how everyone is down there, all of our past, all of our mistakes, all of our obstacles, everything — everything is there,” she said. “And the only thing I could think of when I came back down was that I need people to see this. I need Latinas to see this. And I think that it just completely reinforced my mission to continue getting primarily women and people of color up to space and doing whatever it is they want to do.”

Echazarreta is the first Mexican-born woman to travel to space and the second Mexican after Rodolfo Neri Vela, a scientist who joined one of NASA’s Space Shuttle missions in 1985.

She moved to the United States with her family at the age of seven. She recalls being overwhelmed in a new place where she didn’t speak the language; a teacher warned her she might have to be held back.

“It just really fueled me and I think ever since then, ever since the third grade, I kind of just went off and have not stopped,” Echazarreta recalled in an Instagram interview.

When she was 17 and 18, Echazarreta said she was also the main breadwinner for her family on a McDonald’s salary.

“I had sometimes up to four [jobs] at the same time, just to try to get through college because it was really important for me,” she said.

These days, Echazarreta is working on her master’s degree in engineering at Johns Hopkins University. She previously worked at NASA’s famed Jet Propulsion Laboratory in California. She also boasts a following of more than 330,000 users on TikTok, hosts a science-focused YouTube series and is a presenter on the weekend CBS show “Mission Unstoppable.”

Space for Humanity — which was founded in 2017 by Dylan Taylor, a space investor who recently joined a Blue Origin flight himself — chose her for her impressive contributions. “We were looking for some like people who were leaders in their communities, who have a sphere of influence; people who are doing really great work in the world already, and people who are passionate about whatever that is,” Rachel Lyons, the nonprofit’s executive director, told CNN Business.

Click here to read the full article on CNN.

Disability Inclusion Is Coming Soon to the Metaverse
Disabled avatars from the metaverse in a wheelchair

By Christopher Reardon, PC Mag

When you think of futurism, you probably don’t think of the payroll company ADP—but that’s where Giselle Mota works as the company’s principal consultant on the “future of work.” Mota, who has given a TED Talk and has written for Forbes, is committed to bringing more inclusion and access to the Web3 and metaverse spaces. She’s also been working on a side project called Unhidden, which will provide disabled people with accurate avatars, so they’ll have the option to remain themselves in the metaverse and across Web3.

To See and Be Seen
The goal of Unhidden is to encourage tech companies to be more inclusive, particularly of people with disabilities. The project has launched and already has a partnership with the Wanderland app, which will feature Unhidden avatars through its mixed-reality platform at the VivaTech Conference in Paris and the DisabilityIN Conference in Dallas. The first 12 avatars will come out this summer with Mota, Dr. Tiffany Jana, Brandon Farstein, Tiffany Yu, and other global figures representing disability inclusion.

The above array of individuals is known as the NFTY Collective. Its members hail from countries including America, the UK, and Australia, and the collective represents a spectrum of disabilities, ranging from the invisible type, such as bipolar and other forms of neurodiversity, to the more visible, including hypoplasia and dwarfism.

Hypoplasia causes the underdevelopment of an organ or tissue; for Isaac Harvey, it manifested by leaving him with no arms and short legs. Harvey, a video editor, uses a wheelchair and is the president of Wheels for Wheelchairs. He got involved with Unhidden after being approached by Victoria Jenkins, an inclusive fashion designer who co-created the project with Mota.

Click here to read the full article on PC Mag.

For people with disabilities, AI can only go so far to make the web more accessible
AI technology

By Kate Kaye, Protocol

“It’s a lot to listen to a robot all day long,” said Tina Pinedo, communications director at Disability Rights Oregon, a group that works to promote and defend the rights of people with disabilities.

But listening to a machine is exactly what many people with visual impairments do while using screen reading tools to accomplish everyday online tasks such as paying bills or ordering groceries from an ecommerce site.

“There are not enough web developers or people who actually take the time to listen to what their website sounds like to a blind person. It’s auditorily exhausting,” said Pinedo.

Whether struggling to comprehend a screen reader barking out dynamic updates to a website, trying to make sense of poorly written video captions or watching out for fast-moving imagery that could induce a seizure, the everyday obstacles blocking people with disabilities from a satisfying digital experience are immense.

Technology companies have tried to step in, though they often promise more than they deliver to the users and businesses hoping that automated tools can break down barriers to accessibility. Although automated tools that check website designs for accessibility flaws have been around for some time, companies such as Evinced claim that sophisticated AI not only does a better job of automatically finding and helping correct accessibility problems, but can also do it for large enterprises that need to manage thousands of webpages and pieces of app content.

Still, people with disabilities and those who regularly test for web accessibility problems say automated systems and AI can only go so far. “The big danger is thinking that some type of automation can replace a real person going through your website, and basically denying people of their experience on your website, and that’s a big problem,” Pinedo said.

Why Capital One is betting on accessibility AI
For a global corporation such as Capital One, relying on a manual process to catch accessibility issues is a losing battle.

“We test our entire digital footprint every month. That’s heavily reliant on automation as we’re testing almost 20,000 webpages,” said Mark Penicook, director of Accessibility at the banking and credit card company, whose digital accessibility team is responsible for all digital experiences across Capital One including websites, mobile apps and electronic messaging in the U.S., the U.K. and Canada.

Even though Capital One has a team of people dedicated to the effort, Penicook said he has had to work to raise awareness about digital accessibility among the company’s web developers. “Accessibility isn’t taught in computer science,” Penicook told Protocol. “One of the first things that we do is start teaching them about accessibility.”

One way the company does that is by celebrating Global Accessibility Awareness Day each year, Penicook said. Held annually on the third Thursday of May, the worldwide event is intended to educate people about digital access and inclusion for those with disabilities and impairments.

Before Capital One gave Evinced’s software a try around 2018, its accessibility evaluations for new software releases or features relied on manual review and other tools. Using Evinced’s software, Penicook said the financial services company’s accessibility testing takes hours rather than weeks, and Capital One’s engineers and developers use the system throughout their internal software development testing process.

It was enough to convince Capital One to invest in Evinced through its venture arm, Capital One Ventures. Microsoft’s venture group, M12, also joined a $17 million funding round for Evinced last year.

Evinced’s software automatically scans webpages and other content, and then applies computer vision and visual analysis AI to detect problems. The software might discover a lack of contrast between font and background colors that make it difficult for people with vision impairments like color blindness to read. The system might find images that do not have alt text, the metadata that screen readers use to explain what’s in a photo or illustration. Rather than pointing out individual problems, the software uses machine learning to find patterns that indicate when the same type of problem is happening in several places and suggests a way to correct it.
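One of the checks mentioned above, font/background contrast, follows a published formula: WCAG 2.x defines a relative luminance for each color and a contrast ratio between the two. The sketch below illustrates that public formula; it is not Evinced’s actual code.

```python
# Self-contained WCAG 2.x contrast check: linearize sRGB channels, compute
# relative luminance, then take the ratio (lighter + 0.05) / (darker + 0.05).

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light per the WCAG formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """Relative luminance of an (R, G, B) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio, always >= 1 (1:1 means identical colors)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum, 21:1; WCAG AA requires 4.5:1 for body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)     # True
```

A tool like the one the article describes would run a check of this sort over every text element's computed styles, then cluster the failures so a shared stylesheet bug surfaces as one issue rather than thousands.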

“It automatically tells you, instead of a thousand issues, it’s actually one issue,” said Navin Thadani, co-founder and CEO of Evinced.

The software also takes context into account, factoring in the purpose of a site feature or considering the various operating systems or screen-reader technologies that people might use when visiting a webpage or other content. For instance, it identifies user design features that might be most accessible for a specific purpose, such as a button to enable a bill payment transaction rather than a link.

Some companies use tools typically referred to as “overlays” to check for accessibility problems. Many of those systems are web plug-ins that add a layer of automation on top of existing sites to enable modifications tailored to peoples’ specific requirements. One product that uses computer vision and machine learning, accessiBe, allows people with epilepsy to choose an option that automatically stops all animated images and videos on a site before they could pose a risk of seizure. The company raised $28 million in venture capital funding last year.

Another widget from TruAbilities offers an option that limits distracting page elements to allow people with neurodevelopmental disorders to focus on the most important components of a webpage.

Some overlay tools have been heavily criticized for adding new annoyances to the web experience and providing surface-level responses to problems that deserve more robust solutions. Some overlay tech providers have “pretty brazen guarantees,” said Chase Aucoin, chief architect at TPGi, a company that provides accessibility automation tools and consultation services to customers, including software development monitoring and product design assessments for web development teams.

“[Overlays] give a false sense of security from a risk perspective to the end user,” said Aucoin, who himself experiences motor impairment. “It’s just trying to slap a bunch of paint on top of the problem.”

In general, complicated site designs or interfaces that automatically hop to a new page section or open a new window can create a chaotic experience for people using screen readers, Aucoin said. “A big thing now is just cognitive; how hard is this thing for somebody to understand what’s going on?” he said.

Even more sophisticated AI-based accessibility technologies don’t address every disability issue. For instance, people with an array of disabilities either need or prefer to view videos with captions, rather than having sound enabled. However, although automated captions for videos have improved over the years, “captions that are computer-generated without human review can be really terrible,” said Karawynn Long, an autistic writer with central auditory processing disorder and hyperlexia, a hyperfocus on written language.

“I always appreciate when written transcripts are included as an option, but auto-generated ones fall woefully short, especially because they don’t include good indications of non-linguistic elements of the media,” Long said.

Click here to read the full article on Protocol.


Upcoming Events

  1. City Career Fair
    January 19, 2022 - November 4, 2022
  2. The Small Business Expo–Multiple Event Dates
    February 17, 2022 - December 1, 2022
  3. 44th Annual BDPA National Conference
    August 18, 2022 - August 20, 2022
  4. Diversity Alliance for Science (DA4S) West Coast Conference
    August 30, 2022 - September 1, 2022
  5. Diversity Alliance for Science (DA4S) Matchmaking Events
    September 1, 2022
  6. Commercial UAV Expo Americas
    September 6, 2022 - September 8, 2022
