Terrence Howard Claims He Invented ‘New Hydrogen Technology’ To Defend Uganda

Terrence Howard on the red carpet for

By BET

Former Empire actor and red-carpet scientist Terrence Howard is currently visiting Uganda as part of a government effort to draw investors from the African diaspora to the nation. He claims to have what it takes to change the world.

According to Vice, Howard made a lofty presentation on Wednesday, July 13, addressing officials and claiming to have developed a “new hydrogen technology.”

Famously, Howard argued in Rolling Stone that one times one equals two, and now he says his new system, The Lynchpin, would be able to clean the ocean and defend Uganda from exploitation via cutting-edge drone technology. The proprietary technology he announced in a 2021 press release is said to hold 86 patents.

“I was able to identify the grand unified field equation they’ve been looking for and put it into geometry,” he shared in front of an audience of Ugandan dignitaries. “We’re talking about unlimited bonding, unlimited predictable structures, supersymmetry.”

“The Lynchpins are now able to behave as a swarm, as a colony, that can defend a nation, that can harvest food, that can remove plastics from the ocean, that can give the children of Uganda and the people of Uganda an opportunity to spread this and sell these products throughout the world,” he added.

Howard, who briefly quit acting in 2019 only to come out of retirement in 2020, has seemingly made rewriting history a personal side hustle. According to Vice, he made nebulous claims that rapidly went viral on social media, saying, “I’ve made some discoveries in my own personal life with the science that, y’know, Pythagoras was searching for. I was able to open up the flower of life properly and find the real wave conjugations we’ve been looking for 10,000 years.”

While his latest claims have yet to be clarified, Howard was invited to speak by Frank Tumwebaze, the minister of agriculture, animal industries, and fishery.

Click here to read the full article on BET.

The latest video game controller isn’t plastic. It’s your face.
Dunn playing “Minecraft” using voice commands on the Enabled Play controller, face expression controls via a phone and virtual buttons on Xbox's adaptive controller. (Courtesy of Enabled Play Game Controller)

By Amanda Florian, The Washington Post

Over decades, input devices in the video game industry have evolved from simple joysticks to sophisticated controllers that emit haptic feedback. But with Enabled Play, a new piece of assistive tech created by self-taught developer Alex Dunn, users are embracing a different kind of input: facial expressions.

While companies like Microsoft have sought to expand accessibility through adaptive controllers and accessories, Dunn’s new device takes those efforts even further, translating users’ head movements, facial expressions, real-time speech and other nontraditional input methods into mouse clicks, key strokes and thumbstick movements. The device has users raising eyebrows — quite literally.

“Enabled Play is a device that learns to work with you — not a device you have to learn to work with,” Dunn, who lives in Boston, said via Zoom.

Dunn, 26, created Enabled Play so that everyone — including his younger brother with a disability — can interface with technology in a natural and intuitive way. At the beginning of the pandemic, the only thing he and his New Hampshire-based brother could do together, while approximately 70 miles apart, was game.

“And that’s when I started to see firsthand some of the challenges that he had and the limitations that games had for people with really any type of disability,” he added.

At 17, Dunn dropped out of Worcester Polytechnic Institute to become a full-time software engineer. He began researching and developing Enabled Play two and a half years ago, which initially proved challenging, as most speech-recognition programs lagged in response time.

“I built some prototypes with voice commands, and then I started talking to people who were deaf and had a range of disabilities, and I found that voice commands didn’t cut it,” Dunn said.

That’s when he started thinking outside the box.

Having already built Suave Keys, a voice-powered program for gamers with disabilities, Dunn created Snap Keys — an extension that turns a user’s Snapchat lens into a controller when playing games like “Call of Duty,” “Fall Guys,” and “Dark Souls.” In 2020, he won two awards for his work at Snap Inc.’s Snap Kit Developer Challenge, a competition among third-party app creators to innovate Snapchat’s developer tool kit.

With Enabled Play, Dunn takes accessibility to the next level. With a wider variety of inputs, users can connect the assistive device — equipped with a robust CPU and 8 GB of RAM — to a computer, game console or other device to play games in whatever way works best for them.

Dunn also spent time making sure Enabled Play was accessible to people who are deaf, as well as people who want to use nonverbal audio input, like “ooh” or “aah,” to perform an action. Enabled Play’s vowel sound detection model is based on “The Vocal Joystick,” which engineers and linguistics experts at the University of Washington developed in 2006.

“Essentially, it looks to predict the word you are going to say based on what is in the profile, rather than trying to assume it could be any word in the dictionary,” Dunn said. “This helps cut through machine learning bias by learning more about how the individual speaks and applies it to their desired commands.”
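
To make that idea concrete, here is a minimal sketch of profile-constrained command matching, in which recognized speech is compared only against a small per-user set of phrases rather than the whole dictionary. The profile contents, similarity scoring, and threshold below are illustrative assumptions, not Enabled Play's actual implementation.

```python
# Illustrative sketch only -- not Enabled Play's code.
# Match what was heard against a small per-user command profile
# instead of assuming it could be any word in the dictionary.
from difflib import SequenceMatcher

# Hypothetical user profile: spoken phrase -> key to press.
PROFILE = {
    "jump": "SPACE",
    "reload": "R",
    "crouch": "CTRL",
}

def best_command(heard, profile, threshold=0.6):
    """Return the key for the profile phrase most similar to the input,
    or None if nothing in the profile is a close enough match."""
    scored = [(SequenceMatcher(None, heard.lower(), phrase).ratio(), key)
              for phrase, key in profile.items()]
    score, key = max(scored)
    return key if score >= threshold else None

print(best_command("jmp", PROFILE))    # "SPACE" -- close enough to "jump"
print(best_command("hello", PROFILE))  # None -- nothing in the profile matches
```

Constraining recognition to the user's own profile is what lets a system adapt to an individual's speech rather than fighting a general-purpose vocabulary.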

Dunn’s AI-enabled controller takes into account a person’s natural tendencies. If a gamer wants to set up a jump command every time they open their mouth, Enabled Play would identify that person’s individual resting mouth position and set that as the baseline.
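
A rough sketch of that calibration idea follows, assuming a hypothetical mouth-openness reading between 0 and 1 from some face-tracking library; the margin and sample values are invented for illustration, and this is not Enabled Play's code.

```python
import statistics

def calibrate_baseline(resting_samples):
    # Use the median of a few seconds of at-rest readings as this
    # user's personal baseline mouth position.
    return statistics.median(resting_samples)

def should_jump(openness, baseline, margin=0.15):
    # Trigger the jump command only when the mouth opens noticeably
    # beyond the user's own resting position, not past an absolute value.
    return openness > baseline + margin

resting = [0.12, 0.10, 0.14, 0.11, 0.13]  # hypothetical at-rest readings
baseline = calibrate_baseline(resting)     # 0.12 for this user
print(should_jump(0.16, baseline))         # False: within normal variation
print(should_jump(0.40, baseline))         # True: mouth clearly opened
```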

In January, Enabled Play officially launched in six countries — its user base extending from the U.S. to the U.K., Ghana and Austria. For Dunn, one of his primary goals was to fill a gap in accessibility and pricing compared to other assistive gaming devices.

“There are things like the Xbox Adaptive Controller. There are things like the HORI Flex [for Nintendo Switch]. There are things like Tobii, which does eye-tracking and stuff like that. But it still seemed like it wasn’t enough,” he said.

Compared to some devices that are only compatible with one gaming system or computer at a time, Dunn’s AI-enabled controller — priced at $249.99 — supports a combination of inputs and outputs. Speech therapists say that compared to augmentative and alternative communication (AAC) devices, which are medically essential for some with disabilities, Dunn’s device offers simplicity.

“This is just the start,” said Julia Franklin, a speech language pathologist at Community School of Davidson in Davidson, N.C. Franklin introduced students to Enabled Play this summer and feels it’s a better alternative to other AAC devices on the market that are often “expensive, bulky and limited” in usability. Many sophisticated AAC systems can range from $6,000 to $11,500 for high-tech devices, with low-end eye-trackers running in the thousands. A person may also download AAC apps on their mobile devices, which range from $49.99 to $299.99 for the app alone.

“For many people who have physical and cognitive differences, they often exhaust themselves to learn a complex AAC system that has limits,” she said. “The Enabled Play device allows individuals to leverage their strengths and movements that are already present.”

Internet users have applauded Dunn for his work, noting that asking for accessibility should not equate to asking for an “easy mode” — a misconception often cited by critics of making games more accessible.

“This is how you make gaming accessible,” one Reddit user wrote about Enabled Play. “Not by dumbing it down, but by creating mechanical solutions that allow users to have the same experience and accomplish the same feats as [people without disabilities].” Another user who said they regularly worked with young patients with cerebral palsy speculated that Enabled Play “would quite literally change their lives.”

Click here to read the full article on The Washington Post.

Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation

By Unite

Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.

The new approach can not only distinguish between unaffected and affected subjects, but can also correctly distinguish depression from schizophrenia, and can estimate the degree to which the patient is currently affected by the disease.

The researchers have provided a composite image that represents the control group for their tests (on the left in the image below) and the patients who are suffering from mental disorders (right). The identities of multiple people are blended in the representations, and neither image depicts a particular individual:

Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.

Until now, facial affect recognition has been primarily used as a potential tool for basic diagnosis. The new approach, instead, offers a possible method to evaluate patient progress throughout treatment, or else (potentially, though the paper does not suggest it) in their own domestic environment for outpatient monitoring.

The paper states:

‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’

The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.

The authors conclude that OEG could potentially be not merely a secondary aid to diagnosis and treatment but, in the long term, a replacement for certain evaluative parts of the treatment pipeline, one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:

‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’

However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.

The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.

Data

(The new paper deals mostly with the various theories and methods that are currently popular in patient diagnosis of mental disorders, with less attention than is usual to the actual technologies and processes used in the tests and various experiments.)

Data-gathering took place at University Hospital Aachen, with 100 gender-balanced patients and a control group of 50 non-affected people. The patients included 35 people suffering from schizophrenia and 65 suffering from depression.

For the patient portion of the test group, initial measurements were taken at the time of first hospitalization, and the second prior to their discharge from hospital, spanning an average interval of 12 weeks. The control group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.

In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.

However, the data-gathering stage obtained additional data more suited for machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech c270 consumer webcam running at 25fps.

The first session consisted of a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions, and asked to mimic each of these, while stating their own estimation of their mental condition at that time, including emotional state and intensity. This phase lasted around ten minutes.

In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.

Click here to read the full article on Unite.

Meet Afro-Latina Scientist Dr. Jessica Esquivel
Dr. Jessica Esquivel

By Erica Nahmad, Be Latina

It’s undeniable that representation matters and the idea of what a scientist could or should look like is changing, largely thanks to pioneers like Afro-Latina scientist Dr. Jessica Esquivel, who is breaking barriers for women in STEM one step at a time.

Dr. Esquivel isn’t just extraordinary because of what she is capable of as an Afro-Latina astrophysicist — she’s also extraordinary in her vulnerability and relatability. She’s on a mission to break barriers in science and to show the humanity behind scientists.

Dr. Esquivel makes science accessible to everyone, no matter what you look like or where you come from. As one of the only Afro-Latina scientists in her field, and one of the only women who looked like her to pursue a Ph.D. in physics, Dr. Esquivel knows a thing or two about the importance of representation, especially in STEM fields and science labs.

Women make up only 28% of the science, technology, engineering, and math workforce in the U.S. Those disparities are even more severe when you start to look at minority populations.

“When you start looking at the intersections of race and gender and then even sexuality, those numbers drop significantly,” Esquivel told CBS Chicago. “There are only about 100 to 150 black women with their Ph.D. in physics in the country!”

Fighting against the isolation of uniqueness
Dr. Jessica Esquivel recalls being a nontraditional student and being “the only” when she entered graduate school for physics — the only woman in her class, the only Black person, the only Mexican, the only lesbian — and all of that made her feel very isolated.

“On top of such rigorous material, the isolation and otherness that happens due to being the only or one of few is an added burden marginalized people, especially those with multiple marginalized identities, have to deal with,” Dr. Esquivel told BeLatina in an email interview. On top of feeling like an outsider, isolation was also consuming. “Being away from family at a predominately white institution, where the number of microaggressions was constant, really affected my mental health and, in turn, my coursework and research, so it was important to surround myself with mentors who supported me and believed in my ability to be a scientist.”

While she anticipated that the physics curriculum would be incredibly challenging, she was definitely not prepared for how hard the rest of the experience would be and how it would impact her as a student and a scientist.

The challenges she faced professionally and personally made her realize early on just how crucial representation is in academia and all fields, but especially in STEM. “It was really impactful for me to learn that there were other Black women who had made it out of the grad school metaphorical trenches. It’s absolutely important to create inclusive spaces where marginalized people, including Black, Latina, and genderqueer people, can thrive,” she said.

“The secrets of our universe don’t discriminate, these secrets can and should be unraveled by all those who wish to embark on that journey, and my aim is to clear as many barriers and leave these physics spaces better than I entered them.”

When inclusion and equal opportunities are the ultimate goal
Dr. Jessica Esquivel isn’t just dedicating her time and energy to studying complex scientific concepts — think quantum entanglement, space-time fabric, the building blocks of the universe… some seriously abstract physics concepts straight out of a sci-fi movie, as she explains. On top of her research, she has put in extra work to show people, especially younger generations of women of color, that the physics and STEM world is not an old white men’s club where prestigious knowledge is reserved for the few. Dr. Esquivel is an expert in her field; she knows things that no one else currently knows, and she has the ability to pass that knowledge on to others. There is a place for everyone, including people who look like her, in the STEM world, and she’s on a mission to inspire others while working to increase diversity, equity, and inclusion in the STEM space.

“Many of us who are underrepresented in STEM have taken on the responsibility of spearheading institutional change toward more just, equitable, and inclusive working environments as a form of survival,” she explains. “I’m putting in more work on top of the research I do because I recognize that I do better research if I feel supported and if I feel like I can bring my whole self to my job. My hope is that one day Black and brown women and gender-queer folks interested in science can pursue just that and not have to fight for their right to be a scientist or defend that they are worthy of doing science.”

Click here to read the full article on Be Latina.

Your favourite Instagram face might not be a human. How AI is taking over influencer roles
South Korean influencer Rozy has over 130,000 followers on Instagram.

By Mint

South Korean influencer Rozy has over 130,000 followers on Instagram. She posts photos of globetrotting adventures; she sings, dances and models. Unlike most popular faces on the platform, however, Rozy is not a real human. Yet this digitally rendered being looks so real that she is often mistaken for flesh and blood.

How was Rozy designed?
Sidus Studio X, the Seoul-based company that created Rozy, describes her as a blended personality – part human, part AI, and part robot. She is “able to do everything that humans cannot … in the most human-like form,” the company says on its website.

Sidus Studio X explains that it sometimes creates an image of Rozy from head to toe, while at other times it superimposes her head onto the body of a human model.

Rozy was launched in 2020, and since then she has landed several brand deals and sponsorships, participated in several virtual fashion shows and released two singles.

And according to a CNN report, Rozy is not alone; there are several others like her. Facebook and Instagram together have more than 200 virtual influencers on their platforms.

The CGI (computer-generated imagery) technology behind Rozy isn’t new. It is ubiquitous in today’s entertainment industry, where artists use it to craft realistic nonhuman characters in movies, computer games and music videos. But it has only recently been used to make influencers, the report reads.

South Korean retail brand Lotte Home Shopping created its virtual influencer — Lucy, who now has 78,000 Instagram followers.

Lee Bo-hyun, a Lotte representative, said that Lucy’s image is more than a pretty face. She studied industrial design and works in car design. She posts about her job and interests, such as her love for animals and kimbap — rice rolls wrapped in seaweed.

There is a risk attached
However, there is always a risk attached. Facebook and Instagram’s parent company Meta has acknowledged the risks.

In a blog post, the company said, “Like any disruptive technology, synthetic media has the potential for both good and harm. Issues of representation, cultural appropriation and expressive liberty are already a growing concern.”

“To help brands navigate the ethical quandaries of this emerging medium and avoid potential hazards, (Meta) is working with partners to develop an ethical framework to guide the use of (virtual influencers).”

However, even though the older generation is quite skeptical, younger users are comfortable communicating with virtual influencers.

Lee Na-kyoung, a 23-year-old living in Incheon, began following Rozy about two years ago thinking she was a real person. Rozy followed her back, sometimes commenting on her posts, and a virtual friendship blossomed — one that has endured even after Lee found out the truth, the CNN report said.

“We communicated like friends and I felt comfortable with her — so I don’t think of her as an AI but a real friend,” Lee said.

Click here to read the full article on Mint.

Lack of women in hi-tech is a ‘vicious issue’ that must be solved – Female execs.
diverse students looking at computer screen in a college classroom environment with female execs

By Zachy Hennessey, The Jerusalem Post

“Let’s start by establishing that hi-tech is really the best place for women,” began Dorit Dor, Chief Product Officer for Check Point, during a panel at Tuesday night’s inaugural Women’s Entrepreneurship Summit from The Jerusalem Post and WE (Women’s Entrepreneurship). During the event, executives from throughout the hi-tech industry gathered to share their knowledge and experience with female entrepreneurs across the country.

Dor elaborated on the juxtaposition between the many good opportunities for women in hi-tech and the relative lack of their presence in the sector. “As well as learning technology, it’s the best opportunity for getting paid,” she said. “It’s the best opportunity for life balance because you could work from home in all the hi-tech industry, it’s the best for every reason you could think of to work in high tech – and still very few select this.”

“We have an issue,” she continued, and explained why she believes the current branding of hi-tech is repulsive for diverse groups of workers. “For example, in cyber, you wear a hoodie and drink a lot of coke, or the men doing it in high school are not socially acceptable,” she said. These impressions make women fearful that they wouldn’t be socially accepted if they were in the industry, Dor suggested.

Besides problematic branding, the hi-tech industry presents several other hurdles for women, explained Dor, including the requirement to “opt in” in order to achieve success and the need to advocate loudly for themselves. “Usually, women don’t do this very well,” she said.

In an effort to correct these issues, Check Point runs initiatives helping young kids choose hi-tech and mentoring women to speak up for themselves and pursue promotion. “In the end, if you had a whole list of [mid-level employees] that are women, maybe that would help as well,” she said.

“Cyber security is obviously one of the biggest trends in the Israeli eco-system, as attackers become more sophisticated, so will our solutions be more effective and comprehensive,” said Badian.

“Half of all engineers in Microsoft Israel R&D are focused on cyber security products and bring innovation to that field, so we can be prepared for the threats of the future,” she added.

“Another big trend we see on the rise is climate tech, I’m confident we will see the Israeli entrepreneurial spirit tackle this important issue and we hope to see more and more technological solutions for what might be one of the biggest challenges facing us all,” she concluded.

Investment in women isn’t doing well
Yifat Oron is a senior managing director at Blackstone, an investment firm with $941 billion in assets under management. She elaborated on the current state of investment in female entrepreneurs, which isn’t doing gangbusters, to say the least.

“$330b. invested in tech by VCs last year – what’s the percentage invested in women entrepreneurs? Two percent,” Oron remarked. “A little less bad is the amount of money invested in companies that have women in the founding team: 16%. It’s still very bad.”

By way of explanation, Oron indicated that the lack of investment in women stems from a lack of female investors.

“The statistics are not glamorous at all. It’s [something like] 15% of general partners [GPs] are women,” she said, while acknowledging that even as little as 10 years ago, these numbers wouldn’t be as “high” – in this sense, some progress has been made. Regardless, she pointed out, “If we’re not going to have GPs that are women, we’re not going to have entrepreneurs that are women.”

To help female entrepreneurship along, Oron explained that “Blackstone – as did most older investment firms – had to do some work to elevate the number of women investors, because this is very much a men-led business.”

As such, Blackstone has made an effort to train and hire women, launch mentorship programs and invest in hi-tech awareness in high schools. These efforts have been fairly effective.

“Half of our incoming class this year of new employees are women; hopefully most of them are going to stay throughout their careers with us,” Oron said. Last year, Blackstone invested $10b. in women-led companies.

These successes are not just happenstance, however.

“It’s not happening just because it’s happening,” noted Oron. “We’re doing a lot of work, and everybody here who is employing people needs to take charge and make sure they spend a lot of energy on that as well.”

She concluded with a note regarding the importance of female representation in the business hierarchy. “If you want to be able to do the right thing, you have to have a well-balanced leadership,” she said.

“Not necessarily just CEOs; you have to have a lot of women represented well across every single layer of the organization. Research has shown that heterogeneous leadership and boards perform better than homogeneous ones. It’s pretty simple.”

Click here to read the full article on The Jerusalem Post.

GM just secured enough cathode material for 5 million electric vehicles
GM garage filled with white vans

By Andrew J. Hawkins, The Verge

General Motors needs a lot of cathode active materials (CAM) if it’s to reach its goal of making enough electric vehicles to become a completely carbon neutral company by 2040. How much is enough? How about 950,000 tons of the stuff.

GM now says it’s reached a deal with LG Chem, one of South Korea’s premier battery making firms, to lock down a supply of CAM starting later this year. CAM is basically what makes a battery a battery, consisting of components like processed nickel, lithium and other materials, and representing about 40 percent of the total cost of a battery cell.

The majority of EV battery cathodes are made with NCM — nickel, cobalt, and manganese. Cobalt is a key component in this mix, but it’s also the most expensive material in the battery and mined under conditions that often violate human rights, leading it to be called the “blood diamond of batteries.” As a result, GM and other companies like Tesla are rushing to create a cobalt-free battery. GM’s Ultium batteries, for example, will add aluminum — making the mix NCMA — and reduce the cobalt content by 70 percent.

LG Chem will begin supplying CAM to the automaker starting in the latter half of 2022 and lasting until 2030. GM says this will be enough battery material to power approximately 5 million electric vehicles, which should help the company in its quest to catch up to Tesla.

GM has said it plans to spend $30 billion by 2025 on the creation of 30 new plug-in models in its bid to overtake Elon Musk’s company as the leading EV company in the world. Tesla still dominates the relatively small EV market in the US, with around 66 percent market share, while GM only has around 6 percent. This year, the company was even outsold by legacy auto rivals like Ford and Hyundai, according to CNBC.

In a furious bid to catch up and become more vertically integrated, GM is trying to get a stronger grasp on its supply chain, which includes battery manufacturing. The company has said it will spend over $4 billion on the construction of two battery factories in North America in partnership with South Korea’s LG Chem.

GM said today that it will also explore localizing a CAM production facility with LG Chem by the end of 2025. Previously, the company announced that it will construct a new cathode factory in North America in a joint venture with South Korea’s Posco Chemical.

Click here to read the full article on The Verge.

Six Flags Is Making Its Parks More Accessible for Visitors with Special Needs
Six Flags

By Antonia DeBianchi, People

Six Flags has announced it is expanding accessibility for park-goers with special needs.

On Thursday, the theme park company shared some new initiatives that are intended to make the amusement parks more inclusive. One of the new safety programs includes a special “restraint harness” for all Six Flags thrill rides for guests with some physical disabilities, per a release.

Six Flags, which has over 20 theme parks around the U.S., Canada and Mexico, notes that 98% of rides have an “individually designed harness.” The new innovation has multiple sizes to accommodate park-goers with “physical disabilities such as a missing limb or appendages starting at 54″ tall.”

“Six Flags is proud to be the industry leader on these innovative programs that allows our guests to enjoy the more thrilling rides that our parks have to offer,” Selim Bassoul, Six Flags President and CEO, said in a statement.

Along with the new harness, the amusement park company announced that all properties are now accredited as Certified Autism Centers in partnership with the International Board of Credentialing and Continuing Education Standards (IBCCES). Park leadership will be trained in helping provide various support elements for guests with autism.

Included in this initiative are special guides to help visitors plan the day, highlighting sensory impacts of each attraction and ride.

Six Flags joins other major theme parks that are already Certified Autism Centers, including SeaWorld Orlando, Sesame Place San Diego and Legoland Florida Resort.

“This offering, coupled with the IBCCES certification at our parks, shows our unwavering commitment to diversity, equity and inclusion. Our company is truly dedicated to this initiative and making sure that encompasses our guests with abilities and disabilities,” Bassoul added.

Some more features that the parks will offer as Certified Autism Centers are “low sensory areas” to allow visitors who have sensory sensitivities to take a break in a calm environment. Trained team members will also be on hand to assist park-goers, according to the release.

Click here to read the full article on People.

Gamifying Fear: VR Exposure Therapy Shown To Be Effective At Treating Severe Phobias
Girl using virtual reality goggles watching spider. Photo: Donald Iain Smith/Getty Images

By Cassidy Ward, SyFy

In the 2007 horror film House of Fears (now streaming on Peacock!), a group of teenagers enters the titular haunted house the night before it is set to open. Once inside, they encounter a grisly set of horrors leaving some of them dead and others terrified. For many, haunted houses are a fun way to intentionally trigger a fear response. For others, fear is something they live with on a daily basis and it’s anything but fun.

Roughly 8% of adults report a severe fear of flying; between 3 and 15% endure a fear of spiders; and between 3 and 6% have a fear of heights. Taken together, along with folks who have a fear of needles, dogs, or any number of other life-altering phobias, there’s a good chance you know someone who is living with a fear serious enough to impact their lives. You might even have such a phobia yourself.

There are, thankfully, a number of treatments a person can undergo in order to cope with a debilitating phobia. However, those treatments often require traveling someplace else and having access to medical care, something which isn’t always available or possible. With that in mind, scientists from the Department of Psychological Medicine at the University of Otago have investigated the use of virtual reality to remotely treat severe phobias with digital exposure therapy. Their findings were published in the Australian and New Zealand Journal of Psychiatry.

Prior studies into the efficacy of virtual reality for the treatment of phobias were reliant on high-end VR rigs which can be expensive and difficult to acquire for the average patient. They also focused on specific phobias. The team at the University of Otago wanted something that could reach a higher number of patients, both in terms of content and access to equipment.

They used oVRcome, a widely available smartphone app anyone can download from their phone’s app store. The app has virtual reality content related to a number of common phobias in addition to the five listed above. Moreover, because it runs on your smartphone, it can be experienced using any number of affordable VR headsets which your phone slides into.

Participants enter their phobias and their severity on a scale and are presented with a series of virtual experiences designed to gently and progressively expose the user to their fear. The study involved 129 people between the ages of 18 and 64, all of whom reported at least one of the five target phobias. They used oVRcome over the course of six weeks with weekly emailed questionnaires measuring their progress. Participants also had access to a clinical psychologist in the event that they experienced any adverse effects from the study.

Participants were given a baseline score measuring the severity of their phobia and were measured again at a follow up 12 weeks after the start of the program. At baseline, participants averaged a score of 28 out of 40, indicating moderate to severe symptoms. By the end of the trial, the average score was down to 7, indicating minimal symptoms. Some participants even indicated they had overcome their phobia to the extent that they felt comfortable booking a flight, scheduling a medical procedure involving needles, or capturing and releasing a spider from their home, something they weren’t comfortable doing at the start.

Part of what makes the software so effective is the diversity of programming available and the ability for individuals to tailor the program to their own unique experience. Additionally, exposure therapy is coupled with additional virtual modules including relaxation, mindfulness, cognitive techniques, and psychoeducation.

Click here to read the full article on SyFy.

Can Virtual Reality Help Autistic Children Navigate the Real World?
Mr. Ravindran adjusts his son’s VR headset between lessons. “It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said of the time when his son used Google Street View through a headset, then went into his playroom and acted out what he had experienced in VR. “It ended up being a light bulb moment.”

By Gautham Nagesh, New York Times

This article is part of Upstart, a series on young companies harnessing new science and technology.

Vijay Ravindran has always been fascinated with technology. At Amazon, he oversaw the team that built and started Amazon Prime. Later, he joined the Washington Post as chief digital officer, where he advised Donald E. Graham on the sale of the newspaper to his former boss, Jeff Bezos, in 2013.

By late 2015, Mr. Ravindran was winding down his time at the renamed Graham Holdings Company. But his primary focus was his son, who was then 6 years old and undergoing therapy for autism.

“Then an amazing thing happened,” Mr. Ravindran said.

Mr. Ravindran was noodling around with a virtual reality headset when his son asked to try it out. After spending 30 minutes using the headset in Google Street View, the child went to his playroom and started acting out what he had done in virtual reality.

“It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said. “It ended up being a light bulb moment.”

Like many autistic children, Mr. Ravindran’s son struggled with pretend play and other social skills. His son’s ability to translate his virtual reality experience to the real world sparked an idea. A year later, Mr. Ravindran started a company called Floreo, which is developing virtual reality lessons designed to help behavioral therapists, speech therapists, special educators and parents who work with autistic children.

The idea of using virtual reality to help autistic people has been around for some time, but Mr. Ravindran said the widespread availability of commercial virtual reality headsets since 2015 had enabled research and commercial deployment at much larger scale. Floreo has developed almost 200 virtual reality lessons that are designed to help children build social skills and train for real world experiences like crossing the street or choosing where to sit in the school cafeteria.

Last year, as the pandemic exploded demand for telehealth and remote learning services, the company delivered 17,000 lessons to customers in the United States. Experts in autism believe the company’s flexible platform could go global in the near future.

That’s because the demand for behavioral and speech therapy as well as other forms of intervention to address autism is so vast. Getting a diagnosis for autism can take months — crucial time in a child’s development when therapeutic intervention can be vital. And such therapy can be costly and require enormous investments of time and resources by parents.

The Floreo system requires an iPhone (version 7 or later) and a V.R. headset (a low-end model costs as little as $15 to $30), as well as an iPad, which can be used by a parent, teacher or coach in-person or remotely. The cost of the program is roughly $50 per month. (Floreo is currently working to enable insurance reimbursement, and has received Medicaid approval in four states.)

A child dons the headset and navigates the virtual reality lesson, while the coach — who can be a parent, teacher, therapist, counselor or personal aide — monitors and interacts with the child through the iPad.

The lessons cover a wide range of situations, such as visiting the aquarium or going to the grocery store. Many of the lessons involve teaching autistic children, who may struggle to interpret nonverbal cues, to interpret body language.

Autistic self-advocates note that behavioral therapy to treat autism is controversial among those with autism, arguing that it is not a disease to be cured and that therapy is often imposed on autistic children by their non-autistic parents or guardians. Behavioral therapy, they say, can harm or punish children for behaviors such as fidgeting. They argue that rather than conditioning autistic people to act like neurotypical individuals, society should be more welcoming of them and their different manner of experiencing the world.

“A lot of the mismatch between autistic people and society is not the fault of autistic people, but the fault of society,” said Zoe Gross, the director of advocacy at the Autistic Self Advocacy Network. “People should be taught to interact with people who have different kinds of disabilities.”

Mr. Ravindran said Floreo respected all voices in the autistic community, where needs are diverse. He noted that while Floreo was used by many behavioral health providers, it had been deployed in a variety of contexts, including at schools and in the home.

“The Floreo system is designed to be positive and fun, while creating positive reinforcement to help build skills that help acclimate to the real world,” Mr. Ravindran said.

In 2017, Floreo secured a $2 million fast track grant from the National Institutes of Health. The company is first testing whether autistic children will tolerate headsets, then conducting a randomized control trial to test the method’s usefulness in helping autistic people interact with the police.

Early results have been promising: According to a study published in the Autism Research journal (Mr. Ravindran was one of the authors), 98 percent of the children completed their lessons, quelling concerns about autistic children with sensory sensitivities being resistant to the headsets.

Ms. Gross said she saw potential in virtual reality lessons that helped people rehearse unfamiliar situations, such as Floreo’s lesson on crossing the street. “There are parts of Floreo to get really excited about: the airport walk through, or trick or treating — a social story for something that doesn’t happen as frequently in someone’s life,” she said, adding that she would like to see a lesson for medical procedures.

However, she questioned a general emphasis by the behavioral therapy industry on using emerging technologies to teach autistic people social skills.

A second randomized control trial using telehealth, conducted by Floreo using another N.I.H. grant, is underway, in hopes of showing that Floreo’s approach is as effective as in-person coaching.

But it was those early successes that convinced Mr. Ravindran to commit fully to the project.

“There were just a lot of really excited people,” he said. “When I started showing families what we had developed, people would just give me a big hug. They would start crying that there was someone working on such a high-tech solution for their kids.”

Clinicians who have used the Floreo system say the virtual reality environment makes it easier for children to focus on the skill being taught in the lessons, unlike in the real world where they might be overwhelmed by sensory stimuli.

Celebrate the Children, a nonprofit private school in Denville, N.J., for children with autism and related challenges, hosted one of the early pilots for Floreo; Monica Osgood, the school’s co-founder and executive director, said the school had continued to use the system.

Click here to read the full article on New York Times.

Doctors using AI catch breast cancer more often than either does alone
scan of breast tissue with cancer

By MIT Technology Review

Radiologists assisted by an AI screen for breast cancer more successfully than they do when they work alone, according to new research. That same AI also produces more accurate results in the hands of a radiologist than it does when operating solo.

The large-scale study, published this month in The Lancet Digital Health, is the first to directly compare an AI’s performance in breast cancer screening according to whether it’s used alone or to assist a human expert. The hope is that such AI systems could save lives by detecting cancers doctors miss, free up radiologists to see more patients, and ease the burden in places where there is a dire lack of specialists.

The software being tested comes from Vara, a startup based in Germany that also led the study. The company’s AI is already used in over a fourth of Germany’s breast cancer screening centers and was introduced earlier this year to a hospital in Mexico and another in Greece.

The Vara team, with help from radiologists at the Essen University Hospital in Germany and the Memorial Sloan Kettering Cancer Center in New York, tested two approaches. In the first, the AI works alone to analyze mammograms. In the other, the AI automatically distinguishes between scans it thinks look normal and those that raise a concern. It refers the latter to a radiologist, who would review them before seeing the AI’s assessment. Then the AI would issue a warning if it detected cancer when the doctor did not.

To train the neural network, Vara fed the AI data from over 367,000 mammograms—including radiologists’ notes, original assessments, and information on whether the patient ultimately had cancer—to learn how to place these scans into one of three buckets: “confident normal,” “not confident” (in which no prediction is given), and “confident cancer.” The conclusions from both approaches were then compared with the decisions real radiologists originally made on 82,851 mammograms sourced from screening centers that didn’t contribute scans used to train the AI.
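
The three-bucket idea can be illustrated as a simple thresholding rule over a predicted probability of cancer; the cutoff values below are assumptions chosen for demonstration, not Vara's published operating points.

```python
def triage(cancer_probability, low=0.05, high=0.95):
    # Illustrative thresholds only -- not Vara's actual values.
    if cancer_probability <= low:
        return "confident normal"   # set aside; report can be pre-filled
    if cancer_probability >= high:
        return "confident cancer"   # flagged for the radiologist
    return "not confident"          # no prediction; the radiologist decides

for p in (0.01, 0.40, 0.99):
    print(p, "->", triage(p))
```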

The second approach—doctor and AI working together—was 3.6% better at detecting breast cancer than a doctor working alone, and raised fewer false alarms. It accomplished this while automatically setting aside scans it classified as confidently normal, which amounted to 63% of all mammograms. This intense streamlining could slash radiologists’ workloads.

After breast cancer screenings, patients with a normal scan are sent on their way, while an abnormal or unclear scan triggers follow-up testing. But radiologists examining mammograms miss 1 in 8 cancers. Fatigue, overwork, and even the time of day all affect how well radiologists can identify tumors as they view thousands of scans. Signs that are visually subtle are also generally less likely to set off alarms, and dense breast tissue—found mostly in younger patients—makes signs of cancer harder to see.

Radiologists using the AI in the real world are required by German law to look at every mammogram, at least glancing at those the AI calls fine. The AI still lends them a hand by pre-filling reports on scans labeled normal, though the radiologist can always reject the AI’s call.

Thilo Töllner, a radiologist who heads a German breast cancer screening center, has used the program for two years. He’s sometimes disagreed when the AI classified scans as confident normal and manually filled out reports to reflect a different conclusion, but he says “normals are almost always normal.” Mostly, “you just have to press enter.”

Mammograms the AI has labeled as ambiguous or “confident cancer” are referred to a radiologist—but only after the doctor has offered an initial, independent assessment.

Radiologists classify mammograms on a 0 to 6 scale known as BI-RADS, where lower is better. A score of 3 indicates that something is probably benign, but worth checking up on. If Vara has assigned a BI-RADS score of 3 or higher to a mammogram the radiologist labels normal, a warning appears.
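
In code, the warning rule described here amounts to a simple disagreement check; this is a sketch of the logic as reported, not Vara's implementation.

```python
def should_warn(ai_birads, radiologist_says_normal):
    # Warn when the AI scores the scan BI-RADS 3 or higher
    # but the radiologist has labeled it normal.
    return radiologist_says_normal and ai_birads >= 3

print(should_warn(ai_birads=4, radiologist_says_normal=True))   # True: warning shown
print(should_warn(ai_birads=2, radiologist_says_normal=True))   # False: no warning
```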

AI generally excels at image classification. So why did Vara’s AI on its own underperform a lone doctor? Part of the problem is that a mammogram alone can’t determine whether someone has cancer—that requires removing and testing the abnormal-looking tissue. Instead, the AI examines mammograms for hints.

Christian Leibig, lead author on the study and director of machine learning at Vara, says that mammograms of healthy and cancerous breasts can look very similar, and both types of scans can present a wide range of visual results. This complicates AI training. So does the low prevalence of cancer in breast screenings (according to Leibig, “in Germany, it’s roughly six in 1,000”). Because AIs trained to catch cancer are mostly trained on healthy breast scans, they can be prone to false positives.
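
A back-of-the-envelope calculation shows why low prevalence makes false alarms hard to avoid. The roughly 6-in-1,000 prevalence figure comes from the quote above; the sensitivity and specificity values are assumptions chosen only for illustration.

```python
screens = 100_000
prevalence = 6 / 1000      # from the article: roughly 6 cancers per 1,000 screens
sensitivity = 0.99         # assumed: fraction of real cancers flagged
specificity = 0.99         # assumed: fraction of healthy scans correctly cleared

cancers = screens * prevalence                              # 600
true_positives = cancers * sensitivity                      # ~594
false_positives = (screens - cancers) * (1 - specificity)   # ~994

# Even at 99% specificity, flagged healthy scans outnumber detected cancers.
print(round(true_positives), round(false_positives))
```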

The study tested the AI only on past mammogram decisions and assumed that radiologists would agree with the AI each time it issued a decision of “confident normal” or “confident cancer.” When the AI was unsure, the study defaulted to the original radiologist’s reading. That means it couldn’t test how using AI affects radiologists’ decisions—and whether any such changes may create new risks. Töllner admits he spends less time scrutinizing scans Vara labels normal than those it deems suspicious. “You get quicker with the normals because you get confident with the system,” he says.

Click here to read the full article on MIT Technology Review.

Upcoming Events

  1. City Career Fair
    January 19, 2022 - November 4, 2022
  2. The Small Business Expo–Multiple Event Dates
    February 17, 2022 - December 1, 2022
  3. 44th Annual BDPA National Conference
    August 18, 2022 - August 20, 2022
  4. Diversity Alliance for Science (DA4S) West Coast Conference
    August 30, 2022 - September 1, 2022
  5. Diversity Alliance for Science (DA4S) Matchmaking Events
    September 1, 2022
  6. Commercial UAV Expo Americas
    September 6, 2022 - September 8, 2022
