From A Simple Swab To A Simple Sniff—How dogs are being trained to detect COVID-19

Especially for the elderly and those with compromised immune systems, coronavirus testing kits are critical to treating the virus at an early stage. The current test is excellent at its job but is limited in availability and inaccessible to many people around the world—not to mention it’s uncomfortable. But what if there were an easier way?

Many scientific studies have proposed that each disease has its own distinct scent. Dogs have been used in the past to detect different types of cancer, Parkinson’s disease, and bacterial ailments. That is why the organization Medical Detection Dogs, located in the United Kingdom, has begun trials with medical professionals to see if dogs can sniff out the coronavirus’ scent.

The hope is that dogs will pick up on COVID-19’s scent among large crowds and detect those carrying the virus. This procedure would not only be more comfortable than current testing but could also cover more ground and be less invasive.

Should these dogs be successful, Professor Steve Lindsay of Durham University believes that, “…we could use COVID-19 detection dogs at airports at the end of the epidemic to rapidly identify people carrying the virus. This would help prevent the re-emergence of the disease after we have brought the present epidemic under control.”

Natalie Rodgers
Diversity in STEAM Magazine contributing writer

Scientists Partially Restored a Blind Man’s Sight With New Gene Therapy

By Carl Zimmer, Yahoo! News

A team of scientists announced Monday that they had partially restored the sight of a blind man by building light-catching proteins in one of his eyes. Their report, which appeared in the journal Nature Medicine, is the first published study to describe the successful use of this treatment. “Seeing for the first time that it did work — even if only in one patient and in one eye — is exciting,” said Ehud Isacoff, a neuroscientist at the University of California, Berkeley, who was not involved in the study.

The procedure is a far cry from full vision. The volunteer, a 58-year-old man who lives in France, had to wear special goggles that gave him the ghostly perception of objects in a narrow field of view. But the authors of the report say that the trial — the result of 13 years of work — is a proof of concept for more effective treatments to come.

“It’s obviously not the end of the road, but it’s a major milestone,” said José-Alain Sahel, an ophthalmologist who splits his time between the University of Pittsburgh and the Sorbonne in Paris.

Sahel and other scientists have tried for decades to find a cure for inherited forms of blindness. These genetic disorders rob the eyes of essential proteins required for vision.

When light enters the eye, it is captured by photoreceptor cells. The photoreceptors then send an electrical signal to their neighbors, called ganglion cells, which can identify important features like motion. They then send signals of their own to the optic nerve, which delivers the information to the brain.

In previous studies, researchers have been able to treat a genetic form of blindness called Leber congenital amaurosis by fixing a faulty gene that would otherwise cause photoreceptors to gradually degenerate.

But other forms of blindness cannot be treated so simply, because their victims lose their photoreceptors completely.

“Once the cells are dead, you cannot repair the gene defect,” Sahel said.

For these diseases, Sahel and other researchers have been experimenting with a more radical kind of repair. They are using gene therapy to turn ganglion cells into new photoreceptor cells, even though they don’t normally capture light.

The scientists are taking advantage of proteins derived from algae and other microbes that can make any nerve cell sensitive to light.

In the early 2000s, neuroscientists figured out how to install some of these proteins into the brain cells of mice and other lab animals by injecting viruses carrying their genes. The viruses infected certain types of brain cells, which then used the new gene to build light-sensitive channels.

Originally, researchers developed this technique, called optogenetics, as a way to probe the workings of the brain. By inserting a tiny light into the animal’s brain, they could switch a certain type of brain cell on or off with the flick of a switch. The method has enabled them to discover the circuitry underlying many kinds of behavior.

Click here to read the full article on Yahoo! News.

This is how the human heart adapts to space

By Ashley Strickland

When astronaut Scott Kelly spent nearly a year in space, his heart shrank despite the fact that he worked out six days a week over his 340-day stay, according to a new study.

Surprisingly, researchers observed the same change in Benoît Lecomte after he completed his 159-day swim across the Pacific Ocean in 2018.

The findings suggest that long-term weightlessness alters the structure of the heart, causing shrinkage and atrophy, and that low-intensity exercise is not enough to keep that from happening. The study was published Monday in the American Heart Association’s journal Circulation.
The gravity we experience on Earth is what helps the heart maintain both its size and function as it keeps blood pumping through our veins. Even something as simple as standing up and walking around helps pull blood down into our legs.

When the element of gravity is replaced with weightlessness, the heart shrinks in response.

Kelly lived in the absence of gravity aboard the International Space Station from March 27, 2015, to March 1, 2016. He worked out on a stationary bike and treadmill and incorporated resistance activities into his routine six days a week for two hours each day.

Lecomte swam from June 5 to November 11, 2018, covering 1,753 miles and averaging about six hours a day swimming. That sustained activity may sound extreme, but each day of swimming was considered to be low-intensity activity.

Even though Lecomte was on Earth, he was spending hours a day in the water, which offsets the effects of gravity. Long-distance swimmers use the prone technique, a horizontal facedown position, for these endurance swims.

Researchers expected that the activities performed by both men would keep their hearts from experiencing any shrinkage or weakening. Data collected from tests of their hearts before, during and after these extreme events showed otherwise.

Kelly and Lecomte both experienced a loss of mass and an initial drop in diameter in the left ventricles of their hearts during their experiences.

Both long-duration spaceflight and prolonged water immersion led to a very specific adaptation of the heart, said senior study author Dr. Benjamin Levine, a professor of internal medicine/cardiology at the University of Texas Southwestern Medical Center.

The authors point out that they studied only two men, both of whom performed extraordinary feats, and further study is needed to understand how the human body reacts in extreme situations.

Read the full article at CNN.

Stressed out? Blame bad technology

By Reuters

There is no question that we are all more dependent on technology than ever. So what happens when that tech does not work?

In the past, Emily Dreyfuss used an old-school strategy: She yelled.

When Amazon’s Alexa spat out wrong answers or misunderstood questions, Dreyfuss let the virtual assistant have it.

“I used her as a scapegoat for my feelings,” said Dreyfuss, a writer and editor for Harvard’s Shorenstein Center. “When you have a non-sentient and annoying device in your home, who isn’t doing what you want, I talked to her in not the nicest terms. And my husband ganged up on her, too.”

Tech frustrations like this have happened to all of us. Your wifi is always dropping out. Your passwords do not work. Your laptop crashes, and you lose everything you were working on. Just reading about those possibilities could be enough to raise your blood pressure.

Technology can damage our state of mind, and new research is bearing that out: Computer giant Dell Technologies, in partnership with neuroscience firm EMOTIV, put people through a gauntlet of bad tech experiences, and then measured their brainwaves to gauge their reactions.

Test subjects had trouble logging on, or had to navigate sluggish applications, or saw their spreadsheets crash.

“The moment people started using bad technology, we saw a doubling of their levels of stress,” said Olivier Oullier, EMOTIV’s president. “I was a bit surprised by that, because you rarely see those levels going so high.” Tech stress had a lasting effect, Oullier added. “People don’t relax back into calmness quickly. It takes a long time.”

Company bottom lines have suffered along with the mental health of employees. Constant frustration with bad tech affects how staffers handle their daily workloads, especially younger workers. Gen Z and Millennial test subjects saw a whopping 30% productivity drop as a result.

“Bad experiences affect you regardless of computer literacy,” said Cile Montgomery, who leads customer experience initiatives for Dell. “But young people seem to be even more impacted, because they expect technology to work.”

Read the full article at Reuters.

Six Apps that Help you Stick to a Budget

In these days of furloughs, layoffs, and shortened hours, when many people are struggling to pay their rent, figuring out how to manage their bills, and looking askance at their college loans, it can help to have a solid financial app to assess your situation, create a budget, figure out exactly what you can (or can’t) afford, work with those pesky and confusing figures, find a better way to save, or just keep from panicking.

We asked four staff members from The Verge to talk about what they used to keep financially sane, and here’s what they recommended.

Buxfer: All Around Accounting

First of all, I need to admit that I monitor my finances in an absurdly old-fashioned way. I don’t use an app that downloads all of my accounts and tracks everything for me (although I have played around with Mint a bit). Instead, I enter all of my expenses and income manually into my accounting software and then check off which expenses have cleared at the end of the month. That way, I can “pay” many of my bills ahead of time by entering them before the payments are actually made and end up with a much clearer picture of how much cash I’ll have available afterward.

For years, I used native accounting software that just sat on my personal computer, like Microsoft Money. In fact, I held on to Money for several years after Microsoft sunsetted it but still kept it available as a download. (Thank you for that, Microsoft!) However, I found out how much of a mistake that was the day my computer decided to (figuratively) crash and burn. I had a backup, so I wasn’t in much trouble — except I decided I didn’t want to be dependent on a backup. I wanted to be able to access my data from the cloud, so I could access it from a computer or from my phone. However, I still enter it manually.

It took a while, but I found Buxfer. This personal accounting software is simple to learn, easy to use, and flexible enough so that, while it will happily download all of your data for you, it will also let you manually enter your expenses and income (something most other current accounting apps do not). Buxfer does pretty much everything better-known accounting apps do: it downloads your accounts (if you want it to), tracks your budget, lets you know how you’re doing using charts and tables, follows your investments, and lets you set goals for, say, saving up for a home or paying down a credit card. It even lets you split bills with a spouse or a roommate, so you can track who is paying for what.

If you only need something to manually add expenses and income to, Buxfer is free to use. If you want more sophisticated features — like, for example, automatic syncing with your bank and credit card accounts or automatic tagging of your accounts (so you can easily find “utilities” or “mortgage”) — then a “Pilot” account costs $2 / month. The cost increases up to $10 a month, depending on how many features you need.

I really like Buxfer. It has a clean, understandable interface; lets you choose which of its features you want to use (and lets you ignore the others); and doesn’t bother you with intrusive ads — even on the free version. And although I was using the free version, when I had a question, I got a prompt reply to my email. Buxfer may not be as well-known as Quicken or YNAB, but it’s certainly worth checking out. —Barbara Krasnoff, reviews editor

Credit Karma: Keep Your Data Secure

A couple of years ago, I needed to find a company (cheap or free) that I could use for identity monitoring, and someone at work recommended Credit Karma. I soon found out that, in signing up for Credit Karma, I was signing up for much more than just identity monitoring.

Credit Karma watches all of your accounts for possible data breaches, monitors your credit standing and notifies you when it changes and why, and helps you to do things like lock your credit so that it’s harder for somebody to open an account in your name, among other services. It also offers links to information about buying a home, buying or leasing a car, paying down an overdrawn credit card, and other financial services. There is an entire section on financial relief, which could be very useful for those impacted by the current situation.

The site makes a variety of suggestions for low-interest credit cards, loans, and other financial instruments. Of course, these suggestions aren’t given strictly out of the kindness of its heart — you know that Credit Karma is getting compensated if you take it up on any of its offers — but I’ve checked out a few of its deals, and most of them aren’t bad. For example, one of its savings accounts offered considerably more interest than my local bank, without charging anything extra. (Unfortunately, when interest rates began to dive, the usefulness of that particular account dove with it.)

Unlike most of the other apps mentioned here, Credit Karma will not help you pay your bills or track your bank account. But it does offer some really useful information and services, and while I don’t check it more than once a month or so, I find it helps me make sure my finances are safe and on the right track. —Barbara Krasnoff

Continue to The Verge to read the full article

Can Virtual Reality Help Autistic Children Navigate the Real World?

By Gautham Nagesh, New York Times

This article is part of Upstart, a series on young companies harnessing new science and technology.

Vijay Ravindran has always been fascinated with technology. At Amazon, he oversaw the team that built and started Amazon Prime. Later, he joined the Washington Post as chief digital officer, where he advised Donald E. Graham on the sale of the newspaper to his former boss, Jeff Bezos, in 2013.

By late 2015, Mr. Ravindran was winding down his time at the renamed Graham Holdings Company. But his primary focus was his son, who was then 6 years old and undergoing therapy for autism.

“Then an amazing thing happened,” Mr. Ravindran said.

Mr. Ravindran was noodling around with a virtual reality headset when his son asked to try it out. After spending 30 minutes using the headset in Google Street View, the child went to his playroom and started acting out what he had done in virtual reality.

“It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said. “It ended up being a light bulb moment.”

Like many autistic children, Mr. Ravindran’s son struggled with pretend play and other social skills. His son’s ability to translate his virtual reality experience to the real world sparked an idea. A year later, Mr. Ravindran started a company called Floreo, which is developing virtual reality lessons designed to help behavioral therapists, speech therapists, special educators and parents who work with autistic children.

The idea of using virtual reality to help autistic people has been around for some time, but Mr. Ravindran said the widespread availability of commercial virtual reality headsets since 2015 had enabled research and commercial deployment at much larger scale. Floreo has developed almost 200 virtual reality lessons that are designed to help children build social skills and train for real world experiences like crossing the street or choosing where to sit in the school cafeteria.

Last year, as the pandemic exploded demand for telehealth and remote learning services, the company delivered 17,000 lessons to customers in the United States. Experts in autism believe the company’s flexible platform could go global in the near future.

That’s because the demand for behavioral and speech therapy as well as other forms of intervention to address autism is so vast. Getting a diagnosis for autism can take months — crucial time in a child’s development when therapeutic intervention can be vital. And such therapy can be costly and require enormous investments of time and resources by parents.

The Floreo system requires an iPhone (version 7 or later) and a V.R. headset (a low-end model costs as little as $15 to $30), as well as an iPad, which can be used by a parent, teacher or coach in-person or remotely. The cost of the program is roughly $50 per month. (Floreo is currently working to enable insurance reimbursement, and has received Medicaid approval in four states.)

A child dons the headset and navigates the virtual reality lesson, while the coach — who can be a parent, teacher, therapist, counselor or personal aide — monitors and interacts with the child through the iPad.

The lessons cover a wide range of situations, such as visiting the aquarium or going to the grocery store. Many of the lessons involve teaching autistic children, who may struggle to interpret nonverbal cues, to interpret body language.

Autistic self-advocates note that behavioral therapy to treat autism is controversial among those with autism, arguing that it is not a disease to be cured and that therapy is often imposed on autistic children by their non-autistic parents or guardians. Behavioral therapy, they say, can harm or punish children for behaviors such as fidgeting. They argue that rather than conditioning autistic people to act like neurotypical individuals, society should be more welcoming of them and their different manner of experiencing the world.

“A lot of the mismatch between autistic people and society is not the fault of autistic people, but the fault of society,” said Zoe Gross, the director of advocacy at the Autistic Self Advocacy Network. “People should be taught to interact with people who have different kinds of disabilities.”

Mr. Ravindran said Floreo respected all voices in the autistic community, where needs are diverse. He noted that while Floreo was used by many behavioral health providers, it had been deployed in a variety of contexts, including at schools and in the home.

“The Floreo system is designed to be positive and fun, while creating positive reinforcement to help build skills that help acclimate to the real world,” Mr. Ravindran said.

In 2017, Floreo secured a $2 million fast track grant from the National Institutes of Health. The company is first testing whether autistic children will tolerate headsets, then conducting a randomized control trial to test the method’s usefulness in helping autistic people interact with the police.

Early results have been promising: According to a study published in the Autism Research journal (Mr. Ravindran was one of the authors), 98 percent of the children completed their lessons, quelling concerns about autistic children with sensory sensitivities being resistant to the headsets.

Ms. Gross said she saw potential in virtual reality lessons that helped people rehearse unfamiliar situations, such as Floreo’s lesson on crossing the street. “There are parts of Floreo to get really excited about: the airport walk through, or trick or treating — a social story for something that doesn’t happen as frequently in someone’s life,” she said, adding that she would like to see a lesson for medical procedures.

However, she questioned a general emphasis by the behavioral therapy industry on using emerging technologies to teach autistic people social skills.

A second randomized control trial using telehealth, conducted by Floreo using another N.I.H. grant, is underway, in hopes of showing that Floreo’s approach is as effective as in-person coaching.

But it was those early successes that convinced Mr. Ravindran to commit fully to the project.

“There were just a lot of really excited people,” he said. “When I started showing families what we had developed, people would just give me a big hug. They would start crying that there was someone working on such a high-tech solution for their kids.”

Clinicians who have used the Floreo system say the virtual reality environment makes it easier for children to focus on the skill being taught in the lessons, unlike in the real world where they might be overwhelmed by sensory stimuli.

Celebrate the Children, a nonprofit private school in Denville, N.J., for children with autism and related challenges, hosted one of the early pilots for Floreo; Monica Osgood, the school’s co-founder and executive director, said the school had continued to use the system.

Click here to read the full article on New York Times.

Women and Drones Documentary Filming Onsite at Commercial UAV Expo

Commercial UAV Expo has been announced as an official filming location for a multi-part documentary produced through a partnership with Women and Drones and documentary film company Monumental Access. The partnership will focus on inspiring the next generation of talented aviation leaders by capturing the stories and footage of women in the drone industry.

In partnership with Women and Drones, Monumental Access has been creating a multi-part documentary offering a behind-the-scenes look at the professionals, especially women, in the uncrewed aviation space. The documentary will give a bird’s-eye view of the significance of the drone industry by capturing in-depth interviews with educators, CEOs, and professionals, allowing their stories to be told from the first-person perspective. Viewers will have an all-access look into the lives of the women who are shaping the industry.

“Women and Drones has been an important supporting partner of Commercial UAV Expo for years. We are thrilled that we can help elevate their mission and provide a documentary filming location to access some of the most influential leaders in the commercial drone industry by bringing the filming location to Commercial UAV Expo in Las Vegas,” said Lora Burns, Marketing Manager and Coordinator of the Diversity, Equity and Inclusion UAV Empower initiatives at Commercial UAV Expo.

“The partnership with Monumental and Commercial UAV Expo will allow us to capture stories of the individuals who are contributing to the future of STEM and aviation. From the nonprofits and educational organizations introducing youth to aviation and STEM via drones to the innovators leading the way in the various emerging aviation technologies, we plan to shed a bright light on the industry,” said Sharon Rossmark, CEO of Women and Drones.

“Monumental Access is excited to highlight the excellence achieved by women in the field of emerging aviation technologies. By capturing their stories through the lens of a camera, everyone will have an opportunity to have a front-row seat alongside these amazing women,” said Monte Chambers, founder and CEO of Monumental Access.

Filming started in May with the Disaster Response Workshop hosted by Dr. Robin Murphy at Texas A&M and the Center for Robot-Assisted Search and Rescue. The project captured the experiences of the participants and facilitators and shared a powerful message about the importance of this type of training for women. “Ultimately, my desired outcome for filming in the Disaster Response Workshop will be to create engaging content for viewers unfamiliar with the drone sector of the aviation industry. By raising awareness to the public, these modern-day hidden figures will be in the spotlight,” Chambers added.

The next round of filming will take place at Commercial UAV Expo in Las Vegas, Sept 6-8, 2022. In addition to the onsite filming, Commercial UAV Expo offers a robust conference program delivering practical, actionable education. Sessions include a panel on Women Behind the Drone Revolution, hosted by DroneTalks, featuring inspirational women from around the world as they share career path stories and deliver actionable insight based on their successes, key challenges, important learnings, and their current activities in the industry. Additional programming includes deep-dive vertical industry sessions for professionals in construction, drone delivery, energy & utilities, forestry & agriculture, infrastructure & transportation, mining & aggregates, security, and surveying & mapping. Industry Update Sessions provide up-to-the-minute information on topics that affect everyone in UAS, such as AAM, BVLOS, and autonomy.

Event features include an exhibit hall with 200+ top UAS companies from around the globe. Additional special events include Live Outdoor Flying Demonstrations, the DRONERESPONDERS Public Safety Summit, and Workshops and Trainings, all of which allow for hands-on learning and industry connections. The 2022 event boasts more than 300 media and association supporters from six continents, including the longstanding supporting partnership with Women and Drones. Visit www.expouav.com for more information or to register.

Women and Drones Email Contact:  media@womenanddrones.com
Commercial UAV Email Contact: lburns@divcom.com

About Commercial UAV Expo 

Commercial UAV Expo, presented by Commercial UAV News, is an international conference and expo exclusively focused on commercial UAS integration and operation covering industries including Construction; Drone Delivery; Energy & Utilities; Forestry & Agriculture; Infrastructure & Transportation; Mining & Aggregates; Public Safety & Emergency Services; Security; and Surveying & Mapping. It takes place September 6 – 8, 2022 at Caesars Forum, Las Vegas NV. For more information, visit www.expouav.com.

Commercial UAV Expo is produced by Diversified Communications’ technology portfolio which also includes Commercial UAV News; Geo Week, Geo Week Newsletter, 3D Technology Newsletter, AEC Innovations Newsletter, Geo Business (UK) and Digital Construction Week (UK).

About Women and Drones:

Women And Drones is the leading membership organization dedicated to driving excellence in the uncrewed aircraft systems (UAS) and Advanced Air Mobility (AAM) industry by advocating for female participation in this dynamic segment of the global economy. We partner with companies committed to an inclusive culture where women can thrive. Our educational programs range from kindergarten to career in efforts to balance the gender equation in the industry now, as well as for the future of flight.

About Monumental Access

Monumental Access focuses on producing quality media by creating content, capturing the heartfelt story, and connecting with community stakeholders. Amid the nationwide demand for videographers, Monumental Access has developed a unique market serving governmental, non-profit, and corporate clients. What started off as a dream during the 2020 global pandemic has become a reality: detailing the important moments of our clients through the lens of a camera. Combined with unique storytelling and professionalism, Monumental Access connects the hearts and attention of many across the country with its interviews, commercials, and documentaries! As a result, Monumental Access is one of the most creative media companies in the Saint Louis, MO area.

 

The latest video game controller isn’t plastic. It’s your face.
Dunn playing “Minecraft” using voice commands on the Enabled Play controller, face expression controls via a phone and virtual buttons on Xbox's adaptive controller. (Courtesy of Enabled Play Game Controller)

By Amanda Florian, The Washington Post

Over decades, input devices in the video game industry have evolved from simple joysticks to sophisticated controllers that emit haptic feedback. But with Enabled Play, a new piece of assistive tech created by self-taught developer Alex Dunn, users are embracing a different kind of input: facial expressions.

While companies like Microsoft have sought to expand accessibility through adaptive controllers and accessories, Dunn’s new device takes those efforts even further, translating users’ head movements, facial expressions, real-time speech and other nontraditional input methods into mouse clicks, key strokes and thumbstick movements. The device has users raising eyebrows — quite literally.

“Enabled Play is a device that learns to work with you — not a device you have to learn to work with,” Dunn, who lives in Boston, said via Zoom.

Dunn, 26, created Enabled Play so that everyone — including his younger brother with a disability — can interface with technology in a natural and intuitive way. At the beginning of the pandemic, the only thing he and his New Hampshire-based brother could do together, while approximately 70 miles apart, was game.

“And that’s when I started to see firsthand some of the challenges that he had and the limitations that games had for people with really any type of disability,” he added.

At 17, Dunn dropped out of Worcester Polytechnic Institute to become a full-time software engineer. He began researching and developing Enabled Play two and a half years ago, which initially proved challenging, as most speech-recognition programs lagged in response time.

“I built some prototypes with voice commands, and then I started talking to people who were deaf and had a range of disabilities, and I found that voice commands didn’t cut it,” Dunn said.

That’s when he started thinking outside the box.

Having already built Suave Keys, a voice-powered program for gamers with disabilities, Dunn created Snap Keys — an extension that turns a user’s Snapchat lens into a controller when playing games like Call of Duty, “Fall Guys,” and “Dark Souls.” In 2020, he won two awards for his work at Snap Inc.’s Snap Kit Developer Challenge, a competition among third-party app creators to innovate Snapchat’s developer tool kit.

With Enabled Play, Dunn takes accessibility to the next level. With a wider variety of inputs, users can connect the assistive device — equipped with a robust CPU and 8 GB of RAM — to a computer, game console or other device to play games in whatever way works best for them.

Dunn also spent time making sure Enabled Play was accessible to people who are deaf, as well as people who want to use nonverbal audio input, like “ooh” or “aah,” to perform an action. Enabled Play’s vowel sound detection model is based on “The Vocal Joystick,” which engineers and linguistics experts at the University of Washington developed in 2006.

“Essentially, it looks to predict the word you are going to say based on what is in the profile, rather than trying to assume it could be any word in the dictionary,” Dunn said. “This helps cut through machine learning bias by learning more about how the individual speaks and applies it to their desired commands.”
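
To make that concrete, here is a minimal sketch, in Python, of the general idea of profile-constrained matching: the recognizer’s raw output is scored only against the handful of commands in a user’s profile rather than an open dictionary. The profile contents and helper names are illustrative assumptions, not Enabled Play’s actual code or API.

    # Hypothetical sketch: match a recognizer's raw output against a small,
    # per-user command profile instead of an open dictionary.
    import difflib

    # Assumed profile: spoken command -> game input it should trigger.
    PROFILE_COMMANDS = {"jump": "space", "shoot": "mouse_left", "reload": "r"}

    def match_command(recognized: str):
        """Return the closest profile command for a recognized utterance, or None."""
        hits = difflib.get_close_matches(
            recognized.lower(), PROFILE_COMMANDS.keys(), n=1, cutoff=0.6)
        return hits[0] if hits else None

    print(match_command("jum"))     # -> "jump"
    print(match_command("relode"))  # -> "reload"
    print(match_command("banana"))  # -> None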

Dunn’s AI-enabled controller takes into account a person’s natural tendencies. If a gamer wants to set up a jump command every time they open their mouth, Enabled Play would identify that person’s individual resting mouth position and set that as the baseline.
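
As a rough illustration of that baseline idea, the Python sketch below calibrates a resting mouth position and fires a jump command only when the measurement rises noticeably above that personal baseline. It is a sketch under stated assumptions, not Enabled Play’s implementation: mouth_open_ratio and press_key are hypothetical stand-ins for a face-tracking input and an OS-level key injector, and the measurement is simulated so the example runs on its own.

    # Hypothetical sketch of per-user baseline calibration for an "open mouth = jump"
    # mapping. The measurement is simulated; a real system would read a face tracker.
    import random
    import statistics
    import time

    def mouth_open_ratio() -> float:
        """Assumed stand-in for a face tracker; simulated as a resting value
        with occasional brief "mouth open" spikes."""
        spike = 0.25 if random.random() < 0.02 else 0.0
        return 0.30 + random.uniform(-0.03, 0.03) + spike

    def press_key(key: str) -> None:
        """Assumed stand-in for injecting a key press into the game."""
        print(f"pressed {key}")

    def calibrate(samples: int = 75) -> float:
        """Sample the user's resting mouth position to establish a personal baseline."""
        return statistics.mean(mouth_open_ratio() for _ in range(samples))

    def run(jump_key: str = "space", margin: float = 0.15, ticks: int = 500) -> None:
        """Trigger the jump only when the mouth opens well beyond the baseline."""
        baseline = calibrate()
        armed = True  # prevents repeated triggers while the mouth stays open
        for _ in range(ticks):
            value = mouth_open_ratio()
            if armed and value > baseline + margin:
                press_key(jump_key)
                armed = False
            elif value < baseline + margin / 2:  # small hysteresis before re-arming
                armed = True
            time.sleep(0.04)  # roughly 25 Hz polling

    if __name__ == "__main__":
        run()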

In January, Enabled Play officially launched in six countries — its user base extending from the U.S. to the U.K., Ghana and Austria. For Dunn, one of his primary goals was to fill a gap in accessibility and pricing compared to other assistive gaming devices.

“There are things like the Xbox Adaptive Controller. There are things like the HORI Flex [for Nintendo Switch]. There are things like Tobii, which does eye-tracking and stuff like that. But it still seemed like it wasn’t enough,” he said.

Compared to some devices that are only compatible with one gaming system or computer at a time, Dunn’s AI-enabled controller — priced at $249.99 — supports a combination of inputs and outputs. Speech therapists say that compared to augmentative and alternative communication (AAC) devices, which are medically essential for some with disabilities, Dunn’s device offers simplicity.

“This is just the start,” said Julia Franklin, a speech language pathologist at Community School of Davidson in Davidson, N.C. Franklin introduced students to Enabled Play this summer and feels it’s a better alternative to other AAC devices on the market that are often “expensive, bulky and limited” in usability. Many sophisticated AAC systems can range from $6,000 to $11,500 for high-tech devices, with low-end eye-trackers running in the thousands. A person may also download AAC apps on their mobile devices, which range from $49.99 to $299.99 for the app alone.

“For many people who have physical and cognitive differences, they often exhaust themselves to learn a complex AAC system that has limits,” she said. “The Enabled Play device allows individuals to leverage their strengths and movements that are already present.”

Internet users have applauded Dunn for his work, noting that asking for accessibility should not equate to asking for an “easy mode” — a misconception often cited by critics of making games more accessible.

“This is how you make gaming accessible,” one Reddit user wrote about Enabled Play. “Not by dumbing it down, but by creating mechanical solutions that allow users to have the same experience and accomplish the same feats as [people without disabilities].” Another user who said they regularly worked with young patients with cerebral palsy speculated that Enabled Play “would quite literally change their lives.”

Click here to read the full article on The Washington Post.

Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation

By Unite

Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.

The new approach can not only distinguish between unaffected and affected subjects, but can also correctly distinguish depression from schizophrenia, as well as the degree to which the patient is currently affected by the disease.

The researchers have provided two composite images: one representing the control group for their tests and one representing the patients who are suffering from mental disorders. The identities of multiple people are blended in the representations, and neither image depicts a particular individual.

Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.

Until now, facial affect recognition has been primarily used as a potential tool for basic diagnosis. The new approach, instead, offers a possible method to evaluate patient progress throughout treatment, or else (potentially, though the paper does not suggest it) in their own domestic environment for outpatient monitoring.

The paper states:

‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’

The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.
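
For readers unfamiliar with this kind of pipeline, the short Python sketch below illustrates the generic pattern of turning per-subject facial measurements into a three-way classification (control, depression, schizophrenia). It is a generic illustration only, not the authors’ OEG method: the feature matrix is random noise standing in for real facial measurements, and the class labels are placeholders.

    # Generic illustration only: classify subjects from aggregated facial features.
    # The feature matrix and labels are random placeholders, not study data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # One row per subject; columns could be summary statistics of facial
    # measurements over an interview (e.g., mean brow raise, gaze variability).
    X = rng.normal(size=(150, 16))
    # Labels: 0 = control, 1 = depression, 2 = schizophrenia (placeholders).
    y = rng.integers(0, 3, size=150)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy:", round(scores.mean(), 3))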

The authors conclude that OEG could potentially be not just a mere secondary aide to diagnosis and treatment, but, in the long term, a potential replacement for certain evaluative parts of the treatment pipeline, and one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:

‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’

However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.

The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.

Data

(The new paper deals mostly with the various theories and methods that are currently popular in patient diagnosis of mental disorders, with less attention than is usual to the actual technologies and processes used in the tests and various experiments)

Data-gathering took place at University Hospital Aachen, with 100 gender-balanced patients and a control group of 50 non-affected people. The patients included 35 sufferers from schizophrenia and 65 people suffering from depression.

For the patient portion of the test group, initial measurements were taken at the time of first hospitalization, and the second prior to their discharge from hospital, spanning an average interval of 12 weeks. The control group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.

In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.

However, the data-gathering stage obtained additional data more suited for machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech c270 consumer webcam running at 25fps.

The first session consisted of a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions, and asked to mimic each of these, while stating their own estimation of their mental condition at that time, including emotional state and intensity. This phase lasted around ten minutes.

In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.

Click here to read the full article on Unite.

Your favourite Instagram face might not be a human. How AI is taking over influencer roles

By Mint

South Korean influencer Rozy has over 130,000 followers on Instagram. She posts photos of globetrotting adventures, she sings, dances and models. The interesting thing is that, unlike most popular faces on the platform, Rozy is not a real human. However, this digitally rendered being looks so real that it’s often mistaken for flesh and blood.

How was Rozy designed?
The Seoul-based company that created Rozy describes her as a blended personality – part human, part AI, and part robot. She is “able to do everything that humans cannot … in the most human-like form,” Sidus Studio X says on its website.

Sidus Studio X explains that sometimes it creates an image of Rozy from head to toe, while other times it superimposes her head onto the body of a human model.

Rozy was launched in 2020, and since then she has landed several brand deals and sponsorships, participated in several virtual fashion shows, and released two singles.

And a CNN report claims that Rozy is not alone; there are several others like her. Facebook and Instagram together have more than 200 virtual influencers on their platforms.

The CGI (computer-generated imagery) technology behind Rozy isn’t new. It is ubiquitous in today’s entertainment industry, where artists use it to craft realistic nonhuman characters in movies, computer games and music videos. But it has only recently been used to make influencers, the report reads.

South Korean retail brand Lotte Home Shopping created its virtual influencer — Lucy, who now has 78,000 Instagram followers.

Lee Bo-hyun, a Lotte representative, said that Lucy’s image is more than just a pretty face. She studied industrial design and works in car design. She posts about her job and interests, such as her love for animals and kimbap — rice rolls wrapped in seaweed.

There is a risk attached
However, there is always a risk attached to it. Facebook and Instagram’s parent company Meta has acknowledged the risks.

In a blog post, the company said: “Like any disruptive technology, synthetic media has the potential for both good and harm. Issues of representation, cultural appropriation and expressive liberty are already a growing concern.”

“To help brands navigate the ethical quandaries of this emerging medium and avoid potential hazards, (Meta) is working with partners to develop an ethical framework to guide the use of (virtual influencers).”

However, even though the older generation is quite skeptical, the younger lot is comfortable communicating with virtual influencers.

Lee Na-kyoung, a 23-year-old living in Incheon, began following Rozy about two years ago thinking she was a real person. Rozy followed her back, sometimes commenting on her posts, and a virtual friendship blossomed — one that has endured even after Lee found out the truth, the CNN report said.

“We communicated like friends and I felt comfortable with her — so I don’t think of her as an AI but a real friend,” Lee said.

Click here to read the full article on Mint.

GM just secured enough cathode material for 5 million electric vehicles

By Andrew J. Hawkins, The Verge

General Motors needs a lot of cathode active materials (CAM) if it’s to reach its goal of making enough electric vehicles to become a completely carbon neutral company by 2040. How much is enough? How about 950,000 tons of the stuff.

GM now says it’s reached a deal with LG Chem, one of South Korea’s premier battery-making firms, to lock down a supply of CAM starting later this year. CAM is basically what makes a battery a battery, consisting of components like processed nickel, lithium and other materials, and representing about 40 percent of the total cost of a battery cell.

The majority of EV battery cathodes are made with NCM — nickel, cobalt, and manganese. Cobalt is a key component in this mix, but it’s also the most expensive material in the battery and is mined under conditions that often violate human rights, leading it to be called the “blood diamond of batteries.” As a result, GM and other companies, like Tesla, are rushing to create a cobalt-free battery. GM’s Ultium batteries, for example, will add aluminum — making the mix NCMA — and reduce the cobalt content by 70 percent.

LG Chem will begin supplying CAM to the automaker starting in the latter half of 2022 and lasting until 2030. GM says this will be enough battery material to power approximately 5 million electric vehicles, which should help the company in its quest to catch up to Tesla.

GM has said it plans to spend $30 billion by 2025 on the creation of 30 new plug-in models in its bid to overtake Elon Musk’s company as the leading EV company in the world. Tesla still dominates the relatively small EV market in the US, with around 66 percent market share, while GM only has around 6 percent. This year, the company was even outsold by legacy auto rivals like Ford and Hyundai, according to CNBC.

In a furious bid to catch up and become more vertically integrated, GM is trying to get a stronger grasp on its supply chain, which includes battery manufacturing. The company has said it will spend over $4 billion on the construction of two battery factories in North America in partnership with South Korea’s LG Chem.

GM said today that it will also explore localizing a CAM production facility with LG Chem by the end of 2025. Previously, the company announced that it will construct a new cathode factory in North America in a joint venture with South Korea’s Posco Chemical.

Click here to read the full article on The Verge.

Upcoming Events

  1. City Career Fair
    January 19, 2022 - November 4, 2022
  2. The Small Business Expo–Multiple Event Dates
    February 17, 2022 - December 1, 2022
  3. National College Resources Foundation Upcoming Events–Mark Your Calendar!
    September 24, 2022 - April 1, 2023
  4. NBMBAA 44th Annual Conference and Exposition
    September 27, 2022 - October 1, 2022
  5. Anaheim & CA STEAM Symposium? Yes, Come Present In Person!
    October 1, 2022 - October 2, 2022
