NASA’s Retiring Top Scientist Says We Can Terraform Mars and Maybe Venus, Too

James Green sitting in front of an image of the sun and the NASA logo

Since joining NASA in 1980, Jim Green has seen it all. He has helped the space agency understand Earth’s magnetic field, explore the outer solar system and search for life on Mars. As the new year arrived on Saturday, he bade farewell to the agency.

Over the past four decades, including 12 years as the director of NASA’s planetary science division and the last three years as its chief scientist, he has shaped much of NASA’s scientific inquiry, overseeing missions across the solar system and contributing to more than 100 scientific papers on a range of topics. He specialized in Earth’s magnetic field and plasma waves early in his career, then went on to diversify his research portfolio.

One of Dr. Green’s most recent significant proposals is a scale for verifying the detection of alien life, called the “confidence of life detection,” or CoLD, scale. He has published work suggesting we could terraform Mars, that is, make it habitable for humans, by using a giant magnetic shield to stop the sun from stripping the red planet’s atmosphere, which would raise the temperature on the surface. He has also long been a proponent of the exploration of other worlds, including a mission to Europa, the icy moon of Jupiter, that is scheduled to launch in 2024.

Ahead of a December meeting of the American Geophysical Union in New Orleans, Dr. Green spoke about some of this wide-ranging work and the search for life in the solar system. Below are edited and condensed excerpts from our interview.

You’ve urged a methodical approach to looking for life with your CoLD scale, ranking possible detections from one to seven. Why do we need such a scale?

A couple of years ago, scientists came out and said they’d seen phosphine in the atmosphere of Venus. At the level they saw it, which was enormous, that led them to believe life was one of the major possibilities. On the CoLD scale, where seven is “we found life,” it is a “one.” It didn’t even make it to “two.” They recognized later there was contamination in their signal, that it may not even be phosphine, and that we can’t reproduce it. So we have to do a better job in communicating.

We see methane all over the place on Mars. Ninety-five percent of the methane we find here on Earth comes from life, but there’s a few percent that doesn’t. We’re only at a CoLD Level 3, but if a scientist came to me and said, “Here’s an instrument that will make it a CoLD Level 4,” I’d fund that mission in a minute. They’re not jumping to seven, they’re making that next big step, the right step, to make progress to actually finding life in the solar system. That’s what we’ve got to do, stop screwing around with just crying wolf.

The search for life on Mars has been a focus for NASA for so long, starting in 1976 with the Viking 1 and 2 landers and later with missions from the 1990s onward. Are you surprised we haven’t found life in that time?

Yes and no. What we’re doing now is much more methodical, much more intelligent in the way we recognize what signatures life can produce over time. Our solar system is 4.5 billion years old, and at this time, Earth is covered in life. But if we go back a billion years, we would find that Venus was a blue planet. It had a significant ocean. It might actually have had life, and a lot of it. If we go back another billion years, Mars was a blue planet. We know now Mars lost its magnetic field, the water started evaporating and Mars basically went stagnant about 3.5 billion years ago.

We would like to have found life on the surface. We put the Viking landers in a horrible place because we didn’t know where to put them — we were just trying to put them down on the surface of Mars. It was like putting something down in the Gobi Desert. We should have put them down in Jezero Crater, in this river delta we’re at right now with the Perseverance rover, but we didn’t even know it existed at the time!

One of the Viking experiments indicated there was microbial life in the soils, but only one of the three instruments did, so we couldn’t say we found life. Now we’ll really, definitively know because we’re going to bring back samples. We didn’t know it would need a sample return mission.

You’ve previously suggested it might be possible to terraform Mars by placing a giant magnetic shield between the planet and the sun, which would stop the sun from stripping its atmosphere, allowing the planet to trap more heat and warm its climate to make it habitable. Is that really doable?

Yeah, it’s doable. Stop the stripping, and the pressure is going to increase. Mars is going to start terraforming itself. That’s what we want: the planet to participate in this any way it can. When the pressure goes up, the temperature goes up.

The first level of terraforming is at 60 millibars, a factor of 10 from where we are now. That’s called the Armstrong limit, where your blood wouldn’t boil if you walked out on the surface. If you didn’t need a spacesuit, you could have much more flexibility and mobility. The higher temperature and pressure would enable you to begin the process of growing plants in the soils.
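
(For scale: Mars’s mean surface pressure today is roughly 6 millibars, so the factor of 10 works out as

    10 × 6 mbar = 60 mbar,

just under the Armstrong limit of about 63 millibars, the pressure below which exposed bodily fluids boil at normal body temperature.)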

There are several scenarios on how to do the magnetic shield. I’m trying to get a paper out I’ve been working on for about two years. It’s not going to be well received. The planetary community does not like the idea of terraforming anything. But you know. I think we can change Venus, too, with a physical shield that reflects light. We create a shield, and the whole temperature starts going down.

Click here to read the full article on the New York Times.

Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation

By Unite

Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.

The new approach can not only distinguish between unaffected and affected subjects, but can also correctly distinguish depression from schizophrenia, and gauge the degree to which the patient is currently affected by the disease.

The researchers have provided a composite image that represents the control group for their tests and another that represents the patients who are suffering from mental disorders. The identities of multiple people are blended in these representations, and neither image depicts a particular individual.

Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.

Until now, facial affect recognition has been used primarily as a potential tool for basic diagnosis. The new approach instead offers a possible method of evaluating patient progress throughout treatment, or even, potentially (though the paper does not suggest it), of monitoring outpatients in their own domestic environment.

The paper states:

‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’

The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.

The authors conclude that OEG could potentially be not merely a secondary aid to diagnosis and treatment but, in the long term, a replacement for certain evaluative parts of the treatment pipeline, one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:

‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’

However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.

The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.

Data

(The new paper deals mostly with the various theories and methods that are currently popular in the diagnosis of mental disorders, with less attention than usual paid to the actual technologies and processes used in the tests and experiments.)

Data-gathering took place at University Hospital Aachen, with 100 gender-balanced patients and a control group of 50 non-affected people. The patients comprised 35 people with schizophrenia and 65 people with depression.

For the patient portion of the test group, initial measurements were taken at the time of first hospitalization and a second set prior to discharge from the hospital, spanning an average interval of 12 weeks. The control-group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.

In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.

However, the data-gathering stage obtained additional data more suited to machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech C270 consumer webcam running at 25 fps.

The first session comprised a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions and asked to mimic each of them, while stating their own estimation of their mental condition at the time, including emotional state and intensity. This phase lasted around ten minutes.

In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.
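
The paper does not reproduce its analysis code, but the capture stage it describes is straightforward to picture. Below is a minimal, hypothetical Python sketch of that stage only: reading a consumer webcam at 25 fps and locating the face in each frame with OpenCV. The recording parameters come from the study; the detector choice and all names are assumptions, not the authors’ OEG pipeline.

    # Hypothetical sketch of the capture stage only: a consumer webcam read
    # at 25 fps (as in the study) with per-frame face detection. A real
    # affect model would featurize the detected face crop; here we just
    # mark it, to illustrate the kind of input such a system works from.
    import cv2

    FPS = 25  # the study recorded interviews at 25 fps

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    cap = cv2.VideoCapture(0)       # default webcam (e.g. a Logitech C270)
    cap.set(cv2.CAP_PROP_FPS, FPS)  # request 25 fps; the driver may override

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break

    cap.release()
    cv2.destroyAllWindows()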

Click here to read the full article on Unite.

Meet Afro-Latina Scientist Dr. Jessica Esquivel
Dr. Jessica Esquivel

By Erica Nahmad, Be Latina

It’s undeniable that representation matters, and the idea of what a scientist could or should look like is changing, thanks largely to pioneers like Afro-Latina scientist Dr. Jessica Esquivel, who is breaking barriers for women in STEM one step at a time.

Dr. Esquivel isn’t just extraordinary because of what she is capable of as an Afro-Latina astrophysicist — she’s also extraordinary in her vulnerability and relatability. She’s on a mission to break barriers in science and to show the humanity behind scientists.

Dr. Esquivel makes science accessible to everyone, no matter what you look like or where you come from. As one of the only Afro-Latina scientists in her field, and one of the only women who looked like her to pursue a Ph.D. in physics, Dr. Esquivel knows a thing or two about the importance of representation, especially in STEM fields and science labs.

Women make up only 28% of the science, technology, engineering, and math workforce in the U.S. Those disparities are even more severe when you start to look at minority populations.

“When you start looking at the intersections of race and gender and then even sexuality, those numbers drop significantly,” Esquivel told CBS Chicago. “There are only about 100 to 150 black women with their Ph.D. in physics in the country!”

Fighting against the isolation of uniqueness
Dr. Jessica Esquivel recalls being a nontraditional student and being “the only” when she entered graduate school for physics: the only woman in her class, the only Black person, the only Mexican person, the only lesbian. All of that made her feel very isolated.

“On top of such rigorous material, the isolation and otherness that happens due to being the only or one of few is an added burden marginalized people, especially those with multiple marginalized identities, have to deal with,” Dr. Esquivel told BeLatina in an email interview. On top of feeling like an outsider, isolation was also consuming. “Being away from family at a predominately white institution, where the number of microaggressions was constant, really affected my mental health and, in turn, my coursework and research, so it was important to surround myself with mentors who supported me and believed in my ability to be a scientist.”

While she anticipated that the physics curriculum would be incredibly challenging, she was definitely not prepared for how hard the rest of the experience would be and how it would impact her as a student and a scientist.

The challenges she faced professionally and personally made her realize early on just how crucial representation is in academia and all fields, but especially in STEM. “It was really impactful for me to learn that there were other Black women who had made it out of the grad school metaphorical trenches. It’s absolutely important to create inclusive spaces where marginalized people, including Black, Latina, and genderqueer people, can thrive,” she said.

“The secrets of our universe don’t discriminate, these secrets can and should be unraveled by all those who wish to embark on that journey, and my aim is to clear as many barriers and leave these physics spaces better than I entered them.”

When inclusion and equal opportunities are the ultimate goal
Dr. Jessica Esquivel isn’t just dedicating her time and energy to studying complex scientific concepts — think quantum entanglement, space-time fabric, the building blocks of the universe… some seriously abstract physics concepts straight out of a sci-fi movie, as she explains. On top of her research, she puts in extra work to show people, especially younger generations of women of color, that physics and STEM are not some old white man’s club where prestigious knowledge is reserved for a select few. Dr. Esquivel is an expert in her field; she knows things that no one else currently knows, and she has the ability to pass that knowledge on to others. There is a place for everyone, including people who look like her, in the STEM world, and she’s on a mission to inspire others while working to increase diversity, equity, and inclusion in the STEM space.

“Many of us who are underrepresented in STEM have taken on the responsibility of spearheading institutional change toward more just, equitable, and inclusive working environments as a form of survival,” she explains. “I’m putting in more work on top of the research I do because I recognize that I do better research if I feel supported and if I feel like I can bring my whole self to my job. My hope is that one day Black and brown women and gender-queer folks interested in science can pursue just that and not have to fight for their right to be a scientist or defend that they are worthy of doing science.”

Click here to read the full article on Be Latina.

Your favourite Instagram face might not be a human. How AI is taking over influencer roles

By Mint

South Korean influencer Rozy has over 130,000 followers on Instagram. She posts photos of globetrotting adventures; she sings, dances and models. The interesting fact is that, unlike most popular faces on the platform, Rozy is not a real human. This digitally rendered being looks so real, however, that she is often mistaken for flesh and blood.

How Rozy was designed
Sidus Studio X, the Seoul-based company that created Rozy, describes her as a blended personality: part human, part AI, and part robot. She is “able to do everything that humans cannot … in the most human-like form,” the studio says on its website.

Sidus Studio X explains that sometimes it creates an image of Rozy from head to toe, while at other times it superimposes her head onto the body of a human model.

Rozy was launched in 2020, and since then she has landed several brand deals and sponsorships, participated in several virtual fashion shows and released two singles.

And according to a CNN report, Rozy is not alone; there are several others like her. Facebook and Instagram together host more than 200 virtual influencers on their platforms.

The CGI (computer-generated imagery) technology behind Rozy isn’t new. It is ubiquitous in today’s entertainment industry, where artists use it to craft realistic nonhuman characters in movies, computer games and music videos. But it has only recently been used to make influencers, the report reads.

South Korean retail brand Lotte Home Shopping created its own virtual influencer, Lucy, who now has 78,000 Instagram followers.

Lee Bo-hyun, a Lotte representative, said Lucy is more than a pretty face: she studied industrial design and works in car design. She posts about her job and her interests, such as her love of animals and kimbap, rice rolls wrapped in seaweed.

There is a risk attached
However, there are always risks attached. Meta, the parent company of Facebook and Instagram, has acknowledged them.

“Like any disruptive technology, synthetic media has the potential for both good and harm. Issues of representation, cultural appropriation and expressive liberty are already a growing concern,” the company said in a blog post.

“To help brands navigate the ethical quandaries of this emerging medium and avoid potential hazards, (Meta) is working with partners to develop an ethical framework to guide the use of (virtual influencers).”

However, even though the older generation is quite skeptical, younger users are comfortable communicating with virtual influencers.

Lee Na-kyoung, a 23-year-old living in Incheon, began following Rozy about two years ago thinking she was a real person. Rozy followed her back, sometimes commenting on her posts, and a virtual friendship blossomed, one that has endured even after Lee found out the truth, the CNN report said.

“We communicated like friends and I felt comfortable with her — so I don’t think of her as an AI but a real friend,” Lee said.

Click here to read the full article on Mint.

Terrence Howard Claims He Invented ‘New Hydrogen Technology’ To Defend Uganda
Terrence Howard on the red carpet

By BET

Former Empire actor and red carpet scientist Terrence Howard is currently visiting Uganda as part of a government effort to draw investors from the African diaspora to the nation. He claims he has what it takes to change the world.

According to Vice, Howard made a lofty presentation on Wednesday, July 13, addressing officials and claiming to have developed a “new hydrogen technology.”

Famously, Howard argued in Rolling Stone that one times one equals two, and now he says his new system, The Lynchpin, would be able to clean the ocean and defend Uganda from exploitation via cutting-edge drone technology. The proprietary technology he announced in a 2021 press release is said to hold 86 patents.

“I was able to identify the grand unified field equation they’ve been looking for and put it into geometry,” he shared in front of an audience of Ugandan dignitaries. “We’re talking about unlimited bonding, unlimited predictable structures, supersymmetry.”

“The Lynchpins are now able to behave as a swarm, as a colony, that can defend a nation, that can harvest food, that can remove plastics from the ocean, that can give the children of Uganda and the people of Uganda an opportunity to spread this and sell these products throughout the world,” he added.

Howard, who briefly quit acting in 2019 only to come out of retirement in 2020, has seemingly made rewriting history a personal side hustle. According to Vice, he made nebulous claims that rapidly went viral on social media, saying, “I’ve made some discoveries in my own personal life with the science that, y’know, Pythagoras was searching for. I was able to open up the flower of life properly and find the real wave conjugations we’ve been looking for 10,000 years.”

While his latest claims have yet to be clarified, Howard was invited to speak by Frank Tumwebaze, the minister of agriculture, animal industries, and fishery.

Click here to read the full article on BET.

Doctors using AI catch breast cancer more often than either does alone
scan of breast tissue with cancer

By MIT Technology Review

Radiologists assisted by an AI screen for breast cancer more successfully than they do when they work alone, according to new research. That same AI also produces more accurate results in the hands of a radiologist than it does when operating solo.

The large-scale study, published this month in The Lancet Digital Health, is the first to directly compare an AI’s performance in breast cancer screening according to whether it’s used alone or to assist a human expert. The hope is that such AI systems could save lives by detecting cancers doctors miss, free up radiologists to see more patients, and ease the burden in places where there is a dire lack of specialists.

The software being tested comes from Vara, a startup based in Germany that also led the study. The company’s AI is already used in over a fourth of Germany’s breast cancer screening centers and was introduced earlier this year to a hospital in Mexico and another in Greece.

The Vara team, with help from radiologists at Essen University Hospital in Germany and Memorial Sloan Kettering Cancer Center in New York, tested two approaches. In the first, the AI works alone to analyze mammograms. In the other, the AI automatically distinguishes between scans it thinks look normal and those that raise a concern. It refers the latter to a radiologist, who reviews them before seeing the AI’s assessment. The AI then issues a warning if it detected cancer when the doctor did not.

To train the neural network, Vara fed the AI data from over 367,000 mammograms—including radiologists’ notes, original assessments, and information on whether the patient ultimately had cancer—to learn how to place these scans into one of three buckets: “confident normal,” “not confident” (in which no prediction is given), and “confident cancer.” The conclusions from both approaches were then compared with the decisions real radiologists originally made on 82,851 mammograms sourced from screening centers that didn’t contribute scans used to train the AI.

The second approach—doctor and AI working together—was 3.6% better at detecting breast cancer than a doctor working alone, and raised fewer false alarms. It accomplished this while automatically setting aside scans it classified as confidently normal, which amounted to 63% of all mammograms. This intense streamlining could slash radiologists’ workloads.

After breast cancer screenings, patients with a normal scan are sent on their way, while an abnormal or unclear scan triggers follow-up testing. But radiologists examining mammograms miss 1 in 8 cancers. Fatigue, overwork, and even the time of day all affect how well radiologists can identify tumors as they view thousands of scans. Signs that are visually subtle are also generally less likely to set off alarms, and dense breast tissue—found mostly in younger patients—makes signs of cancer harder to see.

Radiologists using the AI in the real world are required by German law to look at every mammogram, at least glancing at those the AI calls fine. The AI still lends them a hand by pre-filling reports on scans labeled normal, though the radiologist can always reject the AI’s call.

Thilo Töllner, a radiologist who heads a German breast cancer screening center, has used the program for two years. He’s sometimes disagreed when the AI classified scans as confident normal and manually filled out reports to reflect a different conclusion, but he says “normals are almost always normal.” Mostly, “you just have to press enter.”

Mammograms the AI has labeled as ambiguous or “confident cancer” are referred to a radiologist—but only after the doctor has offered an initial, independent assessment.

Radiologists classify mammograms on a 0 to 6 scale known as BI-RADS, where lower is better. A score of 3 indicates that something is probably benign, but worth checking up on. If Vara has assigned a BI-RADS score of 3 or higher to a mammogram the radiologist labels normal, a warning appears.
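
A rough sketch of that workflow logic may make it concrete. The three bucket names below come from the article; the function, field names and return strings are hypothetical, not Vara’s implementation.

    # Hypothetical sketch of the referral-and-warning flow described above;
    # the bucket names follow the article, everything else is assumed.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIRead:
        bucket: str                   # "confident normal" | "not confident" | "confident cancer"
        birads: Optional[int] = None  # AI's BI-RADS estimate (0-6), when produced

    def route(ai: AIRead, radiologist_read: Optional[str] = None) -> str:
        """Return the workflow outcome for one mammogram."""
        if ai.bucket == "confident normal":
            # Report is pre-filled as normal; under German law the radiologist
            # still glances at the scan and may reject the AI's call.
            return "pre-filled normal report (radiologist confirms or rejects)"
        # Ambiguous or "confident cancer" scans go to a radiologist, who
        # reads them independently before seeing the AI's assessment.
        if radiologist_read is None:
            raise ValueError("referred scans need an independent human read")
        if radiologist_read == "normal" and (ai.birads or 0) >= 3:
            return "warning: AI assigned BI-RADS >= 3 to a scan read as normal"
        return f"radiologist decision stands: {radiologist_read}"

    # The safety-net case from the article:
    print(route(AIRead("not confident", birads=3), radiologist_read="normal"))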

AI generally excels at image classification. So why did Vara’s AI on its own underperform a lone doctor? Part of the problem is that a mammogram alone can’t determine whether someone has cancer—that requires removing and testing the abnormal-looking tissue. Instead, the AI examines mammograms for hints.

Christian Leibig, lead author on the study and director of machine learning at Vara, says that mammograms of healthy and cancerous breasts can look very similar, and both types of scans can present a wide range of visual results. This complicates AI training. So does the low prevalence of cancer in breast screenings (according to Leibig, “in Germany, it’s roughly six in 1,000”). Because AIs trained to catch cancer are mostly trained on healthy breast scans, they can be prone to false positives.
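
A worked example shows why. Take the six-in-1,000 prevalence Leibig cites and assume, purely for illustration, a model with 90% sensitivity and 95% specificity (these are not Vara’s published figures). The share of flagged scans that actually turn out to be cancer is then

    PPV = (0.90 × 0.006) / (0.90 × 0.006 + 0.05 × 0.994) ≈ 0.098,

so under those assumed error rates, more than nine in ten flags would be false alarms.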

The study tested the AI only on past mammogram decisions and assumed that radiologists would agree with the AI each time it issued a decision of “confident normal” or “confident cancer.” When the AI was unsure, the study defaulted to the original radiologist’s reading. That means it couldn’t test how using AI affects radiologists’ decisions—and whether any such changes may create new risks. Töllner admits he spends less time scrutinizing scans Vara labels normal than those it deems suspicious. “You get quicker with the normals because you get confident with the system,” he says.

Click here to read the full article on MIT Technology Review.

A 76 million-year-old dinosaur skeleton will be auctioned in New York City
A 76 million-year-old Gorgosaurus dinosaur skeleton

By NPR

The fossilized skeleton of a T. rex relative that roamed the earth about 76 million years ago will be auctioned in New York this month, Sotheby’s announced Tuesday.

The Gorgosaurus skeleton will highlight Sotheby’s natural history auction on July 28, the auction house said.

The Gorgosaurus was an apex carnivore that lived in what is now the western United States and Canada during the late Cretaceous Period. It predated its relative the Tyrannosaurus rex by 10 million years.

The specimen being sold was discovered in 2018 in the Judith River Formation near Havre, Montana, Sotheby’s said. It measures nearly 10 feet (3 meters) tall and 22 feet (6.7 meters) long.

All of the other known Gorgosaurus skeletons are in museum collections, making this one the only specimen available for private ownership, the auction house said.

“In my career, I have had the privilege of handling and selling many exceptional and unique objects, but few have the capacity to inspire wonder and capture imaginations quite like this unbelievable Gorgosaurus skeleton,” Cassandra Hatton, Sotheby’s global head of science and popular culture, said.

Sotheby’s presale estimate for the fossil is $5 million to $8 million.

A Gorgosaurus dinosaur skeleton is displayed at Sotheby’s New York on Tuesday.

Click here to read the full article on NPR.

At 17, she was her family’s breadwinner on a McDonald’s salary. Now she’s gone into space

By Jackie Wattles, CNN

A rocket built by Jeff Bezos’ Blue Origin carried its fifth group of passengers to the edge of space, including the first-ever Mexican-born woman to make such a journey.

The 60-foot-tall suborbital rocket took off from Blue Origin’s facilities in West Texas at 9:26 a.m. ET, vaulting a group of six people to more than 62 miles above the Earth’s surface, a height widely deemed to mark the boundary of outer space, and giving them a few minutes of weightlessness before the capsule parachuted to a landing.

Most of the passengers paid an undisclosed sum for their seats. But Katya Echazarreta, an engineer and science communicator from Guadalajara, Mexico, was selected by a nonprofit called Space for Humanity to join this mission from a pool of thousands of applicants. The organization’s goal is to send “exceptional leaders” to space and allow them to experience the overview effect, a phenomenon frequently reported by astronauts who say that viewing the Earth from space gives them a profound shift in perspective.

Echazarreta told CNN Business that she experienced that overview effect “in my own way.”

“Looking down and seeing how everyone is down there, all of our past, all of our mistakes, all of our obstacles, everything — everything is there,” she said. “And the only thing I could think of when I came back down was that I need people to see this. I need Latinas to see this. And I think that it just completely reinforced my mission to continue getting primarily women and people of color up to space and doing whatever it is they want to do.”

Echazarreta is the first Mexican-born woman to travel to space and the second Mexican after Rodolfo Neri Vela, a scientist who joined one of NASA’s Space Shuttle missions in 1985.

She moved to the United States with her family at the age of seven, and she recalls being overwhelmed in a new place where she didn’t speak the language; a teacher warned her she might have to be held back.

“It just really fueled me and I think ever since then, ever since the third grade, I kind of just went off and have not stopped,” Echazarreta recalled in an Instagram interview.

When she was 17 and 18, Echazarreta said she was also the main breadwinner for her family on a McDonald’s salary.

“I had sometimes up to four [jobs] at the same time, just to try to get through college because it was really important for me,” she said.

These days, Echazarreta is working on her master’s degree in engineering at Johns Hopkins University. She previously worked at NASA’s famed Jet Propulsion Laboratory in California. She also boasts a following of more than 330,000 users on TikTok, hosts a science-focused YouTube series and is a presenter on the weekend CBS show “Mission Unstoppable.”

Space for Humanity — which was founded in 2017 by Dylan Taylor, a space investor who recently joined a Blue Origin flight himself — chose her for her impressive contributions. “We were looking for people who were leaders in their communities, who have a sphere of influence; people who are doing really great work in the world already, and people who are passionate about whatever that is,” Rachel Lyons, the nonprofit’s executive director, told CNN Business.

Click here to read the full article on CNN.

Disabled people are ‘invisible by exclusion’ in politics, says Assemblymember running to be the first openly autistic member of Congress
Assemblymember Yuh-Line Niou

By Business Insider

The halls of Congress have yet to see an openly autistic legislator, but New York Assemblymember Yuh-Line Niou could change that.

Niou, who was diagnosed with autism at 22, said she was “surprised” to learn she could be the first openly autistic member of Congress, but also said it showed a lack of representation of disabled communities in policymaking.

“I think we hear a lot of the first and only sometimes,” Niou told Insider. “While it’s an amazing thing, I think that what’s more important is that there are people understanding that it’s also a really lonely thing. And I think that it really is important to have representation because you need that lens to talk about everything in policy.”

Niou, a progressive Democrat and Taiwanese immigrant who represents New York’s 65th district, announced her run for Congress this year in a high-profile race against Bill de Blasio and Rep. Mondaire Jones.

Niou’s diagnosis became well known after Refinery29 published an article discussing it in 2020. After parents and kids who related to her story reached out, she became aware of how talking openly about her autism helped to “drive away stigma.”

Among full-time politicians, disabled Americans are underrepresented. People with disabilities make up 6.3% of federal politicians, compared to 15.7% of all adults in America who are disabled, research from Rutgers shows.

“People with disabilities cannot achieve equality unless they are part of government decision-making,” said Lisa Schur in the 2019 Rutgers report.

The number of disabled Americans may have increased in the past two years. Estimates show that 1.2 million more people may have become disabled as a result of COVID-19.

Niou also said that she knows what it feels like to be shut out of the government process. In 2016, she became the first Asian American to serve as the Assemblymember for her district, a heavily Asian district that includes New York’s Chinatown.

Disabled people have been “invisible by exclusion from the policy-making process,” Niou said. Her disability status helps her bring perspective to a host of laws from transportation to housing, and she wants to make sure that neurodivergent people have more of a say in the legislative process.

“We’re not considering all the different diverse perspectives, especially when you’re talking about neurodivergent [issues] or when we’re talking about disability issues,” Niou said.

Disabled people are more likely to be incarcerated, are at higher risk of homelessness, and are more likely to face poverty.

Click here to read the full article on Business Insider.

Disability Inclusion Is Coming Soon to the Metaverse
Disabled avatars from the metaverse, including a wheelchair user

By Christopher Reardon, PC Mag

When you think of futurism, you probably don’t think of the payroll company ADP — but that’s where Giselle Mota works as the company’s principal consultant on the “future of work.” Mota, who has given a TED Talk and has written for Forbes, is committed to bringing more inclusion and access to the Web3 and metaverse spaces. She’s also been working on a side project called Unhidden, which will provide disabled people with accurate avatars, so they’ll have the option to remain themselves in the metaverse and across Web3.

To See and Be Seen
The goal of Unhidden is to encourage tech companies to be more inclusive, particularly of people with disabilities. The project has launched and already has a partnership with the Wanderland app, which will feature Unhidden avatars through its mixed-reality platform at the VivaTech Conference in Paris and the DisabilityIN Conference in Dallas. The first 12 avatars will come out this summer with Mota, Dr. Tiffany Jana, Brandon Farstein, Tiffany Yu, and other global figures representing disability inclusion.

This array of individuals is known as the NFTY Collective. Its members hail from countries including America, the UK, and Australia, and the collective represents a spectrum of disabilities, ranging from the invisible type, such as bipolar disorder and other forms of neurodiversity, to the more visible, including hypoplasia and dwarfism.

Hypoplasia causes the underdevelopment of an organ or tissue. For Isaac Harvey, the condition manifested by leaving him with no arms and short legs. Harvey, who uses a wheelchair, is the president of Wheels for Wheelchairs and a video editor. He got involved with Unhidden after being approached by Victoria Jenkins, an inclusive fashion designer who co-created the project with Mota.

Click here to read the full article on PC Mag.

For people with disabilities, AI can only go so far to make the web more accessible
AI technology

By Kate Kaye, Protocol

“It’s a lot to listen to a robot all day long,” said Tina Pinedo, communications director at Disability Rights Oregon, a group that works to promote and defend the rights of people with disabilities.

But listening to a machine is exactly what many people with visual impairments do while using screen reading tools to accomplish everyday online tasks such as paying bills or ordering groceries from an ecommerce site.

“There are not enough web developers or people who actually take the time to listen to what their website sounds like to a blind person. It’s auditorily exhausting,” said Pinedo.

Whether struggling to comprehend a screen reader barking out dynamic updates to a website, trying to make sense of poorly written video captions or watching out for fast-moving imagery that could induce a seizure, the everyday obstacles blocking people with disabilities from a satisfying digital experience are immense.

Needless to say, technology companies have tried to step in, often promising more than they deliver to users and businesses hoping that automated tools can break down barriers to accessibility. Although automated tech used to check website designs for accessibility flaws has been around for some time, companies such as Evinced claim that sophisticated AI not only does a better job of automatically finding and helping correct accessibility problems, but can also do it for large enterprises that need to manage thousands of website pages and volumes of app content.

Still, people with disabilities and those who regularly test for web accessibility problems say automated systems and AI can only go so far. “The big danger is thinking that some type of automation can replace a real person going through your website, and basically denying people of their experience on your website, and that’s a big problem,” Pinedo said.

Why Capital One is betting on accessibility AI
For a global corporation such as Capital One, relying on a manual process to catch accessibility issues is a losing battle.

“We test our entire digital footprint every month. That’s heavily reliant on automation as we’re testing almost 20,000 webpages,” said Mark Penicook, director of Accessibility at the banking and credit card company, whose digital accessibility team is responsible for all digital experiences across Capital One including websites, mobile apps and electronic messaging in the U.S., the U.K. and Canada.

Even though Capital One has a team of people dedicated to the effort, Penicook said he has had to work to raise awareness about digital accessibility among the company’s web developers. “Accessibility isn’t taught in computer science,” Penicook told Protocol. “One of the first things that we do is start teaching them about accessibility.”

One way the company does that is by celebrating Global Accessibility Awareness Day each year, Penicook said. Held on Thursday, the annual worldwide event is intended to educate people about digital access and inclusion for those with disabilities and impairments.

Before Capital One gave Evinced’s software a try around 2018, its accessibility evaluations for new software releases or features relied on manual review and other tools. Using Evinced’s software, Penicook said the financial services company’s accessibility testing takes hours rather than weeks, and Capital One’s engineers and developers use the system throughout their internal software development testing process.

It was enough to convince Capital One to invest in Evinced through its venture arm, Capital One Ventures. Microsoft’s venture group, M12, also joined a $17 million funding round for Evinced last year.

Evinced’s software automatically scans webpages and other content, and then applies computer vision and visual analysis AI to detect problems. The software might discover a lack of contrast between font and background colors that make it difficult for people with vision impairments like color blindness to read. The system might find images that do not have alt text, the metadata that screen readers use to explain what’s in a photo or illustration. Rather than pointing out individual problems, the software uses machine learning to find patterns that indicate when the same type of problem is happening in several places and suggests a way to correct it.
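
Evinced’s own detectors are proprietary, but the contrast check described here has a published basis: the WCAG 2.x contrast-ratio formula. Below is a minimal Python sketch of that single check, assuming 8-bit sRGB colors; it implements the public standard, not Evinced’s code.

    # WCAG 2.x contrast ratio between a font color and its background.
    def _linearize(channel: int) -> float:
        """Convert an 8-bit sRGB channel to linear light (per WCAG 2.x)."""
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb: tuple[int, int, int]) -> float:
        r, g, b = (_linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
        lighter, darker = sorted(
            (relative_luminance(fg), relative_luminance(bg)), reverse=True
        )
        return (lighter + 0.05) / (darker + 0.05)

    # WCAG AA requires at least 4.5:1 for normal body text.
    print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48, fails AA
    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0, passes

Gray #777777 on white is the classic near-miss: at roughly 4.48:1 it falls just short of the 4.5:1 AA threshold for body text.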

“It automatically tells you, instead of a thousand issues, it’s actually one issue,” said Navin Thadani, co-founder and CEO of Evinced.

The software also takes context into account, factoring in the purpose of a site feature or considering the various operating systems or screen-reader technologies that people might use when visiting a webpage or other content. For instance, it identifies user design features that might be most accessible for a specific purpose, such as a button to enable a bill payment transaction rather than a link.

Some companies use tools typically referred to as “overlays” to check for accessibility problems. Many of those systems are web plug-ins that add a layer of automation on top of existing sites to enable modifications tailored to peoples’ specific requirements. One product that uses computer vision and machine learning, accessiBe, allows people with epilepsy to choose an option that automatically stops all animated images and videos on a site before they could pose a risk of seizure. The company raised $28 million in venture capital funding last year.

Another widget from TruAbilities offers an option that limits distracting page elements to allow people with neurodevelopmental disorders to focus on the most important components of a webpage.

Some overlay tools have been heavily criticized for adding new annoyances to the web experience and providing surface-level responses to problems that deserve more robust solutions. Some overlay tech providers have “pretty brazen guarantees,” said Chase Aucoin, chief architect at TPGi, a company that provides accessibility automation tools and consultation services to customers, including software development monitoring and product design assessments for web development teams.

“[Overlays] give a false sense of security from a risk perspective to the end user,” said Aucoin, who himself experiences motor impairment. “It’s just trying to slap a bunch of paint on top of the problem.”

In general, complicated site designs or interfaces that automatically hop to a new page section or open a new window can create a chaotic experience for people using screen readers, Aucoin said. “A big thing now is just cognitive; how hard is this thing for somebody to understand what’s going on?” he said.

Even more sophisticated AI-based accessibility technologies don’t address every disability issue. For instance, people with an array of disabilities either need or prefer to view videos with captions, rather than having sound enabled. However, although automated captions for videos have improved over the years, “captions that are computer-generated without human review can be really terrible,” said Karawynn Long, an autistic writer with central auditory processing disorder and hyperlexia, a hyperfocus on written language.

“I always appreciate when written transcripts are included as an option, but auto-generated ones fall woefully short, especially because they don’t include good indications of non-linguistic elements of the media,” Long said.

Click here to read the full article on Protocol.

Upcoming Events

  1. City Career Fair
    January 19, 2022 - November 4, 2022
  2. The Small Business Expo–Multiple Event Dates
    February 17, 2022 - December 1, 2022
  3. 44th Annual BDPA National Conference
    August 18, 2022 - August 20, 2022
  4. Diversity Alliance for Science (DA4S) West Coast Conference
    August 30, 2022 - September 1, 2022
  5. Diversity Alliance for Science (DA4S) Matchmaking Events
    September 1, 2022
  6. Commercial UAV Expo Americas
    September 6, 2022 - September 8, 2022
