The fossilized skeleton of a T. rex relative that roamed the earth about 76 million years ago will be auctioned in New York this month, Sotheby’s announced Tuesday.
The Gorgosaurus skeleton will highlight Sotheby’s natural history auction on July 28, the auction house said.
The Gorgosaurus was an apex carnivore that lived in what is now the western United States and Canada during the late Cretaceous Period. It predated its relative the Tyrannosaurus rex by 10 million years.
The specimen being sold was discovered in 2018 in the Judith River Formation near Havre, Montana, Sotheby’s said. It measures nearly 10 feet (3 meters) tall and 22 feet (6.7 meters) long.
All of the other known Gorgosaurus skeletons are in museum collections, making this one the only specimen available for private ownership, the auction house said.
“In my career, I have had the privilege of handling and selling many exceptional and unique objects, but few have the capacity to inspire wonder and capture imaginations quite like this unbelievable Gorgosaurus skeleton,” Cassandra Hatton, Sotheby’s global head of science and popular culture, said.
Sotheby’s presale estimate for the fossil is $5 million to $8 million.
More than 200 climate scientists just released a stark look at how fast the climate is warming, showing heat waves, extreme rain and intense droughts are on the rise. The evidence for warming is “unequivocal” but the extent of future disasters will be determined by how fast governments can cut heat-trapping emissions. Here are the top findings from the report.
#1 Humans are causing rapid and widespread warming
Carbon dioxide in the atmosphere has now reached the highest level in at least the past 2 million years. As a result, temperatures are warming quickly. Since 1970, global temperatures have increased faster than in any other 50-year period in the last 2,000 years. Some parts of the globe, like the poles, are warming even faster.
#2 Extreme weather is on the rise and will keep getting worse
Heat waves are more frequent and intense. Storms are dumping more rainfall, causing floods. Droughts are getting hotter and drier. Scientists are finding these trends are directly linked to the human influence on the climate and they’re getting worse.
#3 If humans cut emissions, the worst impacts are avoidable
While the planet will continue to warm in the near term, scientists say there is still time to prevent catastrophic climate change. That would mean a rapid drop in emissions from power plants and cars over the next few decades, essentially halting the use of fossil fuels.
Click here to read the full article on NPR.
By Carl Zimmer, Yahoo! News
A team of scientists announced Monday that they had partially restored the sight of a blind man by building light-catching proteins in one of his eyes. Their report, which appeared in the journal Nature Medicine, is the first published study to describe the successful use of this treatment. “Seeing for the first time that it did work — even if only in one patient and in one eye — is exciting,” said Ehud Isacoff, a neuroscientist at the University of California, Berkeley, who was not involved in the study.
The procedure is a far cry from full vision. The volunteer, a 58-year-old man who lives in France, had to wear special goggles that gave him the ghostly perception of objects in a narrow field of view. But the authors of the report say that the trial — the result of 13 years of work — is a proof of concept for more effective treatments to come.
“It’s obviously not the end of the road, but it’s a major milestone,” said José-Alain Sahel, an ophthalmologist who splits his time between the University of Pittsburgh and the Sorbonne in Paris.
Sahel and other scientists have tried for decades to find a cure for inherited forms of blindness. These genetic disorders rob the eyes of essential proteins required for vision.
When light enters the eye, it is captured by photoreceptor cells. The photoreceptors then send an electrical signal to their neighbors, called ganglion cells, which can identify important features like motion. They then send signals of their own to the optic nerve, which delivers the information to the brain.
In previous studies, researchers have been able to treat a genetic form of blindness called Leber congenital amaurosis, by fixing a faulty gene that would otherwise cause photoreceptors to gradually degenerate.
But other forms of blindness cannot be treated so simply, because their victims lose their photoreceptors completely.
“Once the cells are dead, you cannot repair the gene defect,” Sahel said.
For these diseases, Sahel and other researchers have been experimenting with a more radical kind of repair. They are using gene therapy to turn ganglion cells into new photoreceptor cells, even though ganglion cells don’t normally capture light.
The scientists are taking advantage of proteins derived from algae and other microbes that can make any nerve cell sensitive to light.
In the early 2000s, neuroscientists figured out how to install some of these proteins into the brain cells of mice and other lab animals by injecting viruses carrying their genes. The viruses infected certain types of brain cells, which then used the new gene to build light-sensitive channels.
Originally, researchers developed this technique, called optogenetics, as a way to probe the workings of the brain. By inserting a tiny light into the animal’s brain, they could switch a certain type of brain cell on or off with the flick of a switch. The method has enabled them to discover the circuitry underlying many kinds of behavior.
Click here to read the full article on Yahoo! News.
By Ashley Strickland of CNN
The mystique of Mars is one that humans can’t seem to resist. The red planet has easily captured our interest for centuries, heavily featured in science fiction books and films and the subject of robotic exploration since the 1960s.
By BOB HOLMES, KNOWABLE MAGAZINE
Life, for most of us, ends far too soon—hence the effort by biomedical researchers to find ways to delay the aging process and extend our stay on Earth. But there’s a paradox at the heart of the science of aging: The vast majority of research focuses on fruit flies, nematode worms and laboratory mice, because they’re easy to work with and lots of genetic tools are available. And yet, a major reason that geneticists chose these species in the first place is because they have short lifespans. In effect, we’ve been learning about longevity from organisms that are the least successful at the game.
Today, a small number of researchers are taking a different approach and studying unusually long-lived creatures—ones that, for whatever evolutionary reasons, have been imbued with lifespans far longer than other creatures they’re closely related to. The hope is that by exploring and understanding the genes and biochemical pathways that impart long life, researchers may ultimately uncover tricks that can extend our own lifespans, too.
Everyone has a rough idea of what aging is, just from experiencing it as it happens to themselves and others. Our skin sags, our hair goes gray, joints stiffen and creak—all signs that our components—that is, proteins and other biomolecules—aren’t what they used to be. As a result, we’re more prone to chronic diseases such as cancer, Alzheimer’s and diabetes—and the older we get, the more likely we are to die each year. “You live, and by living you produce negative consequences like molecular damage. This damage accumulates over time,” says Vadim Gladyshev, who researches aging at Harvard Medical School. “In essence, this is aging.”
This happens faster for some species than others, though—the clearest pattern is that bigger animals tend to live longer lives than smaller ones. But even after accounting for size, huge differences in longevity remain. A house mouse lives just two or three years, while the naked mole rat, a similar-sized rodent, lives more than 35 years. Bowhead whales are enormous—the second-largest living mammal—but their 200-year lifespan is at least double what you’d expect given their size. Humans, too, are outliers: We live twice as long as our closest relatives, the chimpanzees.
Bats above average
Perhaps the most remarkable animal Methuselahs are found among bats. One individual of Myotis brandtii, a small bat about a third the size of a mouse, was recaptured, still hale and hearty, 41 years after it was initially banded. That is especially amazing for an animal living in the wild, says Emma Teeling, a bat evolutionary biologist at University College Dublin who coauthored a review exploring the value of bats in studying aging in the 2018 Annual Review of Animal Biosciences. “It’s equivalent to about 240 to 280 human years, with little to no sign of aging,” she says. “So bats are extraordinary. The question is, Why?”
There are actually two ways to think about Teeling’s question. First: What are the evolutionary reasons that some species have become long-lived, while others haven’t? And, second: What are the genetic and metabolic tricks that allow them to do that?
Answers to the first question, at least in broad brushstrokes, are becoming fairly clear. The amount of energy that a species should put toward preventing or repairing the damage of living depends on how likely an individual is to survive long enough to benefit from all that cellular maintenance. “You want to invest enough that the body doesn’t fall apart too quickly, but you don’t want to over-invest,” says Tom Kirkwood, a biogerontologist at Newcastle University in the UK. “You want a body that has a good chance of remaining in sound condition for as long as you have a decent statistical probability to survive.”
This implies that a little scurrying rodent like a mouse has little to gain by investing much in maintenance, since it will probably end up as a predator’s lunch within a few months anyway. That low investment means it should age more quickly. In contrast, species such as whales and elephants are less vulnerable to predation or other random strokes of fate and are likely to survive long enough to reap the benefits of better-maintained cellular machinery. It’s also no surprise that groups such as birds and bats—which can escape enemies by flying—tend to live longer than you’d expect given their size, Kirkwood says. The same would apply for naked mole rats, which live their lives in subterranean burrows where they are largely safe from predators.
But the question that researchers most urgently want to answer is the second one: How do long-lived species manage to delay aging? Here, too, the outline of an answer is beginning to emerge as researchers compare species that differ in longevity. Long-lived species, they’ve found, accumulate molecular damage more slowly than shorter-lived ones do. Naked mole rats, for example, have an unusually accurate ribosome, the cellular structure responsible for assembling proteins. It makes only a tenth as many errors as normal ribosomes, according to a study led by Vera Gorbunova, a biologist at the University of Rochester. And it’s not just mole rats: In a follow-up study comparing 17 rodent species of varying longevity, Gorbunova’s team found that the longer-lived species, in general, tended to have more accurate ribosomes.
The proteins of naked mole rats are also more stable than those of other mammals, according to research led by Rochelle Buffenstein, a comparative gerontologist at Calico, a Google spinoff focused on aging research. Cells of this species have greater numbers of a class of molecules called chaperones that help proteins fold properly. They also have more vigorous proteasomes, structures that dispose of defective proteins. Those proteasomes become even more active when faced with oxidative stress, reactive chemicals that can damage proteins and other biomolecules; in contrast, the proteasomes of mice become less efficient, thus allowing damaged proteins to accumulate and impair the cell’s workings.
DNA, too, seems to be maintained better in longer-lived mammals. When Gorbunova’s team compared the efficiency with which 18 rodent species repaired a particular kind of damage (called a double-strand break) in their DNA molecules, they found that species with longer lifespans, such as naked mole rats and beavers, outperformed shorter-lived species such as mice and hamsters. The difference was largely due to a more powerful version of a gene known as Sirt6, which was already known to affect lifespan in mice.
Click here to read the full article on KNOWABLE MAGAZINE.
By Martin Anderson, Unite
Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.
The new approach can not only distinguish between unaffected and affected subjects, but can also correctly distinguish depression from schizophrenia, as well as the degree to which the patient is currently affected by the disease.
The researchers have provided a composite image that represents the control group for their tests (on the left in the image below) and the patients who are suffering from mental disorders (right). The identities of multiple people are blended in the representations, and neither image depicts a particular individual:
Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.
Until now, facial affect recognition has been primarily used as a potential tool for basic diagnosis. The new approach, instead, offers a possible method to evaluate patient progress throughout treatment, or else (potentially, though the paper does not suggest it) in their own domestic environment for outpatient monitoring.
The paper states:
‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’
The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.
The authors conclude that OEG could potentially be not just a mere secondary aide to diagnosis and treatment, but, in the long term, a potential replacement for certain evaluative parts of the treatment pipeline, and one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:
‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’
However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.
The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.
(The new paper deals mostly with the theories and methods currently popular in the diagnosis of mental disorders, with less attention than usual paid to the actual technologies and processes used in the tests and experiments.)
Data-gathering took place at University Hospital Aachen, with a gender-balanced group of 100 patients and a control group of 50 non-affected people. The patients included 35 with schizophrenia and 65 with depression.
For the patient portion of the test group, the first measurements were taken at the time of initial hospitalization and the second prior to discharge from hospital, spanning an average interval of 12 weeks. The control group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.
In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.
However, the data-gathering stage obtained additional data more suited for machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech c270 consumer webcam running at 25fps.
The first session consisted of a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions and asked to mimic each of them, while stating their own estimation of their mental condition at the time, including emotional state and intensity. This phase lasted around ten minutes.
In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.
Click here to read the full article on Unite.
By Erica Nahmad, Be Latina
It’s undeniable that representation matters and the idea of what a scientist could or should look like is changing, largely thanks to pioneers like Afro-Latina scientist Dr. Jessica Esquivel, who is breaking barriers for women in STEM one step at a time.
Dr. Esquivel isn’t just extraordinary because of what she is capable of as an Afro-Latina astrophysicist — she’s also extraordinary in her vulnerability and relatability. She’s on a mission to break barriers in science and to show the humanity behind scientists.
Dr. Esquivel makes science accessible to everyone, no matter what you look like or where you come from. As one of the only Afro-Latina scientists in her field, and one of the only women who looked like her to pursue a Ph.D. in physics, Dr. Esquivel knows a thing or two about the importance of representation, especially in STEM fields and science labs.
Women make up only 28% of the science, technology, engineering, and math workforce in the U.S. Those disparities are even more severe when you start to look at minority populations.
“When you start looking at the intersections of race and gender and then even sexuality, those numbers drop significantly,” Esquivel told CBS Chicago. “There are only about 100 to 150 black women with their Ph.D. in physics in the country!”
Fighting against the isolation of uniqueness
Dr. Jessica Esquivel recalls being a nontraditional student and being “the only” when she entered graduate school for physics — the only woman in her class, the only Black person, the only Mexican, the only lesbian — and all of that made her feel very isolated.
“On top of such rigorous material, the isolation and otherness that happens due to being the only or one of few is an added burden marginalized people, especially those with multiple marginalized identities, have to deal with,” Dr. Esquivel told BeLatina in an email interview. On top of feeling like an outsider, isolation was also consuming. “Being away from family at a predominately white institution, where the number of microaggressions was constant, really affected my mental health and, in turn, my coursework and research, so it was important to surround myself with mentors who supported me and believed in my ability to be a scientist.”
While she anticipated that the physics curriculum would be incredibly challenging, she was definitely not prepared for how hard the rest of the experience would be and how it would impact her as a student and a scientist.
The challenges she faced professionally and personally made her realize early on just how crucial representation is in academia and all fields, but especially in STEM. “It was really impactful for me to learn that there were other Black women who had made it out of the grad school metaphorical trenches. It’s absolutely important to create inclusive spaces where marginalized people, including Black, Latina, and genderqueer people, can thrive,” she said.
“The secrets of our universe don’t discriminate, these secrets can and should be unraveled by all those who wish to embark on that journey, and my aim is to clear as many barriers and leave these physics spaces better than I entered them.”
When inclusion and equal opportunities are the ultimate goal
Dr. Jessica Esquivel isn’t just dedicating her time and energy to studying complex scientific concepts — think quantum entanglement, the fabric of space-time, the building blocks of the universe, some seriously abstract physics straight out of a sci-fi movie, as she explains. On top of her research, she puts in extra work to show people, especially younger generations of women of color, that the physics and STEM world is not some old white man’s club where prestigious knowledge is reserved for a few. Dr. Esquivel is an expert in her field; she knows things that no one else currently knows, and she has the ability to pass that knowledge on to others. There is a place for everyone, including people who look like her, in the STEM world, and she’s on a mission to inspire others while working to increase diversity, equity, and inclusion in STEM.
“Many of us who are underrepresented in STEM have taken on the responsibility of spearheading institutional change toward more just, equitable, and inclusive working environments as a form of survival,” she explains. “I’m putting in more work on top of the research I do because I recognize that I do better research if I feel supported and if I feel like I can bring my whole self to my job. My hope is that one day Black and brown women and gender-queer folks interested in science can pursue just that and not have to fight for their right to be a scientist or defend that they are worthy of doing science.”
Click here to read the full article on Be Latina.
South Korean influencer Rozy has over 130,000 followers on Instagram. She posts photos of globetrotting adventures; she sings, dances and models. Unlike most popular faces on the platform, however, Rozy is not a real human. Yet this digitally rendered being looks so real that she is often mistaken for flesh and blood.
How Rozy was designed
The Seoul-based company that created Rozy describes her as a blended personality – part human, part AI, and part robot. She is “able to do everything that humans cannot … in the most human-like form,” Sidus Studio X says on its website.
Sidus Studio X explains that it sometimes creates an image of Rozy from head to toe, while at other times it superimposes her head onto the body of a human model.
Rozy was launched in 2020. Since then, she has landed several brand deals and sponsorships, participated in several virtual fashion shows, and released two singles.
And according to a CNN report, Rozy is not alone: there are several others like her. Facebook and Instagram together host more than 200 virtual influencers on their platforms.
The CGI (computer-generated imagery) technology behind Rozy isn’t new. It is ubiquitous in today’s entertainment industry, where artists use it to craft realistic nonhuman characters in movies, computer games and music videos. But it has only recently been used to make influencers, the report reads.
South Korean retail brand Lotte Home Shopping created its virtual influencer — Lucy, who now has 78,000 Instagram followers.
Lee Bo-hyun, a Lotte representative, said that Lucy is more than a pretty face. She studied industrial design and works in car design. She posts about her job and her interests, such as her love for animals and kimbap — rice rolls wrapped in seaweed.
There is a risk attached
However, there is always a risk attached. Facebook and Instagram’s parent company Meta has acknowledged those risks.
“Like any disruptive technology, synthetic media has the potential for both good and harm. Issues of representation, cultural appropriation and expressive liberty are already a growing concern,” the company said in a blog post.
“To help brands navigate the ethical quandaries of this emerging medium and avoid potential hazards, (Meta) is working with partners to develop an ethical framework to guide the use of (virtual influencers).”
However, while older generations remain skeptical, younger people are comfortable communicating with virtual influencers.
Lee Na-kyoung, a 23-year-old living in Incheon, began following Rozy about two years ago thinking she was a real person. Rozy followed her back, sometimes commenting on her posts, and a virtual friendship blossomed — one that has endured even after Lee found out the truth, the CNN report said.
“We communicated like friends and I felt comfortable with her — so I don’t think of her as an AI but a real friend,” Lee said.
Click here to read the full article on Mint.
Former Empire actor and red carpet scientist Terrence Howard is currently visiting Uganda as part of a government effort to draw investors from the African diaspora to the nation. He claims he has what it takes to change the world.
According to Vice, Howard made a lofty presentation on Wednesday, July 13, addressing officials and claiming to have developed a “new hydrogen technology.”
Famously, Howard argued in Rolling Stone that one times one equals two, and now he says his new system, The Lynchpin, would be able to clean the ocean and defend Uganda from exploitation via cutting-edge drone technology. The proprietary technology he announced in a 2021 press release is said to hold 86 patents.
“I was able to identify the grand unified field equation they’ve been looking for and put it into geometry,” he shared in front of an audience of Ugandan dignitaries. “We’re talking about unlimited bonding, unlimited predictable structures, supersymmetry.”
“The Lynchpins are now able to behave as a swarm, as a colony, that can defend a nation, that can harvest food, that can remove plastics from the ocean, that can give the children of Uganda and the people of Uganda an opportunity to spread this and sell these products throughout the world,” he added.
Howard, who briefly quit acting in 2019 only to come out of retirement in 2020, has seemingly made rewriting history a personal side hustle. According to Vice, he made nebulous claims that rapidly went viral on social media, saying, “I’ve made some discoveries in my own personal life with the science that, y’know, Pythagoras was searching for. I was able to open up the flower of life properly and find the real wave conjugations we’ve been looking for 10,000 years.”
While his latest claims have yet to be clarified, Howard was invited to speak by Frank Tumwebaze, the minister of agriculture, animal industries, and fishery.
Click here to read the full article on BET.
By Hana Kiros, MIT Technology Review
Radiologists assisted by an AI screen for breast cancer more successfully than they do when they work alone, according to new research. That same AI also produces more accurate results in the hands of a radiologist than it does when operating solo.
The large-scale study, published this month in The Lancet Digital Health, is the first to directly compare an AI’s performance in breast cancer screening according to whether it’s used alone or to assist a human expert. The hope is that such AI systems could save lives by detecting cancers doctors miss, free up radiologists to see more patients, and ease the burden in places where there is a dire lack of specialists.
The software being tested comes from Vara, a startup based in Germany that also led the study. The company’s AI is already used in over a fourth of Germany’s breast cancer screening centers and was introduced earlier this year to a hospital in Mexico and another in Greece.
The Vara team, with help from radiologists at the Essen University Hospital in Germany and the Memorial Sloan Kettering Cancer Center in New York, tested two approaches. In the first, the AI works alone to analyze mammograms. In the other, the AI automatically distinguishes between scans it thinks look normal and those that raise a concern. It refers the latter to a radiologist, who reviews them before seeing the AI’s assessment. The AI then issues a warning if it detects cancer when the doctor does not.
To train the neural network, Vara fed the AI data from over 367,000 mammograms—including radiologists’ notes, original assessments, and information on whether the patient ultimately had cancer—to learn how to place these scans into one of three buckets: “confident normal,” “not confident” (in which no prediction is given), and “confident cancer.” The conclusions from both approaches were then compared with the decisions real radiologists originally made on 82,851 mammograms sourced from screening centers that didn’t contribute scans used to train the AI.
The second approach—doctor and AI working together—was 3.6% better at detecting breast cancer than a doctor working alone, and raised fewer false alarms. It accomplished this while automatically setting aside scans it classified as confidently normal, which amounted to 63% of all mammograms. This intense streamlining could slash radiologists’ workloads.
After breast cancer screenings, patients with a normal scan are sent on their way, while an abnormal or unclear scan triggers follow-up testing. But radiologists examining mammograms miss 1 in 8 cancers. Fatigue, overwork, and even the time of day all affect how well radiologists can identify tumors as they view thousands of scans. Signs that are visually subtle are also generally less likely to set off alarms, and dense breast tissue—found mostly in younger patients—makes signs of cancer harder to see.
Radiologists using the AI in the real world are required by German law to look at every mammogram, at least glancing at those the AI calls fine. The AI still lends them a hand by pre-filling reports on scans labeled normal, though the radiologist can always reject the AI’s call.
Thilo Töllner, a radiologist who heads a German breast cancer screening center, has used the program for two years. He’s sometimes disagreed when the AI classified scans as confident normal and manually filled out reports to reflect a different conclusion, but he says “normals are almost always normal.” Mostly, “you just have to press enter.”
Mammograms the AI has labeled as ambiguous or “confident cancer” are referred to a radiologist—but only after the doctor has offered an initial, independent assessment.
Radiologists classify mammograms on a 0 to 6 scale known as BI-RADS, where lower is better. A score of 3 indicates that something is probably benign, but worth checking up on. If Vara has assigned a BI-RADS score of 3 or higher to a mammogram the radiologist labels normal, a warning appears.
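The warning rule described above boils down to a simple comparison between the AI’s BI-RADS estimate and the radiologist’s call. A minimal sketch of that logic (the function name and inputs are hypothetical; the article specifies only the score-of-3-or-higher rule):

```python
def should_warn(ai_bi_rads: int, radiologist_says_normal: bool) -> bool:
    """Flag a scan when the AI's BI-RADS estimate is 3 or higher
    (probably benign or worse) but the radiologist labeled it normal."""
    return radiologist_says_normal and ai_bi_rads >= 3

# A scan the doctor called normal but the AI scored BI-RADS 4 triggers a warning.
should_warn(4, True)   # → True
should_warn(2, True)   # → False: AI and doctor agree the scan looks fine
should_warn(5, False)  # → False: the radiologist already flagged it
```

In other words, the AI only interrupts the workflow when the two readers disagree in the dangerous direction.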
AI generally excels at image classification. So why did Vara’s AI on its own underperform a lone doctor? Part of the problem is that a mammogram alone can’t determine whether someone has cancer—that requires removing and testing the abnormal-looking tissue. Instead, the AI examines mammograms for hints.
Christian Leibig, lead author on the study and director of machine learning at Vara, says that mammograms of healthy and cancerous breasts can look very similar, and both types of scans can present a wide range of visual results. This complicates AI training. So does the low prevalence of cancer in breast screenings (according to Leibig, “in Germany, it’s roughly six in 1,000”). Because AIs trained to catch cancer are mostly trained on healthy breast scans, they can be prone to false positives.
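Leibig’s point about low prevalence can be made concrete with Bayes’ rule: when only about six screens in 1,000 contain cancer, even a fairly accurate classifier flags mostly healthy scans. A quick illustration (the 90% sensitivity and 95% specificity figures are invented for the example, not taken from the study):

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Fraction of positive calls that are true cancers (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# With 90% sensitivity, 95% specificity, and a 6-in-1,000 prevalence,
# fewer than 10% of flagged scans actually turn out to be cancer.
ppv = positive_predictive_value(0.90, 0.95, 0.006)
print(f"{ppv:.1%}")  # → 9.8%
```

This base-rate effect is one reason a standalone AI tuned to catch cancers tends to raise more false alarms than the combined doctor-plus-AI workflow.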
The study tested the AI only on past mammogram decisions and assumed that radiologists would agree with the AI each time it issued a decision of “confident normal” or “confident cancer.” When the AI was unsure, the study defaulted to the original radiologist’s reading. That means it couldn’t test how using AI affects radiologists’ decisions—and whether any such changes may create new risks. Töllner admits he spends less time scrutinizing scans Vara labels normal than those it deems suspicious. “You get quicker with the normals because you get confident with the system,” he says.
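The study's retrospective setup amounts to a simple deferral rule. A sketch under that reading, with hypothetical label names:

```python
# Sketch of the study's retrospective evaluation rule: when the AI is
# confident, assume the radiologist would have agreed with it; otherwise
# fall back to the original human reading from the historical record.

def simulated_decision(ai_label: str, original_radiologist_label: str) -> str:
    if ai_label == "confident_normal":
        return "normal"
    if ai_label == "confident_cancer":
        return "cancer"
    # AI unsure: defer to the radiologist's original call.
    return original_radiologist_label
```

The limitation the article notes falls out of this rule: a confident AI call always overrides the human in the simulation, so the evaluation cannot capture cases where a real radiologist, influenced by the AI, would have decided differently.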
Read the full article at MIT Technology Review.
By Jackie Wattles, CNN
A rocket built by Jeff Bezos’ Blue Origin carried its fifth group of passengers to the edge of space, including the first-ever Mexican-born woman to make such a journey.
The 60-foot-tall suborbital rocket took off from Blue Origin’s facilities in West Texas at 9:26 a.m. ET, vaulting a group of six people to more than 62 miles above the Earth’s surface, an altitude widely deemed to mark the boundary of outer space, and giving them a few minutes of weightlessness before their capsule parachuted to a landing.
Most of the passengers paid an undisclosed sum for their seats. But Katya Echazarreta, an engineer and science communicator from Guadalajara, Mexico, was selected by a nonprofit called Space for Humanity to join this mission from a pool of thousands of applicants. The organization’s goal is to send “exceptional leaders” to space and allow them to experience the overview effect, a phenomenon frequently reported by astronauts who say that viewing the Earth from space gives them a profound shift in perspective.
Echazarreta told CNN Business that she experienced that overview effect “in my own way.”
“Looking down and seeing how everyone is down there, all of our past, all of our mistakes, all of our obstacles, everything — everything is there,” she said. “And the only thing I could think of when I came back down was that I need people to see this. I need Latinas to see this. And I think that it just completely reinforced my mission to continue getting primarily women and people of color up to space and doing whatever it is they want to do.”
Echazarreta is the first Mexican-born woman to travel to space and the second Mexican-born person to do so, after Rodolfo Neri Vela, a scientist who joined one of NASA’s Space Shuttle missions in 1985.
She moved to the United States with her family at the age of seven. She recalls being overwhelmed in a new place where she didn’t speak the language, and a teacher warned her she might have to be held back.
“It just really fueled me and I think ever since then, ever since the third grade, I kind of just went off and have not stopped,” Echazarreta recalled in an Instagram interview.
Echazarreta said that when she was 17 and 18, she was also the main breadwinner for her family, supporting them on a McDonald’s salary.
“I had sometimes up to four [jobs] at the same time, just to try to get through college because it was really important for me,” she said.
These days, Echazarreta is working on her master’s degree in engineering at Johns Hopkins University. She previously worked at NASA’s famed Jet Propulsion Laboratory in California. She also boasts a following of more than 330,000 users on TikTok, hosts a science-focused YouTube series and is a presenter on the weekend CBS show “Mission Unstoppable.”
Space for Humanity — which was founded in 2017 by Dylan Taylor, a space investor who recently joined a Blue Origin flight himself — chose her for her impressive contributions. “We were looking for, like, some people who were leaders in their communities, who have a sphere of influence; people who are doing really great work in the world already, and people who are passionate about whatever that is,” Rachel Lyons, the nonprofit’s executive director, told CNN Business.
Read the full article at CNN.