Researchers scanned the genomes of nearly 200 outbred mice from eight parental lines, including some with wild ancestry, to mirror genetic diversity in humans. This wide approach helped them find a strong genetic influence in the prefrontal cortex, a brain area that controls attention.
High-performing mice had much lower expression of the Homer1 gene in that area. The gene lay within a locus that explained almost 20 percent of the variation in attention among the mice.
Further work showed the effect came from two short forms of the gene. Reducing those forms in adolescent mice made them faster, more accurate and less distractible, but the same change in adults had no effect.
Difficult words
- genome — All the genetic material of an organism
- ancestry — Family origins or background in past generations
- prefrontal cortex — Front brain area that helps control attention
- locus — A specific place on a chromosome
- adolescent — A young individual not yet an adult
Discussion questions
- Why is the prefrontal cortex important for attention?
- Why might reducing the gene forms help adolescent mice but not adults?
- Do you think these findings could help people with attention problems? Why or why not?
Related articles
More brain activity in OCD during a sequence task
A Brown University study found that people with obsessive-compulsive disorder (OCD) show extra brain activity when doing a demanding sequence task in an MRI. The findings point to new brain targets that might improve transcranial magnetic stimulation (TMS) treatment.
January 2025 Los Angeles wildfires and the rise in virtual health visits
A study of 3.7 million Kaiser Permanente members found that the January 2025 Los Angeles wildfires caused large increases in virtual care, especially for respiratory and cardiovascular symptoms, and also raised other outpatient visits.
New training method helps models do long multiplication
Researchers studied why modern language models fail at long multiplication and compared standard fine-tuning with an Implicit Chain of Thought (ICoT) method. ICoT models learned to store intermediate results and reached perfect accuracy.