Health and Disease

The Plague in the Modern Day

By Michelle Li

Published 8:21 EST, Mon August 30, 2021

Introduction and History

The plague is a disease caused by the bacterium Yersinia pestis. The Black Death, one of the most notable pandemics in history, was a result of the transmission of Y. pestis from rats and fleas to humans, making it a zoonotic disease (Frey, “Plague”). This transmission often occurred through flea bites or contact with the body fluids of an infected animal. The plague takes on three different forms depending on the affected area of the body: bubonic (affecting the lymph nodes), pneumonic (affecting the lungs), and septicemic (affecting the blood). The Black Death is characterized by both the swelling of lymph nodes, which turn black as a result of bubonic plague, and the blackening of skin, resulting from septicemic plague (Frey, “Plague”). An estimated 75 to 200 million lives were lost before the end of the Black Death (Frey, “Bubonic Plague”). Today, the number of cases of (and fatalities from) the plague is not even remotely close to the numbers seen during the Black Death; the recorded numbers are not zero, however.

Plague in the Modern Day

Cases of the plague, although much less frequent, still exist today. A majority of present-day plague cases occur in developing countries. In fact, the plague is still endemic to (regularly found in) Madagascar, Peru, and the Democratic Republic of the Congo (Frey, “Bubonic Plague”). Madagascar reports more plague cases than any other single country, averaging 200 to 400 cases every year (Hardman).

A portion of cases also occurs in the United States. The Centers for Disease Control and Prevention (CDC) reports between 1 and 17 cases of the plague each year in the United States (with an average of 7 reported cases per year). Plague in the United States occurs in rural, western areas; northern New Mexico, northern Arizona, southern Colorado, California, southern Oregon, and western Nevada are the most affected regions. 80% of these U.S. cases are in bubonic form (“Plague in the United States”).

Figure 1: This figure shows a world map of plague cases reported to the World Health Organization (WHO) by country between 2000 and 2009 (Centers for Disease Control and Prevention)

There are about 5,000 cases of the plague reported to the World Health Organization (WHO) each year worldwide, and 95% of these cases occur in Africa. Interestingly, the only two continents that are plague-free are Australia and Antarctica (Frey, “Plague”). 

More “recent” plague outbreaks include an outbreak of pneumonic plague in Surat, India in 1994, where 876 cases of infection, including 52 deaths, were reported to the WHO (Frey, “Plague”). The most recent outbreak in Madagascar occurred in 2017 and resulted in 2,575 confirmed or probable cases of bubonic and pneumonic plague, including 221 deaths (World Health Organization 2017). Even more recently, the WHO began reporting on an ongoing outbreak of suspected pneumonic plague in the Democratic Republic of the Congo in January of 2020. As of May 2021, there were a total of 564 suspected pneumonic plague cases, including 43 deaths (World Health Organization 2021).


Today, the plague is treatable with antibiotics. 80% to 90% of patients with bubonic plague who receive rapid diagnosis and appropriate treatment will survive; the survival rates for septicemic plague and pneumonic plague are lower in comparison, at 75% and 50%, respectively (Frey, “Bubonic Plague”). If left untreated, however, each form of the plague is still fatal a majority of the time. Bubonic plague has a mortality rate of 60% to 70% in untreated cases, while untreated septicemic and pneumonic plague both have a mortality rate of 100%. Pneumonic plague, specifically, is 100% fatal if left untreated for 48 hours (Frey, “Bubonic Plague”).

Streptomycin, an antibiotic discovered in the 1940s, is one of the first-line treatments for plague (Hardman). Gentamicin, chloramphenicol, and tetracycline are alternatives that can also be administered. It is important to start antibiotic treatment as soon as possible (Cua).

The Future

Some scientists have also considered the association between climate and plague. The conditions following warmer weather in the spring and wet weather in the summer are beneficial for fleas and bacteria, which play key parts in the spread of plague (Hardman). Additionally, outbreaks of plague among local animals (called epizootics) most commonly occur after wet winters and cool summers; these epizootics can also affect humans (Frey, “Plague”). In the case of the plague outbreak in Surat, India, rainfall also influenced the spread of plague, as flooding increased contact with drowned, infected animals that were not disposed of (Frey, “Plague”). As with other climate-sensitive infectious diseases, the effects of global warming may increase the number of plague outbreaks in animals and, in turn, in human populations (Hardman).

Lastly, the animal reservoirs of the plague (the host animal populations in which the disease persists) make it impossible to eradicate (Frey, “Plague”). Rats play a large role in the spread of the plague, and controlling that spread—considering their sheer numbers and the limits of human capabilities—proves to be near impossible. Surveillance of animal populations and careful reporting of plague cases, however, are still important in preventing plague (Frey, “Plague”).


The plague is an infectious disease that is well known due to its connection to the Black Death, one of the most notable pandemics in history. While the occurrence of the plague has changed since medieval times, the high mortality rates of untreated cases remain, and the disease continues to affect different regions of the world. Considering its connection to climate change and its recent outbreaks, the plague is not a disease of the past but rather one that is still relevant to the modern world.

Michelle Li, Youth Medical Journal 2021


Centers for Disease Control and Prevention. “World Plague Map – 2000 to 2009 – CDC.” Wikimedia Commons, Accessed 30 June 2021. Map.

Cua, Arnold, and Rebecca J. Frey. “Plague.” The Gale Encyclopedia of Medicine, edited by Jacqueline L. Longe, 6th ed., vol. 7, Gale, 2020, pp. 4067-71. Gale Health and Wellness, Accessed 1 July 2021.

Frey, Rebecca J., PhD. “Bubonic Plague.” The Gale Encyclopedia of Emerging Diseases, edited by Deirdre S. Hiam, Gale, 2018, pp. 45-52. Gale Health and Wellness, Accessed 1 July 2021.

—. “Plague.” The Gale Encyclopedia of Public Health, edited by Brigham Narins, 2nd ed., vol. 2, Gale, 2020, pp. 847-51. Gale Health and Wellness, Accessed 1 July 2021.

Hardman, Lizabeth. “Plague Today and Tomorrow.” Plague, Lucent Books, 2010, pp. 74-87. Diseases & Disorders. Gale Health and Wellness, Accessed 1 July 2021.

“Plague in the United States.” Centers for Disease Control and Prevention, Accessed 30 June 2021.

World Health Organization. Weekly Bulletin on Outbreaks and Other Emergencies. 15 Dec. 2017. World Health Organization, Accessed 30 June 2021.

—. Weekly Bulletin on Outbreaks and Other Emergencies. 27 June 2021. World Health Organization, Accessed 30 June 2021.

Biomedical Research

Neonatal Immunology: Our Immune Systems in the Weeks After Birth

By Michelle Li

Published 12:08 AM EST, Sun July 4, 2021


Neonates are newborn infants that are four weeks old or younger. These first four weeks of an infant’s life are when the infant is at highest risk of dying. At this stage in life, neonates do not have fully developed immune systems and are more susceptible to different infections. Of the 5 million infant deaths that occur each year, 1.5 million are due to infections, making it important to understand the developing immune system of neonates (Tregoning). 

Part of understanding the immune systems of neonates is first understanding the transition from the sterile womb to an unsterile environment during birth. The fetal immune system is suppressed in the womb in order to limit interference with the mother’s immune system. While this provides stability before birth, the arrangement changes the moment the newborn enters the unsterile environment of the world. In addition to being newly exposed to bacteria, the previously suppressed immune system is antigenically inexperienced; it does not yet have experience responding to different pathogens, which increases the infant’s susceptibility to infections (“Development of the Immune System”). Therefore, after birth, neonates depend on “passive immunity” for protection as their own immune systems develop.

Passive Immunity

Neonates depend on antibodies from the mother for protection from different antigens. This is called “passive immunity,” as antibodies from the mother are passed down to the baby passively through the placenta, rather than being created by the infant itself. Most of the antibodies produced by the mother’s immune system cross the placenta during the third trimester, which ensures that there are high levels of antibodies after birth. This also explains the low levels of antibodies in premature babies; an early birth does not allow for the same amount of antibodies to be transferred, making premature newborns more vulnerable to infections compared to full-term newborns. Additionally, breastfeeding is another form of passive immunity that allows for the passing of antibodies to infants (“Development of the Immune System”).

Passive immunity only provides short-term protection for neonates. The antibodies transferred through the placenta or breast milk are generally immunoglobulin A or G (IgA or IgG). Some of these maternal antibodies protect against measles, mumps, rubella, and other diseases (“Immunity: Active, Passive, and Delayed”). The antibodies transferred passively from mother to child, whether through the placenta or breast milk, only provide protection for the first few months of the infant’s life. This allows the infant’s immune system to develop and start working while keeping the infant protected (“Development of the Immune System”).

The Immune System At This Time

Newborns have a limited quantity of phagocytic cells (types of white blood cells such as neutrophils and macrophages), which are important for innate immunity (the nonspecific immune response immediately after the appearance of an antigen). During an infection, the immune system’s response will be limited by the quantity of neutrophils and macrophages. As a result, the pathogen will commonly overtake the immune system, and the infant will require medical care (“Development of the Immune System”).

In addition, there is also adaptive immunity, the specific immune response that occurs after the innate immune system fails; it is the system that protects the body by remembering and destroying pathogens. As the newborn’s immune system is inexperienced, every pathogen is new, resulting in the immune response taking a longer time to develop. The fact that every pathogen is new also means that there are no memory immune responses, which affects antibody production (“Development of the Immune System”). The process of producing antibodies is less efficient in newborns compared to adults. Some B cell (a type of white blood cell) responses require T cells to produce antibodies. The interactions between T cells, which attack specific antigens, and antigen-presenting cells, which present antigens for recognition, are less effective and less stimulating in newborns. There are lower levels of cytokines (which regulate the immune response) produced by T cells. Furthermore, the levels of different types of T cells differ between newborns and adults. For instance, there are lower levels of cytotoxic T cells, which are responsible for killing virus-infected cells. These factors influence the levels of antibody production. For B cell responses that don’t involve T cells, B cells recognize the repeating proteins on the surface of a pathogen; this response is also reduced in newborns, resulting in increased susceptibility to bacteria (“Development of the Immune System”).


The reduced immune response of newborns affects the efficacy of vaccines, as there is reduced recognition of vaccine antigens as foreign. Therefore, there are also fewer protective memory responses induced by vaccines, making vaccines themselves less effective in newborns compared to adults with developed immune systems (Tregoning). However, this does not mean that early vaccinations are ineffective. They still aid in protecting against diseases, and they become more effective over time as the newborn’s immune system develops (“Development of the Immune System”). 

In fact, as the protection from passive immunity fades over a number of months, vaccinations are required to maintain protection against different antigens. The fading of maternal antibodies is also why certain vaccinations are required after set periods of time; for instance, the MMR vaccine is required after 1 year of life (“Immunity: Active, Passive, and Delayed”).


The immune systems of neonates are, unsurprisingly, different and less developed than those of adults. As a result, newborns depend on passive immunity (antibodies passed down through the placenta or breast feeding) for protection against infections. The processes in the immune system itself are also different in newborns, which affects the immune system’s capabilities. The increased susceptibility to infections in newborns makes it all the more important to understand the neonatal immune system.

Michelle Li, Youth Medical Journal 2021


“Development of the Immune System.” Children’s Hospital of Philadelphia, Accessed 31 May 2021.

“Immunity: Active, Passive, and Delayed.” World of Microbiology and Immunology, edited by Brenda Wilmoth Lerner and K. Lee Lerner, Gale, 2007. Gale in Context: Science, Accessed 31 May 2021.

Tregoning, John. “Neonatal Immunology.” British Society for Immunology, Accessed 31 May 2021.

Health and Disease

Dyslexia: The Reading Disability

By Michelle Li

Published 1:57 PM EST, Fri May 7, 2021


Dyslexia is a learning disability characterized by difficulty with reading, writing, spelling, and other language skills. It was first described in 1887, when German physician Rudolf Berlin published a case study on a young boy who had normal intelligence but faced difficulties in reading and writing (Nelson). A few years later, in 1896, the first English-language case study of dyslexia was published by the British doctor W. Pringle Morgan. Similar to the 1887 case study, Morgan detailed a 14-year-old boy who had normal intellectual capabilities but had not learned to read. Before the term “dyslexia” came into widespread use, the condition was referred to by Morgan and others as “word-blindness”; under either name, its key characteristic was difficulty reading (Nelson).

Dyslexia is believed to have a hereditary component and is most commonly identified in the early years through symptoms related to difficulties in reading or other language skills. Following a diagnosis, options to treat dyslexia through special education also exist.


Dyslexia is believed to be a hereditary condition, as 40% of boys and 20% of girls with a dyslexic parent also develop the disorder. Four genes have been found to be connected to dyslexia, but no specific cause has been identified for the disorder (Nelson). Some studies involving positron emission tomography or functional magnetic resonance imaging have shown that there is lower activity in the left inferior parietal cortex, left inferior frontal gyrus, the left inferior parietal lobule, and the left middle temporal gyrus of the brain in dyslexic children when they are given reading or word tasks to complete, highlighting the connection between dyslexia and certain areas of the brain (Nelson). 

Symptoms and Diagnoses

The symptoms of dyslexia appear through affected reading, writing, listening, and speaking abilities. Some symptoms include slow reading speed, difficulty reading and spelling words, omission of words while reading, poor reading comprehension, reversal of words or letters, confusion between similar letters, delayed speech, and difficulty transferring information across modes—such as reading out loud or writing down thoughts or speech (Nelson; Frey).

When these symptoms create problems in school or work settings, individuals are referred for dyslexia testing. As instruction in reading begins in kindergarten or first grade in the U.S., it is rare for dyslexia to be diagnosed before the age of five or six (Frey). Children are generally diagnosed with dyslexia when their reading level is more than two levels below the expected average for their age or education (Nelson). Other visual, hearing, speech, intelligence, and word or letter recognition tests are also conducted to rule out disorders that could impair vision or hearing and to measure a child’s capabilities; children are also evaluated psychologically to rule out depression or anxiety as a cause for the learning impairment (Nelson; Frey). Generally, reading problems must substantially interfere with school or daily life, as outlined by the APA’s diagnostic criteria for dyslexia (Frey).


Appropriate and early intervention through special education has been proven effective in treating dyslexia. Under the Individuals with Disabilities Education Act, children with dyslexia are entitled to individualized education plans (IEPs) that address the learning disability (Nelson; Frey). The IEP defines specific problems and the associated learning objectives, usually through a cross-disciplinary approach. The successful approach developed by Samuel Torrey Orton in the 1920s has three core components: a sound/symbol-based component, where words are broken down into letters and associated sounds; a multisensory component, where visual, auditory, and kinesthetic connections are strengthened; and a highly structured component, which involves working up from letters to words to sentences with repetitive practice (Nelson). There are also a number of techniques that reading specialists may test out to see which are most effective. Generally, dyslexia can be treated with appropriate intervention; the earlier the diagnosis and intervention, the greater the likelihood of improved reading abilities and less interference in education.


As the most common learning disability in the United States, dyslexia interferes in the education and lives of many individuals each year. While a clear cause has not been identified, a hereditary component and low activity in certain areas of the brain have been linked to the condition. Symptoms are related to impairments in reading, writing, spelling, and speech; evaluations of these impairments are used to diagnose dyslexia. Early treatment through specialized education plans has been proven successful in improving reading and related abilities, providing hope for dyslexic individuals.

Michelle Li, Youth Medical Journal 2021


Frey, Rebecca J., and Jack Lasky. “Developmental Reading Disorder.” The Gale Encyclopedia of Children’s Health: Infancy through Adolescence, edited by Jacqueline L. Longe, 4th ed., vol. 2, Gale, 2021, pp. 846-50. Gale Health and Wellness, Accessed 19 Apr. 2021.

Nelson, Katy, and Jack Lasky. “Dyslexia.” The Gale Encyclopedia of Children’s Health: Infancy through Adolescence, edited by Jacqueline L. Longe, 4th ed., vol. 2, Gale, 2021, pp. 901-04. Gale Health and Wellness, Accessed 19 Apr. 2021.

Health and Disease

Sudden Infant Death Syndrome (SIDS): An Unexplainable Nightmare For New Parents

By Michelle Li

Published 7:42 PM EST, Tues April 13, 2021


SIDS is the leading cause of death for infants 1 month to 12 months in age in the U.S. (Marshall). As the name suggests, SIDS is the sudden and unexplained death of a seemingly healthy infant under the age of 1. It most commonly occurs overnight while the infant is sleeping, so it is also known as crib death or cot death. Some time after being put to sleep, the infant is discovered lifeless and limp despite showing no previous health concerns. An investigation is then conducted; when no clear cause can be identified, a diagnosis of SIDS is made (Odle).

SIDS is responsible for the deaths of 2,500 to 7,000 infants a year. 80% of these deaths occur in infants younger than 5 months old, and, with 60-70% of cases being male, SIDS disproportionately affects boys (“Sudden Infant Death Syndrome”). SIDS deaths are more frequent during the winter and early spring. Interestingly, babies whose siblings died of SIDS are at a slightly higher risk of the same syndrome, despite the fact that it is neither contagious nor hereditary (Odle).


SIDS falls under the broader category of SUIDS, Sudden Unexpected Infant Death Syndrome. For cases labeled as SUIDS, however, the cause of death can generally be identified; it is usually an environmental factor such as a high room temperature or being placed to sleep with a pillow. With SIDS, by contrast, the cause of death cannot be identified even after thorough investigations, autopsies, examinations of the death scene, and reviews of clinical history. Diagnosis of SIDS is a process of exclusion in which other causes of death must be ruled out (Odle).

SIDS only became officially recognized as a cause of death in 1979, but sudden deaths in babies, with or without explanation, have been reported for hundreds of years. In the 1700s and 1800s, SIDS deaths may have been blamed on mothers rolling onto babies while sleeping with them. In the 1900s, co-sleeping between infants and parents became rarer, and speculative reasoning behind the sudden deaths shifted to the dressing of babies in clothes that were too warm during the night (Marshall). This shows how the speculated causes of SIDS have changed over time.

Speculation About Causes and Current Understanding of SIDS

Many have speculated about the causes of SIDS, and while a number of possible factors have been pointed out, there is still no clearly defined cause of SIDS to this day. One theory is that a genetic defect in an enzyme causes the brain to be deprived of energy, resulting in a coma (Marshall). Another theory focuses on the connection between abnormalities in breathing patterns/heart rhythm and SIDS (Marshall; Odle). Others speculate that a combination of factors and conditions results in SIDS (Odle).

There are also factors that increase the risk of SIDS, including the mother smoking during pregnancy, using drugs or alcohol, being underweight, having children less than one year apart, having children in her teen years, and being obese. Babies who are born prematurely, weigh less than 4 pounds, are not breastfed, or are part of a set of twins, triplets, or quadruplets are also at an increased risk of SIDS (Odle).

Additionally, the infant’s sleeping position seems to play a role in deaths diagnosed as SIDS. Studies have shown that placing a baby on its stomach or side to sleep increases the risk of SIDS, as it may re-breathe its own carbon dioxide and be unable to turn to get more oxygen (“Sudden Infant Death Syndrome”; Odle). This risk is reduced by putting a baby on its back to sleep, in the supine sleeping position. Another study also shows that the use of pacifiers may protect against SIDS, reducing the risk by 90% (“Sudden Infant Death Syndrome”).

Awareness Campaigns

New Zealand, Australia, and Norway saw campaigns aimed at raising awareness and reducing the risk factors of SIDS. These campaigns resulted in a decrease in SIDS deaths of as much as 50% (Odle). The U.S. saw a similar effort, the “Safe to Sleep” campaign, which focused on encouraging parents to put babies on their backs when sleeping, as recommended by the American Academy of Pediatrics. The “Safe to Sleep” campaign also saw significant results, reducing the number of SIDS deaths by 20 to 35% in the 1990s. Results from other countries have also shown that a 5 to 10% decrease in babies sleeping on their stomachs results in SIDS deaths decreasing by 70 to 80% (Odle). Although the direct causes of SIDS are still unknown, it seems that sleeping position is an important risk factor that can be countered by awareness campaigns.

Disparities in SIDS Deaths

It is also important to note that there are racial and ethnic disparities in SIDS deaths. African American, Native American, and Alaska Native babies are all at a higher risk of SIDS death than Caucasian babies. African American babies are twice as likely to die from SIDS, while Native American and Alaska Native babies are three times as likely (Odle). Like the causes of SIDS, these disparities are also unexplained. They may be a result of health disparities and cultural differences that make populations more prone to the risk factors of SIDS. Another explanation that has been offered is that the efforts of awareness campaigns in different countries are not reaching certain communities.


Sudden Infant Death Syndrome (SIDS) is a cause for concern for many as the leading cause of death for infants who are 1 month to 12 months in age in the U.S. Its unexplained causes and unequal impact on different communities make it an even greater cause for concern. Thankfully, awareness campaigns fighting to reduce the risk factors of SIDS seem to be effective, and there is hope for measures to prevent SIDS in the future as more developments are made to understand SIDS.

Michelle Li, Youth Medical Journal 2021


Marshall, Liz. “Sudden Infant Death Syndrome (SIDS).” The Gale Encyclopedia of Science: S, edited by Katherine H. Nemeh and Jacqueline L. Longe, 6th ed., vol. 7, Gale, 2021, pp. 4314-16. Gale in Context: Science, Accessed 28 Mar. 2021.

Odle, Teresa, and Rebecca J. Frey. “Sudden Infant Death Syndrome.” The Gale Encyclopedia of Medicine, edited by Jacqueline L. Longe, 6th ed., vol. 8, Gale, 2020, pp. 4954-58. Gale in Context: Science, Accessed 28 Mar. 2021.

“Sudden Infant Death Syndrome.” World of Health, Gale, 2007. Gale in Context: Science, Accessed 28 Mar. 2021.

Health and Disease

Muscular Dystrophy (MD): About the Conditions Centered Around Muscle Weakness

By Michelle Li

Published 10:41 PM EST, Wed March 10, 2021


Muscular dystrophy (MD) is a group of inherited conditions that cause muscle weakness and degeneration, with symptoms that gradually worsen over time (Quercia). Age of onset varies depending on the type of muscular dystrophy, ranging from childhood to the later stages of life. The severity, rate of progression, and pattern of affected muscles also vary with the type of MD. Many individuals with muscular dystrophy lose the ability to walk and, unfortunately, live shorter lives than average due to the condition (“Muscular Dystrophy: Hope Through Research”). There are no cures for any forms of muscular dystrophy, but treatments such as physical therapy and braces aim to improve muscle function and slow deterioration.

The first reported case of muscular dystrophy, in the 1830s, was of two brothers who experienced progressive muscle weakness beginning around age 10. They developed general weakness and muscle damage, and it was observed that the damaged muscle tissue was replaced with fat and connective tissue. At the time, these were mistakenly thought to be symptoms of tuberculosis. In the years that followed, more cases of boys developing muscle weakness and dying at an early age were reported. As more cases were studied and observed, the different types of muscular dystrophy were classified, and the condition’s genetic link was discovered (“Muscular Dystrophy: Hope Through Research”).

Types of Muscular Dystrophy

Duchenne muscular dystrophy (DMD) was named after Guillaume Duchenne, a French neurologist who gave a detailed account of 13 boys with the disorder in the 1860s (“Muscular Dystrophy: Hope Through Research”). Duchenne muscular dystrophy is the most common—accounting for 50% of MD cases—and the most severe type of muscular dystrophy. It affects one in every 3,500 males (Bosworth); it most commonly affects young boys and is much rarer in females. Symptoms begin to appear in the early toddler years and become apparent through difficulty walking, an affected gait, loss of reflexes, frequent falls, etc. Progressive muscle weakening begins in the legs before spreading to the upper arms, resulting in the loss of the ability to walk and often the use of a wheelchair by early adolescence. Those with DMD also experience muscle wasting, a decrease in muscle mass and strength due to lack of physical activity. Eventually, the cardiac muscles are weakened, which leads to breathing problems and fatal infections. The average life expectancy of individuals with DMD is the late teens or early twenties, but this has improved significantly, with some living into their 30s and 40s (Bosworth).

Becker muscular dystrophy shares some similarities with Duchenne MD but is less severe. It affects one in every 30,000 males, with most experiencing symptoms between 11 and 25 years old (Bosworth). Similar to Duchenne muscular dystrophy, a symmetrical progression of muscle weakness is usually noticed in the upper arms, legs, and pelvis. The rate of progression is slower than that of DMD, so some retain the ability to walk until their mid-thirties or never need to use a wheelchair. Cardiac complications are also often fatal for those with Becker MD, although the average life expectancy is the mid-forties (Bosworth).

Emery-Dreifuss muscular dystrophy also develops in children, most commonly boys, at a young age. Compared to those with Duchenne muscular dystrophy, they experience slower and less severe muscle weakness in their arms and legs. Prior to significant muscle weakness, those with Emery-Dreifuss MD also experience contractures—tightening of muscles that prevents normal movement—in the spine, neck, elbows, knees, and ankles; this results in locked elbows or rigid spines (“Muscular Dystrophy: Hope Through Research”).

Limb-girdle muscular dystrophy affects both males and females, with symptoms appearing in late childhood to early adulthood. Individuals experience muscle weakness and wasting of the muscles around the hip and shoulder areas, known as the limb-girdle area; this spreads to the neck and legs. They may have difficulty rising from chairs or have an affected gait. Different types of limb-girdle MD have also been identified, with different rates of progression and severity ranging from symptoms that develop slowly and interfere minimally with life to more severe muscle damage and an inability to walk (“Muscular Dystrophy: Hope Through Research”).

Facioscapulohumeral muscular dystrophy also develops in both males and females, beginning in late childhood to early adulthood. It affects about one in every 20,000 people (Quercia). This type of muscular dystrophy causes asymmetric muscle weakness in the face, shoulders, and upper arms. Muscles around the eyes and ears are commonly weakened before the shoulders and upper arms. This can also affect an individual’s appearance through slanted shoulders, a crooked smile, flattened facial features, etc. (“Muscular Dystrophy: Hope Through Research”).

Myotonic dystrophy, the most common form of muscular dystrophy in adults, causes muscle weakness in the face, feet, and hands in both males and females. In addition to progressive weakness, people with myotonic dystrophy also experience an inability to relax muscles after a contraction. Symptoms can appear from birth to adulthood (“Muscular Dystrophy: Hope Through Research”).

Oculopharyngeal muscular dystrophy affects those in their forties or fifties. The first symptoms are drooping eyelids and weakness in the muscles of the face and throat; the tongue may also be affected. These symptoms result in issues with vision (such as double vision), difficulty swallowing, and changes in an individual’s voice (“Muscular Dystrophy: Hope Through Research”).

Distal muscular dystrophy is characterized by weakness in the muscles of the forearms, hands, lower legs, and feet (the distal muscles). While this group of dystrophies is less severe and progresses slowly, it results in difficulty extending the fingers and, as in other muscular dystrophies, walking (“Muscular Dystrophy: Hope Through Research”).

Lastly, congenital muscular dystrophy is defined by muscle weakness present at birth. Failure to meet motor-function and muscle-control milestones is usually the first sign of this muscular dystrophy. Those with congenital muscular dystrophy have trouble sitting or standing without support and may never learn to walk (“Muscular Dystrophy: Hope Through Research”).

Michelle Li, Youth Medical Journal 2021


Bosworth, Michelle Q., MS, and Rebecca J. Frey, PhD. “Duchenne and Becker Muscular Dystrophy.” The Gale Encyclopedia of Genetic Disorders, edited by Tracie Moy and Laura Avery, 4th ed., vol. 1, Gale, 2016, pp. 579-85. Gale Health and Wellness, Accessed 14 Feb. 2021.

“Muscular Dystrophy: Hope Through Research.” National Institute of Neurological Disorders and Stroke, Accessed 14 Feb. 2021.

Quercia, Nada, and Karl Finley. “Muscular Dystrophy.” The Gale Encyclopedia of Medicine, edited by Jacqueline L. Longe, 6th ed., vol. 6, Gale, 2020, pp. 3510-18. Gale in Context: Science, Accessed 14 Feb. 2021.

Biomedical Research

Mice in Biomedical Research


Mice constantly appear in the headlines of news articles announcing new findings in biomedical research. Laboratory mice have allowed researchers to study cancer, genetic conditions, and all sorts of diseases. But why mice? Why are mice so involved in medical research? What do these creatures, which seemingly bear little connection to humans, offer researchers?

Mouse genetics research began in 1902, when Lucien Cuénot used the coat colors of mice to show that Mendel’s laws of inheritance, originally demonstrated in pea plants, also applied to mammals. Mice came to be seen as a more ideal research animal after Clarence Little created the first fully inbred strain of mice, DBA, which provided genetically identical laboratory mice for experimental use (“How did the lab mouse come to be?”). The mouse genome was sequenced in 2002, allowing for more research into the connections between mouse and human genes as well as health and disease (“2002: Mouse Genome Sequenced”). Today, the most common species of laboratory mouse is the house mouse, Mus musculus.

Why Mice?

Mice serve as ideal animal models for genetics and biomedical research for a number of reasons. For one, their DNA is similar to that of humans. The protein-coding regions of mouse and human DNA, which are important for function, are 85% identical. These genes are evolutionarily conserved, ranging from 60% to 99% identical (Parente).

Additionally, the bodies of mice and humans undergo similar processes and react similarly to diseases. Since the genes that mice and humans share have similar functions, mice have the same organs, namely the heart, brain, lungs, and kidneys. This also translates to similar bodily systems, such as the circulatory, reproductive, digestive, hormonal, and nervous systems (“Why are mice considered excellent models for humans?”). As such, mice are susceptible to a number of the same diseases as humans; for instance, they naturally develop conditions such as diabetes, cancer, and high blood pressure. When such conditions are linked to genetics, the DNA that mice and humans share provides opportunities to study them. Although these parallels are not perfect, they allow researchers to gain insight into the development of both humans and diseases.

Lastly, mice are convenient animal models for researchers. They reproduce quickly, sometimes producing offspring at around nine weeks of age, and, as small mammals, they produce large numbers of offspring. These factors make them economical to maintain and study. In addition, one mouse year is roughly equivalent to 30 human years; this means the entire human life cycle can be replicated in a few mouse years, allowing researchers to study aging and diseases over time (Parente).

The mouse genome can also be easily edited to study a specific gene or disease. Knockout and knock-in mice have been used to study the role of specific genes through gene manipulation. Using a DNA-repair process called homologous recombination, artificial pieces of DNA can be introduced into the nuclei of mouse embryonic stem cells (cells from young mouse embryos). The cells carrying the manipulated DNA are then injected into early embryos, which develop in a surrogate female mouse. This process allows a targeted gene to be neutralized, as the artificial piece of DNA replaces, or “knocks out,” the original. Genes or mutant genes can also be introduced to produce knock-in mice that carry a desired gene (Parente). Researchers use these mice to study the effects of losing a specific gene or introducing a mutant one. This also allows researchers to introduce genes that make mice susceptible to a specific condition, creating opportunities to study a particular disease.

Xenografting is another mouse-model technique, in which immunodeficient mice (mice with defective immune systems) are transplanted with cells from another species. Human cancer cells or tumor tissues are often transplanted, allowing researchers to study the effect of an anticancer drug on the mice and their tumors (Parente). Transgenic mice, on the other hand, carry inserted genes from another source, often humans; this allows a human gene that does not naturally occur in mice to be expressed in them. For instance, inserting the human growth hormone gene using this technique resulted in unusually large mice. Transgenic research has led to a better understanding of genetic regulation and a number of diseases (Parente).

Ethical Concerns

The role of mice in biomedical research raises ethical concerns over animal health and welfare. Lab mice should be housed in see-through plastic cages with bedding and fed a specific nutritional diet. It is also important to provide them with enrichment and to house them with other mice in groups or pairs, as they are social animals (“Mouse”). There are existing guidelines and policies regarding animal experimentation, most of which strictly regulate the use of laboratory animals. One widely used principle is that of the 3Rs (Replacement, Reduction, and Refinement), which aims to provide a framework for more humane animal studies. When possible, technologies or approaches that can fully or partially replace animals in experiments should be used. The number of animals used per experiment should be minimized while still allowing the study to be conducted. Lastly, methods that minimize the pain, suffering, distress, or harm of research animals should be adopted, without compromising the results (“The 3Rs”). Laboratory mice are stand-ins for humans in the study of disease; it is important that their welfare is ensured, as they play an important role in furthering scientific knowledge.


Mice may not seem like ideal test subjects in research studies, but they bear a surprising resemblance to humans in their physiology and DNA. In this way, mice provide researchers with a convenient window into human genetics and diseases. Despite the ethical concerns of animal experimentation, it is undeniable that mice play a central role in biomedical research that is vital to furthering our understanding of human health.

Michelle Li, Youth Medical Journal 2021


“The 3Rs.” National Center for the Replacement Reduction and Refinement of Animals in Research, Accessed 31 Jan. 2021.

“How did the lab mouse come to be?” The Jackson Laboratory, Accessed 31 Jan. 2021.

“Mouse.” Understanding Animal Research, Accessed 31 Jan. 2021.

Parente, Matilde. “Mouse Model.” Genetics, 2nd ed., vol. 3, Gale, 2018, pp. 127-32. Gale in Context: Science, Accessed 31 Jan. 2021.

“2002: Mouse Genome Sequenced.” National Human Genome Research Institute, Accessed 31 Jan. 2021.

“Why are mice considered excellent models for humans?” The Jackson Laboratory, Accessed 31 Jan. 2021.

Biomedical Research

Color Blindness: Not Just Black and White


Color blindness, also known as color vision deficiency, is the inability to differentiate between certain colors. It can range from mild to severe; while some see only black, white, and shades of grey, most experience a milder form of color blindness in which they can see a limited range of hues (Fallon). Depending on the severity of a person’s color blindness, they may have difficulty distinguishing the colors of a traffic light or determining the ripeness of fruit. It may also bar them from professions in which perceiving color is important, such as being a pilot (“Color Blindness,” National Eye Institute). These impairments are not detrimental, however, and many have adapted around them. As there are different types and varying degrees of color blindness, color blind people do not all see the world in the same way and are each impacted differently. Inherited genes are the main cause of color blindness. While there is no cure for color blindness, there are methods to diagnose the condition and help colorblind individuals adapt to it.

Types of Color Blindness

Red-green color blindness is the most common type of color blindness. It affects 7% of the male population and 0.4% of the female population in the U.S. (Fallon). Those with red-green color blindness mainly have difficulty telling the difference between red and green. Deuteranomaly, which makes shades of green look more red, is the most common form of red-green color blindness; another form, protanomaly, makes shades of red look more green. Both are mild forms of color blindness. Protanopia and deuteranopia, however, are more severe forms that result in a complete inability to differentiate between reds and greens (“Types of Color Blindness”).

Blue-yellow color blindness is less common, occurring in fewer than 1 in 10,000 people around the world. Despite its name, people with blue-yellow color blindness confuse blue with green and yellow with red. Those with tritanomaly have difficulty distinguishing between the mentioned colors, while those with tritanopia cannot distinguish between the colors at all (“Types of Color Blindness”).

Figure 1: This figure shows how the views of individuals with different types of color blindness compare with that of normal color vision (Simulation of Different Color Deficiencies, Color Blindness).

Achromatopsia, in which someone sees only black, white, and shades of grey, is the rarest type of color blindness, despite being what many imagine color blindness to be. It occurs in about 1 in 30,000 people; those with achromatopsia cannot see or distinguish between any colors (“Types of Color Blindness”).


The type of color blindness depends on which photoreceptor cells in the eyes are affected. Photoreceptor cells called cones are each responsible for sensing certain wavelengths of light, distinguishing red, green, or blue light; when a type of cone is absent or altered, a person’s perception of color is affected (Turbert). For instance, the absence of L cones, which are sensitive to red light, corresponds to protanopia and results in the inability to perceive the color red. Milder forms of color blindness result when cones are present but faulty.

Figure 2: This diagram shows the effects of the absence of different cones on color vision (“Types of Colour Blindness”).

Color blindness is also linked to genetics and is passed down through genes. Mutations in certain genes cause the absence or alteration of cones, resulting in color blindness. This hereditary connection can also be seen in the disparity between men and women with red-green color blindness. The genes connected with red-green color blindness are on the X chromosome, of which males have one copy and females have two. Only one X chromosome carrying the gene is needed for a male to be red-green colorblind; for females, both X chromosomes must carry it (“Causes of Color Blindness”). This explains the drastic difference between the 7% of men and 0.4% of women in the U.S. who are red-green color blind, and it reflects the hereditary nature of color blindness.
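The male–female disparity follows directly from this inheritance pattern. As a rough illustration (a simplification that assumes a single X-linked recessive gene and random mating, whereas red-green color blindness actually involves several genes), the expected fraction of affected females is approximately the square of the fraction of affected males:

```python
# Rough estimate for an X-linked recessive trait.
# Males have one X chromosome, so male prevalence approximates the
# frequency q of the color-blindness allele. Females need a copy on
# both X chromosomes, so their expected prevalence is roughly q**2.

male_prevalence = 0.07              # ~7% of U.S. men are red-green color blind
allele_frequency = male_prevalence  # q
female_prevalence = allele_frequency ** 2

print(f"Expected female prevalence: {female_prevalence:.2%}")
# About 0.49%, close to the reported 0.4% of U.S. women
```

The small gap between the predicted 0.49% and the reported 0.4% is expected, since the real trait involves multiple genes and the model above is only a first approximation.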

Color blindness can also be caused by diseases or injuries. Alzheimer’s disease, glaucoma, and leukemia are some chronic illnesses that may lead to color blindness. Damage to the retina of the eye or the brain can also result in acquired color blindness (Fallon). 

Diagnosis and Treatment

Specific tests exist to identify color blindness. One example is the Ishihara Test, which is made up of eight plates. Each plate contains colored dots, and in the center is a number made up of dots of a different color than the background. Someone with normal color vision can identify the number in the center, while an individual with red-green or blue-yellow color blindness may see a different number, as they perceive the colors of the dots, and thus the number they outline, differently. The Ishihara Test is one of the better known of many color blindness tests that help distinguish and diagnose those with color blindness (Fallon).

Figure 3: This figure shows how individuals with different types of color blindness may perceive a plate from the Ishihara Test.

There is no cure for color blindness, but many have adapted to their color blindness by using color cues and other details. Eyeglasses that correct color blindness have also been developed. These eyeglasses filter out wavelengths of light, preventing the overlap of wavelengths that causes colors to look similar or be hard to differentiate for color blind individuals (Fallon).


Despite the common misconception that color blind people see only black and white, the majority of those with color blindness have a different type of color deficiency, with varying degrees of severity. Many have difficulty distinguishing between certain colors rather than experiencing a complete loss of color vision. The main cause of color blindness is inherited genes, and while there is no cure for the condition, many eventually adapt to their color blindness. Another option lies in color-correcting eyeglasses. These eyeglasses are proof of the understanding that has been gained of color vision deficiency, an understanding that will hopefully become widespread and counter misconceptions about color blindness.

Michelle Li, Youth Medical Journal 2021


“Causes of Color Blindness.” National Eye Institute, Accessed 27 Dec. 2020.

“Color Blindness.” National Eye Institute, Accessed 27 Dec. 2020.

Fallon, L. Fleming, Jr., and Monique Laberge. “Color Blindness.” The Gale Encyclopedia of Medicine, edited by Jacqueline L. Longe, 6th ed., vol. 2, Gale, 2020, pp. 1269-72. Gale Health and Wellness, Accessed 27 Dec. 2020.

Ishihara Test. Wikimedia Commons, Accessed 27 Dec. 2020.

Simulation of Different Color Deficiencies, Color Blindness. Wikimedia Commons, Accessed 27 Dec. 2020.

Turbert, David. “What Is Color Blindness?” American Academy of Ophthalmology, 6 Sept. 2019, Accessed 27 Dec. 2020.

“Types of Color Blindness.” National Eye Institute, Accessed 27 Dec. 2020.

“Types of Colour Blindness.” EdPlace, Accessed 27 Dec. 2020.

Health and Disease

Cochlear Implants: A Success Story For Auditory Prosthesis


A cochlear implant is an electronic device that is surgically implanted to treat patients who are profoundly deaf. The first cochlear implant devices, invented by William House and John Doyle, were given to patients in 1961 (Turkington). Despite mixed results, the devices were the first successful auditory prostheses, and they presented an interesting possibility: the ability to restore the human sense of hearing through electrical stimulation. Modern cochlear implant devices have evolved considerably from those first used in 1961, and that possibility has become a reality. With technological advancements, cochlear implants have given patients more viable hearing. According to the National Institute on Deafness and Other Communication Disorders, 96,000 people in the U.S. and 324,200 people worldwide had received cochlear implants as of 2017 (Turkington).

A cochlear implant is a viable option for patients who have sensorineural hearing loss—hearing loss due to damage to the sensory hair cells in a part of the inner ear called the cochlea. Movement of hair cells stimulates nerve cells (ganglion cells), which carry an electrical current to the auditory nerve that, in turn, sends the signals to the brain. In sensorineural hearing loss, however, sounds do not make it to the auditory nerve or to the brain, where the electrical signals are interpreted as sound, due to damaged hair cells. Cochlear implants combat this issue, and, while they cannot fully restore hearing, they allow patients to sufficiently hear and understand speech (Turkington).

How do cochlear implants work?

Figure: Diagram showing the external and internal parts of a cochlear implant (Diagram of Cochlear Implant).

A cochlear implant contains external parts (worn on the outside of the ear) and internal parts, which are surgically implanted in the patient, underneath the skin. The external parts (a microphone, speech processor, and transmitter) are responsible for collecting and sending sounds to the internal parts. The microphone receives sound from the environment, which is converted into a digital signal by the speech processor; this signal includes information about the sound received, such as pitch, loudness, and timing. The digital signals are converted into FM radio signals and sent to the internal parts of the cochlear implant by the transmitter (Turkington).

The signals reach the internal parts, which are implanted through an outpatient procedure for adult and adolescent patients and a one-night hospital stay for children. The area behind the patient’s ear is first shaved or the hair is sterilized. An incision is then made to expose the mastoid bone, allowing a device called the receiver-stimulator to be placed in a depression in the bone before the incision is sutured (stitched). The receiver-stimulator receives the FM radio signals sent by the transmitter and converts them into electrical signals. The next step in the surgical procedure is threading the electrode array into the cochlea so that the electrodes are positioned close to the ganglion cells. The receiver-stimulator and the electrodes in the cochlea are connected by a wire. The electrodes take on the job normally performed by hair cells: stimulating the ganglion cells that transmit signals to the auditory nerve. This restored nervous response to sound allows the electrical signal to reach the brain (Turkington). The resulting sounds are more artificial and mechanical than natural hearing, but they allow for improved sound detection and speech understanding.


The results of cochlear implants vary, but the most optimal result is a near normal ability to understand speech. They improve a patient’s ability to talk on the phone, lip-read, watch TV with facial cues, and listen to music. They also help differentiate between the type and volume of sounds; patients may better perceive loud, medium, and soft sounds, such as the slamming of doors, ringing of phones, or rustling of leaves. Additionally, cochlear implants can improve a patient’s speech by regulating it so that it is easier to understand (Turkington). 


Cochlear implants have risks associated with the surgical implant procedure, as surgery with general anesthesia is needed for the implant. Some of these risks are injury to the facial nerve, cerebrospinal and perilymph fluid leak, meningitis, infection, and ringing and numbness around the ear, among others. There is also the risk of the implant failing if it’s rejected by the body, which results in the need for another surgical procedure and, possibly, localized inflammation (“Benefits and Risks of Cochlear Implants”). However, the failure rates of cochlear implants are generally low; around 0.2% of patients reject the implant, and 0.5% require reimplantation (“Cochlear Implants”).


For those with sensorineural hearing loss, cochlear implants are a viable option that restores a sense of hearing. As with any other surgical procedure, there are a number of risks, but cochlear implantation is generally one of the safer prosthetic procedures. It holds the potential to restore near-normal hearing, with benefits in speech and sound perception. While cochlear implants produce “mechanical” sounds and cannot fully restore a patient’s hearing, they represent the progress and potential of medical prostheses in restoring the human body and even the human senses.

Michelle Li, Youth Medical Journal 2020


“Benefits and Risks of Cochlear Implants.” U.S. Food and Drug Administration, Accessed 29 Nov. 2020.

“Cochlear Implants.” Hearing Link, Accessed 24 Nov. 2020.

Diagram of Cochlear Implant. Mayo Clinic, Accessed 29 Nov. 2020.

“Modern Cochlear Implant.” Albert and Mary Lasker Foundation, Accessed 29 Nov. 2020.

Turkington, Carol A., and Josephine S. Campbell. “Cochlear Implants.” The Gale Encyclopedia of Surgery and Medical Tests, edited by Deirdre S. Hiam, 4th ed., vol. 1, Gale, 2020, pp. 383-88. Gale Health and Wellness, Accessed 29 Nov. 2020.

Biomedical Research

Placebos and The Placebo Effect in Clinical Trials


Something as simple as giving a sugar pill or a saline injection has proven to have beneficial effects for a patient. While the treatment itself has no therapeutic value, the patient’s belief that they are being medically treated, or their trust in the physician, can improve symptoms. That improvement of symptoms is called the placebo effect. Placebos come in various forms, and while an ethical controversy is attached to their use, it cannot be ignored that they play an important part in modern clinical trials and may play a part in future treatments.

Placebos in Clinical Trials

Currently, a drug must outperform a placebo in a clinical investigation and have “substantial evidence of effectiveness” to be approved by the FDA (Katz). However, treatments were not always held to this standard. In the past, placebos were not used in clinical trials or practice, but this changed after it became suspected that some cases of improved symptoms were due not to an effective drug or treatment but to psychological factors, later identified as the placebo effect. Thus, the placebo effect began to be taken into account during clinical trials for new drugs and treatments. Having control groups that receive placebos is critical in determining whether results are due to the treatment’s effectiveness or to the placebo effect.

Placebos ensure that the results obtained and the symptoms reported by participants are due to the drug, not to demand characteristics. Simply knowing they have received a drug may lead subjects to report relief from symptoms even when the drug being tested had no effect. As a result, while the experimental group receives the drug, a control group is given a placebo that looks identical but is sugar or water based; this ensures that any difference in results is due to the drug, improving the validity of the study. Furthermore, to avoid researcher bias, most experiments use double-blind trials, in which both researchers and participants are unaware of which group receives the placebo. This is optimal, as both the patients’ reports of symptoms and the researchers’ analysis are uninfluenced by knowledge of group assignment, improving the reliability and validity of the study (“Placebo”).
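The allocation step of such a trial can be sketched in a few lines of code. This is a hypothetical illustration, not any specific trial’s protocol: participants are randomly split into two arms recorded only under neutral codes, and the key linking codes to drug or placebo is held separately (for example, by an independent monitor) until the trial ends, so neither researchers nor participants know who received what.

```python
import random

def assign_double_blind(participant_ids, seed=None):
    """Randomly split participants into two coded arms, 'A' and 'B'.

    The mapping of arm codes to drug/placebo is returned as a separate
    key, meant to be held by a third party until unblinding."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)                      # random order, so assignment is unbiased
    half = len(ids) // 2
    assignments = {pid: ("A" if i < half else "B") for i, pid in enumerate(ids)}

    # Randomly decide which coded arm is the real drug; researchers only
    # ever see the codes 'A' and 'B', never this key.
    arms = ["drug", "placebo"]
    rng.shuffle(arms)
    key = {"A": arms[0], "B": arms[1]}
    return assignments, key

assignments, key = assign_double_blind(range(100), seed=42)
# 50 participants end up in each coded arm; the key stays sealed until analysis.
```

The design choice here mirrors the text: because both the data collectors and the subjects work only with the codes, neither side’s expectations can leak into the reported symptoms or their analysis.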

However, placebos are not limited to drugs or medication; placebo surgery has shown increasing success. The simple act of administering anesthesia or making an incision, without any further operation, has proved to play a role in determining the efficacy of procedures and surgeries. For instance, percutaneous coronary intervention (PCI) is performed to treat angina, chest pain caused by reduced blood and oxygen reaching the heart, often by placing a stent to widen the arteries. The 2018 ORBITA study questioned the effectiveness of the stent itself. In the study, participants with stable angina were randomly assigned to groups that received either PCI or a placebo procedure (in which no stent was placed). After six weeks, participants’ hearts were put under stress through rigorous exercise to test for a placebo effect. The study found that the exercise endpoint times of participants who had received PCI were no different from those of participants who received the placebo procedure, suggesting that the improvement of symptoms and reported success of PCI may be at least partially attributable to the placebo effect (Al-Lamee).

In the study, three participants in the placebo group experienced major bleeding, and other complications occurred (Al-Lamee). This brings up controversies over placebo surgery and the use of placebos themselves in studies and practice.

Ethical Evaluation

The controversy around placebos in research centers on the ethics of the practice itself. Participants in clinical trials who receive placebos act as controls, and while they may experience the placebo effect, they are still denied a drug that may have a higher success rate. In addition, critics argue that clinical trials involving placebo surgery result in unnecessary surgeries that run the same risks as regular procedures, as an incision is still made and anesthesia may still be used (Ford-Martin). In the ORBITA trial, for instance, the three patients in the placebo group who had major bleeding faced those risks while receiving only the placebo procedure, and so could benefit only from the placebo effect rather than from a stent (Al-Lamee). Thus, critics argue that the use of a placebo created unnecessary risk and pain for almost no benefit.

For these reasons, there are guidelines for using placebos in clinical practice, and participant consent is a major focus: failure to obtain consent undermines the trust in the physician-patient relationship, affecting all of a patient’s future treatments. When consent is obtained, however, a placebo may help relieve symptoms, at least temporarily, in situations where there is no well-established treatment. The placebo may even be effective when the patient knows that it will be used but does not know when it was given or what exactly the placebo treatment looked like.

For instance, in a study of open-label placebo treatment for chronic low back pain, participants were told they were receiving a placebo medication and made aware of its lack of active ingredients. One group then continued the usual treatment for chronic low back pain, while another group took the placebo medication in addition to the usual treatment. Participants reported their pain intensity on a scale of 0 to 10 and rated their difficulty completing daily activities. At the end of the study, participants who had received the placebo reported a 30% reduction in usual pain levels despite being aware of the placebo and its lack of active ingredients (Carvalho). While it may not hold for every case or condition, the placebo effect can still work even when patients are aware of it.


For all the controversy and debate over its effectiveness, the placebo has been shown to have a significant impact in treating patients for various conditions. A systematic review of placebo treatments for migraine prophylaxis found that 58% of participants responded positively to sham surgery and 22% responded positively to oral placebo medicine. Those who responded positively to the placebo treatments experienced a reduction in migraine frequency of at least 50% (Meissner).

The use of placebos has helped determine the efficacy of medications and procedures during clinical trials. The positive effects shown by placebos provide hope that the phenomenon known as the placebo effect can be developed into a viable form of treatment in the future. Despite our limited understanding of the phenomenon and the constant debate on its ethics, the effectiveness of placebos is difficult to deny, and their use in clinical trials ensures that drugs, procedures, and treatments are fully understood before being introduced to the public.

Michelle Li, Youth Medical Journal 2020


Al-Lamee, Rasha et al. “Percutaneous coronary intervention in stable angina (ORBITA): a double-blind, randomised controlled trial.” Lancet (London, England) vol. 391,10115 (2018): 31-40. DOI:10.1016/S0140-6736(17)32714-9

Carvalho, Cláudia et al. “Open-label placebo treatment in chronic low back pain: a randomized controlled trial.” PAIN vol. 157,12 (2016): 2766-2772. DOI: 10.1097/j.pain.0000000000000700

Ford-Martin, Paula, et al. “Placebo Effect.” The Gale Encyclopedia of Alternative Medicine, edited by Deirdre S. Hiam, 5th ed., vol. 4, Gale, 2020, pp. 2101-03. Gale Health and Wellness, Accessed 31 Oct. 2020.

Katz, Russell. “FDA: evidentiary standards for drug development and approval.” NeuroRx : the journal of the American Society for Experimental NeuroTherapeutics vol. 1,3 (2004): 307-16. DOI:10.1602/neurorx.1.3.307

Meissner, Karin et al. “Differential effectiveness of placebo treatments: a systematic review of migraine prophylaxis.” JAMA internal medicine vol. 173,21 (2013): 1941-51. DOI:10.1001/jamainternmed.2013.10391

“Placebo.” The Gale Encyclopedia of Science, edited by K. Lee Lerner and Brenda Wilmoth Lerner, 5th ed., Gale, 2014. Gale in Context: Science, Accessed 31 Oct. 2020.

“Placebo Effect.” The Gale Encyclopedia of Psychology, edited by Jacqueline L. Longe, 3rd ed., vol. 2, Gale, 2016, p. 895. Gale in Context: Science, Accessed 31 Oct. 2020.

“Use of Placebo in Clinical Practice.” American Medical Association, Accessed 31 Oct. 2020.

Health and Disease

Obsessive-Compulsive Disorder (OCD): More Than Being Obsessively Neat and Tidy


Obsessive-Compulsive Disorder (OCD) is the fourth most common psychiatric illness (Fitzgerald). According to the Anxiety and Depression Association of America, 1 in 40 adults and 1 in 100 children in the U.S. are affected by OCD. OCD is a two-part mental disorder: an obsession over certain thoughts leads to the repetition of certain actions, in a continuous cycle that impairs daily life. The process begins with obsessive thoughts (obsessions) that cause anxiety and lead the individual to repeat a behavior (compulsions). This could look like visualizing germs contaminating one’s hands, leading to excessive hand washing. Following through with compulsions grants only temporary relief from anxiety before the individual encounters another OCD trigger; not following through with a compulsion, however, can result in anxiety and panic attacks. Both the obsessions and the compulsions are involuntary.

Types of OCD

OCD has many faces, and while every case is different, a person’s OCD commonly falls into one of these five categories.

  1. Checking

Checking is when someone with OCD “makes sure” of something: whether they left the stove on, turned off the bathroom lights, locked the front door, etc. It is accompanied by fears of a dreadful event, such as the person’s home burning down or the death of someone close to them, or even of themselves (“Types of OCD”). The anxiety-disorder side of OCD fuels this fear: the individual must check that they have not forgotten their wallet; otherwise, their two-year-old niece will be diagnosed with an incurable disease. These thought processes and the resulting compulsions—in this case, checking for a wallet—commonly impede the person’s daily life. Someone may be unable to leave their house until they check (and then check again), causing delays in their day that they cannot do anything about.

  2. Contamination / Mental Contamination

The obsessions of someone with contamination-related OCD are tied to a fear of harm to themselves or a loved one, and are also associated with a fear of germs and dirt. People with contamination OCD often avoid objects, places, or other people for fear of contamination through germs, dirt, etc. They may avoid public spaces, doorknobs, or shaking hands, among other things. Compulsions may include washing their hands until they’re raw, brushing their teeth, showering, or laundering clothes immediately after returning home. They follow through with these compulsions to ensure that they don’t become ill or cause others to become ill (“Types of OCD”).

In addition, there is a mental side to contamination-related OCD. It is similar to physical contamination, except that people with this subtype perceive the contamination as happening internally, inside their body. They also feel the urge to clean out the contaminants, which in this case are negative thoughts or things they’ve heard rather than the usual germs or debris. As with contamination OCD, they do this by showering and washing (“Types of OCD”). The key difference between the two subtypes is the source: the germs and dirt of contamination OCD come from a physical object, whereas the “germs and dirt” of mental contamination OCD, the negative things the person has heard, originate from another human.

  3. Hoarding

Hoarding is not unique to OCD; it can be a mental disorder on its own or a symptom of another mental illness such as OCD or OCPD (“Hoarding: The Basics”). Hoarding is when someone is unwilling to discard certain possessions and instead feels the need to save them, resulting in an excessive accumulation of clutter that impairs daily life. The need to save possessions can arise for a number of reasons and, in some cases, is linked to OCD and anxiety. For instance, some people with hoarding OCD believe that items that touch the floor are contaminated; therefore, no one should touch these items lest they also become contaminated (“OCD Symptoms: OCD-Related Hoarding”). These obsessive thoughts are similar to those of someone with contamination-related OCD. This thought process renders the person physically and mentally unable to dispose of the item, leading to excessive accumulation as the cycle continues.

  4. Rumination and Intrusive Thoughts

Both rumination and intrusive thoughts in OCD revolve around certain thoughts in an individual’s head. In the context of OCD, rumination is when an individual spends an excessive amount of time dwelling on a question or thought, often a religious or philosophical topic such as life and death. Because they never arrive at a conclusion that satisfies them, they are left to ponder for excessive amounts of time (“Types of OCD”).

For someone with OCD, intrusive thoughts are disturbing thoughts that reappear over and over in the individual’s mind. These thoughts are involuntary, and someone with OCD may begin to believe them. Anxiety stems from the fear that they may act on the repugnant thoughts or impulses, which may relate to violence, sexual harm, relationships, etc. An example of an intrusive thought is obsessing over the idea of harming other people with kitchen knives or other sharp objects; the corresponding compulsion would be locking away those objects so as not to harm anyone (“Types of OCD”). The individual may question why they are having these thoughts or be consumed by the belief that they have already performed the violent action (even though they haven’t).

  5. Symmetry

People with symmetry-related OCD feel uncomfortable when objects are not aligned symmetrically or when an action isn’t performed symmetrically. They become fixated on the positions of objects, such as books or clothes, and cannot move on until those objects are arranged in the “right” way (“Types of OCD”). The same applies to certain actions: the person must perform an action on both sides of the body to maintain balance and symmetry. For example, if someone with symmetry-related OCD scratches the left side of their face, they must scratch the right side to avoid a feeling of discomfort (Fitzgerald). This can be tied to a fear of harm to themselves or someone close to them, but it can also simply be a way to avoid the unease they experience before satisfying a compulsion.


Symptoms of OCD commonly first appear between pre-adolescence and early adulthood. The most common age range for symptom onset is 10 to 24, but OCD can start at any age (Fitzgerald). The causes of OCD are not fully understood, but they may be linked to biological, genetic, and environmental factors.

Cases of OCD have been linked to family in many ways. For instance, immediate family members of a person with OCD have a 25% chance of also developing the disorder (Ford-Martin). This means that the parents, siblings, and children of someone with OCD have increased chances of also having OCD, hinting that the disorder is somehow connected to family lines. This pattern may be a result of learning from and watching the behaviors of a family member with OCD (“Obsessive-compulsive disorder (OCD)” [NCH Healthcare System]). Part of the answer may also lie in genetics: some twin studies have revealed that identical twins (twins that come from the same egg and therefore share all of their genes) are more likely to both develop OCD than fraternal twins. Genetics is not the only determinant, though, as the rate at which both identical twins exhibit OCD is not 100% (Fitzgerald).

Some researchers suggest that abnormal brain activity may also be responsible for the obsessions and compulsions of OCD. They have found that the orbital cortex of the brain is hyperactive in people with OCD, which may be responsible for the feelings of “alarm” that push people toward fulfilling compulsions (Ford-Martin). Abnormally low levels of serotonin in the brain may also play a role, as serotonin aids communication between the frontal lobe and other parts of the brain that are connected to OCD compulsions (Ford-Martin).


OCD can only be diagnosed by a mental health professional such as a psychiatrist or psychologist. Diagnosis is usually done through an interview-like process in which a series of questions is asked to determine whether the core aspects of OCD are present and therefore warrant a diagnosis. A professional may ask, for example, whether the patient frequently cleans, checks things a lot, or is bothered by thoughts they can’t rid themselves of. Professionals also take into account the effect of the symptoms on the patient’s life: whether the symptoms are time-consuming (taking up more than an hour each day), cause distress, or impede function in daily life (“Diagnosing OCD”). One commonly used assessment is the Yale-Brown Obsessive Compulsive Scale (Y-BOCS), which has five questions each for obsessions and compulsions and assesses the factors mentioned above (Fitzgerald).


OCD is usually treated with cognitive-behavioral therapy (CBT) and medications.

Cognitive-behavioral therapy is a type of psychotherapy that has proven effective in treating certain mental illnesses. More specifically, the technique of exposure and response prevention (ERP) is the most effective in treating OCD; it helps reduce symptoms in 75%-80% of OCD patients (Fitzgerald). In ERP, the patient and therapist create a list of the patient’s obsessions and compulsions, ordered from mild to extreme; these tasks look different for every patient based on their OCD. The idea is to expose the patient to OCD triggers without having them give in to compulsions. Patients start at a mild level, where they are able to tolerate not giving in to a compulsion, and with each session they move up the list to more difficult, anxiety-inducing tasks. The tasks are repeated, and with each exposure the anxiety associated with an obsession is reduced until the patient finds it manageable. For example, a patient with contamination OCD might be tasked with touching contaminated objects, with increasing time between when they make contact and when they’re allowed to wash their hands (give in to their compulsion).

OCD can also be treated with medications that increase levels of serotonin (selective serotonin reuptake inhibitors, or SSRIs), including fluoxetine, fluvoxamine, paroxetine, sertraline, citalopram, and escitalopram. Clomipramine and venlafaxine are antidepressants that may also be prescribed for OCD patients, and risperidone and haloperidol are antipsychotics that are an option for severe cases of OCD (Ford-Martin).

The last resort for OCD patients who don’t respond to CBT or medication is brain surgery, an operation that removes a part of the brain called the cingulate cortex (Ford-Martin). The surgery benefits about 30% of the OCD patients who receive it, resulting in lessened symptoms (Fitzgerald).


Obsessive-Compulsive Disorder is a mental disorder that significantly interferes with the daily lives of sufferers through a cycle of obsessions and compulsions; it is much more than the media’s portrayal of OCD as being obsessively neat or tidy. OCD can look different for every patient, and the exact cause of the disorder has not been determined. However, treatment through CBT and medication can help lessen symptoms, and it is important that people with OCD are not misunderstood and can seek the professional help they need.

Michelle Li, Youth Medical Journal 2020


“Diagnosing OCD.” OCD-UK, Accessed 28 Sept. 2020.

Fitzgerald, Jane A., et al. “Obsessive-compulsive and Related Disorders.” The Gale Encyclopedia of Mental Health, edited by Brigham Narins, 4th ed., vol. 3, Gale, 2019, pp. 1149-56. Gale Health and Wellness, Accessed 28 Sept. 2020.

Ford-Martin, Paula, and Lisa C. DeShantz-Cook. “Obsessive–Compulsive Disorder.” The Gale Encyclopedia of Alternative Medicine, edited by Deirdre S. Hiam, 5th ed., vol. 4, Gale, 2020, pp. 1942-45. Gale Health and Wellness, Accessed 28 Sept. 2020.

“Hoarding: The Basics.” Anxiety and Depression Association of America, ADAA, Accessed 23 Sept. 2020.

“Obsessive Compulsive Disorder.” Georgia Behavioral Health Professionals, Accessed 28 Sept. 2020.

“Obsessive-compulsive Disorder.” National Alliance on Mental Illness, Accessed 28 Sept. 2020.

“Obsessive-Compulsive Disorder.” National Institute of Mental Health, Accessed 28 Sept. 2020.

“Obsessive-Compulsive Disorder (OCD).” Anxiety and Depression Association of America, ADAA, Accessed 24 Sept. 2020.

“Obsessive-compulsive disorder (OCD).” NCH Healthcare System, Accessed 23 Sept. 2020.

“OCD Symptoms: OCD-Related Hoarding.” Accessed 28 Sept. 2020.

“Types of OCD.” OCD-UK, Accessed 27 Sept. 2020.

“What Is Obsessive-Compulsive Disorder?” American Psychiatric Association, Accessed 28 Sept. 2020.