Biomedical Research

Adeno-Associated Virus (AAV): A Virus That Benefits Lives


Upon hearing the term “virus,” the common train of thought is to picture the destructive manipulation of cells through a rapid replication process. Such dismantling of the body’s immune system can lead to pathogenesis, since diseases are able to thrive in the body’s weakened state. In short, this perception of a virus carries a negative connotation: the virus harmfully invades an organism to commandeer the metabolic equipment that resides in its cells. These viruses inject their DNA or RNA into the cell and commence the production of proteins that assemble more copies of the virus. Such significant shifts in the cell’s activity disrupt homeostasis and can permanently damage the cell. The growing virus hijacks more and more cells to amass more and more copies, destroying tissues and eventually organs. Different viruses target different areas; the notorious influenza virus, for instance, targets the nose, throat, and lungs. Cells that line the lung airways are susceptible to viral attack, and the body pays a toll in fending the virus off and destroying its remains. This is where the common symptoms of the flu arise. Overall, viruses can generally be characterized as aggressive, damaging particles that can leave the body weak and devastated.

However, a special type of virus exists—one that does not involve the uncontrollable hijacking of defenseless cells.

Adeno-Associated Virus Discovery

The Adeno-Associated Virus (AAV) was initially discovered around 50 years ago. Scientists Bob Atchison, M. David Hoggan, and Wallace Rowe uncovered the new virus particles while researching adenoviruses, very common viruses that elicit the symptoms of an ordinary cold. The new virus was classified as a member of the Parvoviridae family since it consisted of single-stranded DNA. As basic background research got underway, it was clear that the virus was unique; its behavior differed drastically from that of the adenoviruses it was found alongside. AAV did not replicate within a cell culture—the main function associated with a virus! Unlike many other viruses, AAV did not execute a replication spree that manipulated and destroyed cells; it simply did not replicate. It was eventually found that AAV could execute standard virus replication only when introduced into a cell together with an adenovirus, or “helper virus.” The researchers concluded that viral pathogenesis (virus leading to disease) was not possible with this virus on its own, due to its inability to replicate independently.

Around 15 years after the initial discovery, further research elucidated the details of the virus’s basic genetic information. Namely, it was confirmed that the virus can manufacture up to 100,000 particles per infected cell when paired with a helper virus. This set the path for vector research.

AAV as a Vector

With such distinct features, it was clear that this virus could be utilized in a therapeutic manner. Yet another striking aspect of the virus is the size of its genome: it is extremely small, consisting of only about three genes. This feature, along with the previously mentioned lack of pathogenesis, controllable viral replication, and generally minimized risk, led to AAV’s use as a recombinant vector. In other words, scientists set out to use AAV to deliver non-native genetic information as a therapeutic method in patients with genetic diseases. Such a small, simple genome makes the virus comparatively easy to edit.

Modern Gene Therapy with AAV

In modern AAV-based gene therapy, scientists have developed an efficient way of editing the virus vector to their liking. Simply put, the single strand of DNA within the virus is cut apart. The middle of the strand is removed, but the ends are left intact, as they are necessary for gene transfer later on. New foreign DNA containing a therapeutic gene is placed to fill in the strand and spliced to the two ends to join the pieces together.
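The cut-and-splice step described above can be pictured with a toy example that treats the genome as a text string. The sequences here are invented placeholders, not real AAV sequences, and the name “ITR” (the retained end sequences, known as inverted terminal repeats) is standard terminology introduced here as background rather than something stated above.

```python
# Toy illustration of the editing step: keep the end sequences,
# remove the middle viral genes, splice in a therapeutic gene.
# All sequences below are made up for illustration.

ITR = "TTGGCC"                # hypothetical end sequence (kept)
viral_genes = "AAACCCGGGTTT"  # hypothetical middle section (removed)
aav_genome = ITR + viral_genes + ITR

therapeutic_gene = "ATGCGT"   # hypothetical replacement gene

# Cut out the middle, keep both ends, and join the pieces together.
left_end = aav_genome[:len(ITR)]
right_end = aav_genome[-len(ITR):]
vector = left_end + therapeutic_gene + right_end

print(vector)  # TTGGCCATGCGTTTGGCC
```

The point of the sketch is simply that the edited vector keeps both original ends while none of the removed viral genes survive.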

Following this is the actual vector production process. As mentioned previously, AAV requires the presence of another virus in order to replicate, and this helper virus plays a role in gene therapy as well. The edited AAV genome and the necessary helper-virus genes are each inserted into a plasmid: a small, circular piece of DNA, grown in bacteria, that is capable of replication and can be used to easily affect genes and gene expression. Once both plasmids are introduced into a producer cell, many AAV particles are produced within it; these particles can then deliver the therapeutic gene, which creates the protein needed to potentially resolve a disease or problem within the given tissue.


AAV therapy has already proven capable of relieving the symptoms of several diseases that have long troubled patients. AAV is also behind some of the very few Food and Drug Administration (FDA) approved gene therapies, which means that the benefits substantially outweigh the risks.

An example of such a therapy is “Luxturna,” an AAV-based treatment for an inherited type of retinal disease that can, over time, result in blindness. The disease involves the RPE65 gene, which normally contains the instructions to produce the RPE65 protein, a protein necessary for vision. Mutations in this gene cause a lack of RPE65 protein, which impairs the function of the retinal pigment epithelium (RPE) cells that contain it. These cells support the photoreceptors responsible for detecting light, and when a large number of RPE cells become dysfunctional, loss of vision can follow.

AAV is used as a vector to deliver unmutated RPE65 genes to the RPE cells, restoring production of the RPE65 protein that was initially lacking. The vector is delivered via an injection into the eye. Once it reaches the targeted RPE cells, their function can recover, and vision can improve.

Looking Forward

AAV has certainly been a breakthrough, serving as an effective therapy for problems that were considered unsolvable only a few years ago. Looking ahead, the next steps are clear: apply AAV vectors to more genetic diseases and see those therapies through FDA approval. With further research into AAV vectors, many diseases that currently have only temporary fixes may be treated with more effective solutions. AAV is truly a unique virus, and its applications can certainly be expanded in the future.

Brian Caballo, Youth Medical Journal 2020


Biomedical Research

The Ketogenic Diet and Epilepsy


The ketogenic diet, commonly referred to as ‘keto,’ is an increasingly popular trend in the dieting world. While there are a variety of related low-carbohydrate diets, such as the Atkins and South Beach diets, the principal idea behind keto is that the body uses fats and ketones for energy, rather than carbohydrates and glucose.

However, keto isn’t just a practice that started in the past couple of years. With the first records dating back to the time of Hippocrates, ketogenic diets and fasting have been used for medicinal purposes beyond weight loss, helping to treat or reduce the symptoms of conditions ranging from acne and polycystic ovary syndrome to genetic disorders.

Ketogenic diets have been studied extensively in people with epilepsy, and they have proven quite effective. They are often recommended for those who either fail to respond to anti-seizure medications or cannot tolerate their side effects.

How Does the Ketogenic Diet Work?

In standard diets, carbohydrates burn up quickly, yielding large bursts of energy. The human body’s main energy store, however, is fat [1]. Proteins and fats burn slowly, resulting in a constant, stable release of energy. This is beneficial because you do not experience the peaks and crashes in energy that come with carbohydrates, whose glucose is released into the body almost instantaneously rather than stored [1,2]. This is also how ketosis regulates blood glucose levels without complications. Including large amounts of carbohydrates in your diet raises your blood sugar and, as a result, triggers insulin production. Because high glucose levels are perceived as toxic by the body, the pancreas releases insulin as negative feedback to bring them back down. When the body starts struggling to keep up, excess glucose is converted into fat, which insulin stores in cells. The body is capable of regulating blood sugar on its own when it is not flooded with so much sugar. Ketogenic diets avoid these problems.

Fat is also satiating: consuming it keeps you full and, combined with low carbohydrate intake, allows the body to produce ketones [2,3]. In glucose 1 transporter (GLUT1) deficiency and pyruvate dehydrogenase (PDH) deficiency, ketones provide an alternative energy source to glucose [3].

As with any alteration to the body, the ketogenic diet has side effects while one adjusts to the low-carb lifestyle. Though for many the intended goal is weight loss, in cases where the ketogenic diet is prescribed, side effects can include flu-like weariness (known as the “keto flu”), constipation or diarrhea, and high cholesterol.

The Ketogenic Diet for Epilepsy

Epilepsy, a condition that is most often lifelong, affects the brain and causes frequent seizures. Seizures that are resistant to standard medications remain a major clinical problem.

In patients with severe epilepsy, marked seizure control was observed after 2-3 days of fasting from food (and, in some cases, even water). Such observations go back millennia.

We do not completely understand how the ketogenic diet works, but there is some evidence that the brain needs energy from glucose to create a seizure. The ketogenic diet makes the body think that it is in a state of starvation or fasting [3].

Despite years of insight and trials, doctors do not clinically prescribe keto upon an initial diagnosis of epilepsy; it is usually reserved for drug-resistant epilepsy. This is because the balance of the diet must be carefully worked out for each individual, and because vitamins and other special supplements are needed. It is also not advisable as a treatment option for those with certain metabolic or other neurological disorders.

The ketogenic diet was initially developed for children, and in the UK it is still offered medically to children between 3 months and 16 years of age suffering from any type of epilepsy [2,4,5]. During infancy, the brain is much more efficient at extracting and utilizing ketone bodies from the blood, owing to the higher levels of ketone-metabolizing enzymes and monocarboxylic acid transporters produced during this period [6-8].

In a review by Martin-McGill et al., four studies with 385 child participants reported that more children achieved seizure freedom with a ketogenic diet than with a control treatment. Two small studies in adults (141 participants) reported that no adults achieved seizure freedom, though these analyses were underpowered [9].

There is limited evidence for how effective or tolerable the ketogenic diet is for adults, but it has proven effective for pediatric epilepsy. Some studies suggest that keto can also work on the adult brain, which rapidly increases its levels of monocarboxylic acid transporters and ketone-metabolizing enzymes during periods of stress such as ischemia, trauma, or low glucose, as in the case of keto [8,10]. A review of 16 studies in adults with uncontrolled epilepsy found that ketogenic diets were well tolerated long term and typically resulted in significantly fewer seizures or, in a minority of cases, complete freedom from seizures [8].

A study by Rho et al. suggests that adults may produce ketones at different levels in response to a ketogenic diet, possibly explaining the inconsistencies in findings on the effect of keto on the adult brain.  In addition, they also demonstrated variations between the production of ketones in rats and humans, which is a crucial consideration when running animal trials before implementing techniques and diets clinically [11].


The ketogenic diet is not just a trend for health gurus seeking to lose weight: it lessens the burden on, and potentially frees, many, if not millions, of those suffering from drug-resistant epileptic seizures. Though much remains unknown about epilepsy itself, and the ketogenic diet varies heavily in effect among people of different ages, techniques like fasting and keto have been observed to be helpful for centuries. Though not as commonly prescribed to adults, it is an extremely effective technique in children, potentially improving the trajectory and progress of their condition.

Nara Ito, Youth Medical Journal 2020


[1] Wasserman, D. H. (2009). Four grams of glucose. American Journal of Physiology-Endocrinology and Metabolism, 296(1), E11-E21.

[2] Great Ormond Street Hospital for Children. (2011). Ketogenic diet.

[3] Laffel, L. (1999). Ketone bodies: a review of physiology, pathophysiology and application of monitoring to diabetes. Diabetes/metabolism research and reviews, 15(6), 412-426.

[4] Lambrechts, D. A., de Kinderen, R. J., Vles, J. S., de Louw, A. J., Aldenkamp, A. P., & Majoie, H. J. (2017). A randomized controlled trial of the ketogenic diet in refractory childhood epilepsy. Acta neurologica Scandinavica, 135(2), 231–239.

[5] Sharma, S., Goel, S., Jain, P., Agarwala, A., & Aneja, S. (2016). Evaluation of a simplified modified Atkins diet for use by parents with low levels of literacy in children with refractory epilepsy: A randomized controlled trial. Epilepsy research, 127, 152–159.

[6] Sokoloff L. (1973). Metabolism of ketone bodies by the brain. Annual review of medicine, 24, 271–280.

[7] Bilger, A., & Nehlig, A. (1992). Quantitative histochemical changes in enzymes involved in energy metabolism in the rat brain during postnatal development. II. Glucose-6-phosphate dehydrogenase and beta-hydroxybutyrate dehydrogenase. International journal of developmental neuroscience : the official journal of the International Society for Developmental Neuroscience, 10(2), 143–152.

[8] McNally, M. A., & Hartman, A. L. (2012). Ketone bodies in epilepsy. Journal of neurochemistry, 121(1), 28–35.

[9] Martin‐McGill, K. J., Jackson, C. F., Bresnahan, R., Levy, R. G., & Cooper, P. N. (2018). Ketogenic diets for drug‐resistant epilepsy. Cochrane Database of Systematic Reviews, (11). 

[10] Prins M. L. (2008). Cerebral metabolic adaptation and ketone metabolism after brain injury. Journal of cerebral blood flow and metabolism : official journal of the International Society of Cerebral Blood Flow and Metabolism, 28(1), 1–16.

Biomedical Research

Genetic Engineering: Can Diseases Be Eradicated Forever?


Since its advent in 1973, huge strides have been made in genetic engineering. Examples include Arctic apples, mouse-ear cress, and even onions that do not make you cry. Genetically modified food comprises much of what we eat today, with roughly 60-70% of processed food in grocery stores containing genetically modified ingredients. However, some remain concerned about the potential consequences of this relatively new technology, as there are unknown side effects. Yet while we constantly hear disturbing reports about genetically modified food, we rarely hear about one of its most exciting applications: researchers have been looking into altering the genetics of the carriers of contagious diseases. These diseases have plagued the earth for decades, taking millions of lives along the way. Could the power of genetic engineering be used to eliminate them completely?

The Zika virus has spread exponentially across the globe through mosquitoes. The virus causes horrific birth defects when it infects pregnant women and weakens the immune system by damaging nerve cells. There is no treatment available. Another disease that spreads in a similar way is malaria, caused by the Plasmodium parasite. Thousands of sporozoites wait in the salivary glands of a mosquito until it penetrates a human’s skin; they then move to the liver while avoiding the immune system. There they remain in stealth mode for up to a month, eating cells alive and creating thousands of copies of themselves. These diseases threaten nearly half the world’s population and account for millions of deaths. What if you could use genetic engineering to stop the spread?

What is Genetic Engineering?

Genetic engineering, or gene modification, is the process of altering the DNA in the nucleus of an organism. According to the National Human Genome Research Institute, genetic engineering can be done using recombinant DNA (rDNA), that is, DNA derived from two or more distinct species and combined into a single molecule. This technology has been used to create things like safer lithium-ion batteries and crops such as the Sweet Plum. Genetically engineered plants, known as genetically modified organisms (GMOs), may be engineered to be less susceptible to disease or to suit particular environmental requirements. Beyond plants, there are many other genetically modified organisms, for example, mosquitoes modified in the lab.

How Can This Help Stop Diseases?

The Zika virus is spread through the bite of a mosquito. Mosquitoes have been natural carriers of human pathogens for nearly 200 million years. There are trillions of them, and a single female will lay up to 300 eggs at a time; they are virtually impossible to eliminate. To stop the spread at its source, the mosquito population itself must be re-engineered. This can be done using a new technology, CRISPR. CRISPR is an easy but efficient method of genome editing: it lets researchers quickly alter DNA sequences and change the role of genes. This is achieved by making a cut or break in the DNA and tricking the cell’s own DNA repair mechanisms into making the desired changes. Through this kind of genetic modification, scientists have successfully developed a strain of mosquitoes that is resistant to the malaria parasite, by inserting a new antibody gene that directly targets Plasmodium. The same approach could be applied to the Zika virus or to other diseases that rely on such a host, like Lyme disease, sleeping sickness, and West Nile virus. These mosquitoes would never even carry the disease, and millions of lives could be saved. On top of this, if the new gene is made dominant in the next generation, it will overpower the old gene. If enough modified mosquitoes were to mate with natural mosquitoes, the gene would spread very quickly: around 99.5 percent of the engineered mosquitoes’ offspring would inherit the protective gene and thus would not carry the disease.
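The spread described above can be sketched with a toy model of a gene drive. The assumptions here (random mating, non-overlapping generations, and a flat 99.5% chance that any offspring with at least one drive-carrying parent inherits the drive, versus 50% under ordinary Mendelian inheritance) are illustrative simplifications, not parameters taken from the studies cited.

```python
# Toy deterministic model of a gene drive spreading through a population.
# Simplifying assumptions (illustrative only): random mating,
# non-overlapping generations, and a 99.5% transmission rate from
# any carrier parent to its offspring.

def next_generation(p, transmission=0.995):
    """Fraction of drive carriers in the next generation."""
    # Chance a random offspring has at least one carrier parent,
    # times the chance the drive is actually passed on.
    return (1 - (1 - p) ** 2) * transmission

p = 0.01  # release engineered mosquitoes into 1% of the population
for generation in range(12):
    p = next_generation(p)

print(round(p, 3))  # 0.995 -- saturates near the 99.5% figure above
```

Even starting from a 1% release, the carrier fraction roughly doubles each early generation and saturates near the transmission rate within about a dozen generations, which is why a gene drive spreads so much faster than an ordinary gene.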


There are still many concerns, considering that CRISPR is a new technology. Critics of genetic engineering are vocal about the potential consequences such a move may have, such as environmental harm. Additionally, after the gene is edited and released, there is simply no way to change it back; those consequences are there to stay if something goes wrong. There are simply many things that we do not yet know. Perhaps the worst-case scenario is that it does not work, or that the parasite adapts in a harmful way. Finally, there are ethical concerns, with some stating that genetic engineering is “playing with nature.” All of these factors make it uncertain whether this technology will be used.

Harshal Chinthala, Youth Medical Journal 2020


“What Is Genetic Engineering?” Facts, The Public Engagement Team at the Wellcome Genome Campus, 17 Feb. 2017, 

Bohanec, Dr. Borut. “10 Successful Examples of Genetic Modification – Metina Lista %.” Metina Lista, 4 Jan. 2017, 

Plumer, Brad. “How Widespread Are GM Foods?” Vox, Vox, 3 Nov. 2014, 

Vidyasagar, Aparna. “What Is CRISPR?” LiveScience, Purch, 21 Apr. 2018, 

Begley, Sharon, et al. “Biologists: Let’s Sic ‘Gene Drive’ on Zika-Carrying Mosquitoes.” STAT, 8 Aug. 2016, 

“’Gene Drive’ Mosquitoes Engineered to Fight Malaria.” Nature News, Nature Publishing Group, 

Biomedical Research

The Lack of Racial Diversity in Dermatological Images


Ellen Weiss noticed uneven, desiccated patches on her toddler’s dark brown skin and wondered if the patches signified eczema or a more serious skin condition. “I Googled it and noticed immediately the pictures were all of white skin. I Googled other conditions and it was the same. No matter what I searched, there were almost no images of dark skin,” she recounted (McFarling).

The lack of medical reference photos depicting dermatological conditions on dark skin is an issue that has not received the awareness it needs. Today, many Brown and Black individuals encounter problems identifying a condition on their skin, and even when approaching a doctor, they often do not receive the help that they need (Prescod). The inherent problem concerning medical images is the inequality of reference material across the skin tone spectrum, material necessary for the care and treatment of these patients’ skin health.

Medical images are crucial in training future physicians, as they display the anatomical and pathological aspects of the body. With such reference material, physicians are able to recognize the diseases, disorders, and injuries a patient may have. For instance, assessing the severity of a burn depends on a physician’s visual inspection (Schaefer): by looking at certain characteristics of a burn, doctors classify it as first, second, or third degree. Burns are just one example of a condition diagnosed primarily through visual inspection alone, and as this process enters the realm of dermatological conditions, it encounters complications that obstruct an easy, and sometimes correct, diagnosis for individuals with dark skin (Singh).


Before investigating the hardships of diagnosing dermatological conditions on dark skin, it is imperative to first understand the statistics of medical images with respect to the racial diversity they encompass.

Rhiannon Parker, a researcher at the University of Wollongong in Australia, became aware of the lack of diversity in medical images. Parker and her colleagues sought to quantify exactly how far that lack of diversity extends. Inspecting more than 6,000 medical images drawn from 17 different anatomy books published between 2008 and 2013, they first focused on gender disproportionality, finding that only 36 percent of images with an identifiable sex were female. They then analyzed the racial make-up of these images: of the images displaying females, 86 percent were White (Eveleth).

A more recent study, conducted in April 2020 and led by Jules Lipoff, an assistant professor of clinical dermatology at the University of Pennsylvania, analyzed the skin type diversity in general medicine textbooks. Lipoff found that only 4.5 percent of images with an identifiable skin tone displayed dark skin (McFarling). The disproportionality within medical textbooks is evidently overwhelming.

Dermatological Conditions

Dermatology is a branch of medicine that focuses on the structure, etiology, and treatment of conditions of the skin. Dermatologists undergo years of training to develop, in particular, the ability to diagnose a skin condition by visual inspection. However, dermatologist Jenna Lester of the University of California, San Francisco describes the difficulties she encountered when preparing to treat her Black patients for a particular rash commonly associated with Covid-19. The popularly known “Covid-19 rash” is an emerging symptom of coronavirus infection that scientists are still researching. When Lester heard of this rash, she searched the medical literature to see what it would look like on dark skin. Despite her ongoing searches, Lester did not find a single picture displaying the rash on dark skin (Lester).

She mentioned, “I was frustrated because we know Covid-19 is disproportionately impacting communities of color. I felt like I was seeing a disparity being built right before my eyes” (McFarling).

Considering that certain dermatological conditions can look very different across the skin tone spectrum, the lack of racial diversity poses disadvantages for individuals with dark skin (Figure 1). As physicians continue to train upon a predominantly white set of images, those with darker skin face more errors during inspection and therefore higher rates of misdiagnosis (Rabin). The hardship that many encounter when simply attempting to identify a skin condition is a stark representation of today’s inequality for people of color.

The picture demonstrates how differently the same disease can present on dark versus light skin.

Figure 1: @brownskinmatters (2019, September 10). [Condition pictured: Kawasaki Disease]. Retrieved from

This issue is relatively new and merits a call for social awareness. For instance, Malone Mukwende, a second-year medical student at St. George’s, University of London, strives to combat the inadequate documentation of dermatological conditions on dark skin. He and his colleagues are working together to write a book, “Mind the Gap,” which addresses the various clinical signs of skin disorders on Black and brown skin (“A Medical Student”). At the time of this article, the book has not yet been published, and no official release date has been confirmed. Furthermore, a book by Cyron Mandia, “All About the Skin: A Microscopic Lens to the Integumentary System,” to be published around this December, incorporates dark skin throughout to combat the racial misrepresentation within dermatological images.

As society continues its work to fight the lack of diversity in medical images, a modern and racially inclusive dermatological curriculum awaits. Until then, it is our duty to continue to spread awareness for change.

Cyron Mandia, Youth Medical Journal 2020


A medical student couldn’t find how symptoms look on darker skin. He decided to publish a book about it. Washington Post. 

Eveleth, R. (2019, May 9). Medical Textbooks Overwhelmingly Use Pictures of Young White Men. VICE.

McFarling, U. L. (2020, July 20). Dermatology faces a reckoning: Lack of darker skin in textbooks and journals harms care for patients of color. STAT. 

Prescod, J. (2020, March 4). Is the lack of dark skin in medical books harming us? Gal-Dem. 

Rabin, R. C. (2020, August 31). Dermatology Has a Problem With Skin Color. The New York Times. 

Singh, N. (2020, August 17). Decolonising dermatology: why black and brown skin need better treatment. The Guardian. 

Schaefer, Timothy J. “Burn Evaluation And Management.” StatPearls [Internet]., U.S. National Library of Medicine, 10 Aug. 2020,

Lester, J. C., Jia, J. L., Zhang, L., Okoye, G. A., & Linos, E. (2020). Absence of images of skin of colour in publications of COVID‐19 skin manifestations. British Journal of Dermatology, 183(3), 593–595.

Biomedical Research

Artificial Organs: A Social and Ethical Analysis

The Problem

Artificial organs are a recent form of life-saving technology that rely on methods such as 3-D printing and stem cell implementation, which are becoming more common in the medical community as automation advances. Lately, scientists have been pouring research and resources into the development of these artificial organs, particularly because of the significant imbalance between those in need of organs and those who actually receive them. In 2015, about 121,000 people needed an organ transplant, but only about 31,000 patients received one (Figure 1).

Patients that Received Organs (blue) vs. Patients in Need (grey)

     Figure 1, Organ Procurement and Transplantation Network, 2015

This gap is a huge problem; people with failing organs do not have much time to spare. According to the Health Resources and Services Administration, about 20 people die on average each day because of a lack of viable organs for transplant. As a result, artificial organs have become especially important, as they provide another means for people to obtain the organs they require.

A Societal Perspective

The wait for essential organs is absurd and inhumane. It can take up to five years to get off the waitlist for a transplant, a delay that can cost a human life. Consequently, some find it necessary to resort to unconventional ways of getting what they need. As expressed by Nancy Scheper-Hughes, a professor of anthropology at the University of California, Berkeley, the extensive wait for these organs often motivates people to take the state of their lives into their own hands. They become willing to travel long distances to obtain organs through legal or illegal channels, often to foreign countries, with the most popular destination being India.

With the implementation of biomedically engineered artificial organs, these desperate patients will no longer need to put themselves, and even their families, in jeopardy to live. Patients would be able to receive life-saving organs legally, without the hardship of traveling in their unfortunate condition. With new and improved organs, not only will people have a better quality of life, but they will be able to contribute to society in ways that may not have been possible for them before.

The Ethical Dilemma

While these organs may seem like a perfect solution, some serious ethical consequences arise with increased implementation. A main issue that will accompany the spread of these high-tech organs is their large cost. As shown in Figure 2, the costs of procedures that implant artificial body parts such as hips and knees are already what many would consider high, with even a simple ear tube procedure costing up to $4,500.

The Cost and Annual Revenue of the Artificial Organ Market

Figure 2

Considering that these organs would only be available to those who could pay, allowing them to become widespread could be considered unvirtuous, as those who cannot afford them would be left at a major disadvantage that could decide their quality of life. Moreover, since one cannot control one’s health when on the verge of death, it is arguably immoral to make these organs available only to those who can afford the high expenses.

Potential Solution

After evaluating this complex issue from multiple perspectives, it becomes clear that a solution is needed. Overall, in order to ease organ scarcity while reducing the ethical implications, highly demanded artificial organs should be implemented, but with the complexity, and thus the cost, of their design and software reduced. The limitations of this solution are that extensive research and time are required to maintain the effectiveness of these organs while reducing their complexity, and that the artificial organs produced would be available only to those in need of major organs. Despite these drawbacks, the outlook for biomedically engineered artificial organs is exceptionally bright, and this market has the potential to save a myriad of lives that would otherwise be lost.


Hutchison, Katrina, and Robert Sparrow. 2016. “What Pacemakers Can Teach Us about the Ethics of Maintaining Artificial Organs.” Hastings Center Report 46 (6): 14–24. doi:10.1002/hast.644.

Malchesky, Paul S. 2011. “Organ Replacement, Medical Device Costs, and Medical Tourism: Globalization of the Clinical Application of Artificial Organ Technologies?” Artificial Organs 35 (12): 1139–41. doi:10.1111/j.1525-1594.2011.01396.x.

“Organ Donation Statistics.” Organ Donor, September 30, 2019.

Sanjairaj, Vijayavenkataraman. 2016. “A Perspective on Bioprinting Ethics.” Artificial Organs 40: 1033–38. doi:10.1111/aor.12873.

Scheper-Hughes, Nancy. 2000. “The Global Traffic in Human Organs.” Current Anthropology 41 (2): 191–211. doi:10.2307/3596697.

Vermeulen, Niki, Gillian Haddow, Tirion Seymour, Alan Faulkner-Jones, and Wenmiao (Will) Shu. 2017. “3D Bioprint Me: A Socioethical View of Bioprinting Human Organs and Tissues.” Journal of Medical Ethics 43. doi:10.1136/medethics-2015-103347.

Reem Hassoun, Youth Medical Journal 2020

Biomedical Research

Artificial Intelligence in Medicine

Artificial intelligence in medical practice is the use of computer techniques to perform clinical diagnoses and suggest treatments [1]. It is capable of detecting meaningful relationships in a data set and can be used for diagnosis, treatment, and reaching a particular conclusion. Much as doctors are educated through years of medical schooling and learning from mistakes, artificial intelligence algorithms learn to do the same job: they perform tasks requiring human intelligence, such as pattern and speech recognition, image analysis, and decision making. Building such an algorithm involves feeding the computer system data that is structured with labels the algorithm can recognize; its performance is then analyzed, just as exams analyze a medical student’s performance. Based on the results of this analysis, the algorithm can be modified, fed more data, or rolled out for decision-making [2].

Fig: An AI algorithm learns the basic anatomy of a hand and can recreate where a missing digit should be. This could allow physicians to see the proper place to reconstruct a limb or position a prosthetic.

These performances and results are compared against a physician’s performance to determine the algorithm’s clinical ability and value. In medicine, the input data may be numerical, such as heart rate or blood pressure, or image-based, such as magnetic resonance imaging scans or images of biopsy tissue samples. The algorithm’s output may be a probability or a classification: for the examples above, the probability of an arterial clot given the heart rate and blood pressure data, or the labeling of an imaged tissue sample as cancerous or non-cancerous. Two recent applications illustrate how clinically accurate algorithms can benefit both patient and doctor in diagnosis. The first is an algorithm that researchers at Seoul National University Hospital and College of Medicine developed, called Deep Learning-based Automatic Detection, to analyze chest radiographs and detect abnormal cell growth (cancers). Its results were compared to the detection abilities of multiple physicians, and it was found to outperform the doctors [2].
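The numeric-input-to-probability workflow described above can be illustrated with a toy model. The vital-sign values, labels, threshold behavior, and nearest-centroid approach below are invented purely for demonstration; they are not from the cited studies and are not clinically meaningful.

```python
# Illustrative sketch: numeric inputs (heart rate, systolic blood pressure)
# in, a rough probability of the "at risk" class out. All values invented.
import math

# (heart_rate, systolic_bp) -> 1 if the case was labeled "at risk", else 0
labeled_data = [
    ((72, 118), 0), ((80, 122), 0), ((68, 115), 0),
    ((110, 160), 1), ((118, 155), 1), ((105, 165), 1),
]

def fit_centroids(data):
    """Average the feature vectors of each class (a nearest-centroid model)."""
    sums = {0: [0, 0, 0], 1: [0, 0, 0]}   # [sum_hr, sum_bp, count]
    for (hr, bp), label in data:
        sums[label][0] += hr; sums[label][1] += bp; sums[label][2] += 1
    return {c: (s[0] / s[2], s[1] / s[2]) for c, s in sums.items()}

def predict_proba(centroids, hr, bp):
    """Turn distances to each class centroid into a rough class-1 score."""
    d = {c: math.dist((hr, bp), m) for c, m in centroids.items()}
    return d[0] / (d[0] + d[1])   # closer to class-1 centroid -> higher score

model = fit_centroids(labeled_data)
print(round(predict_proba(model, 112, 158), 2))  # near the "at risk" cluster
```

A real diagnostic model would be trained on far more data and validated against physician performance, as the article describes, but the shape of the pipeline is the same: labeled examples in, a fitted model, a probability out.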

Fig: Artificial intelligence algorithm. The left panel shows the image fed into the algorithm. The right panel shows a region of potentially dangerous cells, as identified by the algorithm, that a physician should examine more closely.

Fig: Artificial intelligence algorithm, deep learning method. The left panel shows the original X-ray. The right panel shows the X-ray with orange coloring indicating signs of pneumothorax that could go unnoticed by radiologists.

The second algorithm, called Lymph Node Assistant, was developed by researchers at Google AI Healthcare. It analyzed stained tissue samples on histology slides to identify metastatic breast cancer tumors in lymph node biopsies, picking out suspicious regions of the sample that could not be distinguished by the human eye. It was shown to correctly classify samples as cancerous or non-cancerous in 99% of cases. Algorithms like these could help doctors reach correct diagnoses, freeing them to invest time in cases that computers cannot solve [2].

Fig: AI algorithm; Lymph node biopsy

Artificial intelligence could be considered a boon, as it may enable early diagnosis of diseases whose late diagnosis delays treatment and may harm the patient. For example, researchers have claimed that it could be used to diagnose Alzheimer’s disease years before symptoms appear. Computers can be trained on brain scans to spot subtle signs of dementia that humans might miss, allowing early diagnosis. This could be done using 18F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood; PET scans then measure the uptake of FDG in brain cells, an indicator of metabolic activity. Through deep learning, the algorithm can teach itself the metabolic patterns that correspond to Alzheimer’s disease. If symptoms can be detected earlier, investigators will be better positioned to find ways to slow or halt the disease process. According to UCSF’s Youngho Seo, Ph.D., future research should consider training the deep learning algorithm to look for patterns associated with the accumulation of beta-amyloid and tau proteins, the abnormal protein clumps and tangles in the brain that are markers specific to Alzheimer’s disease, which could add another dimension to using artificial intelligence in Alzheimer’s detection [3].

Fig: Fluorine 18 fluorodeoxyglucose PET images from the Alzheimer’s Disease Neuroimaging Initiative set, preprocessed with the grid method, for an Alzheimer’s disease patient.

Artificial intelligence has many clinical applications that improve patient care and potentially save lives. Maintaining medical records and patient histories is the first step in health care, where robots collect, store, reformat, and trace data to provide faster and more consistent access. They also analyze data, including notes and reports from a patient’s file and clinical expertise, to help choose the right treatment pathway [5]. Several recent health care tools are built on artificial intelligence algorithms.

MelaFind is a non-invasive tool that gives dermatologists extra information for the early detection, recognition, and examination of skin cancers and lesions, evaluating lesions up to 2.5 mm beneath the skin. Using AI-based algorithms, dermatologists can analyze irregular moles and diagnose serious skin cancers such as melanoma. The device demonstrated 98.3% sensitivity, correctly identifying 172 of 175 melanomas and high-grade lesions.

Robotic-assisted therapy is used for neurological patients, especially for stroke recovery. The robotic arm and hand use digital algorithms to detect motions that patients cannot execute during therapy, improving their performance per hour over working with a physical therapist alone and allowing speedier recovery [4]. Robots can also perform tests, X-rays, CT scans, data entry, and other tasks faster and more accurately. Cardiology and radiology are two fields where the amount of data to analyze is huge and time-consuming; in the future, cardiologists and radiologists may need to look only at the most critical cases, where human judgment is essential [5].

Caption Guidance is an AI-guided ultrasound platform capable of instructing clinicians on obtaining a clearer picture of the heart in motion. It can be used to capture echocardiographic images of a patient’s heart without special training, spot high-quality 2D heart images, and automatically record video clips for later analysis while calculating measures of heart function, improving the diagnosis of heart disease [4].

Fig: AI tools in health care

Conclusion: Artificial intelligence will surely improve the healthcare industry, from predictive medical care and more accurate diagnosis to motivating patients to take care of their health. It will certainly continue enhancing the patient experience and healthcare expertise in general. The use of artificial intelligence is predicted to decrease medical costs through more accurate diagnosis, better treatment predictions, and more prevention of disease. It will not replace healthcare workers but instead allow them to spend more time at the bedside of their patients, resulting in better outcomes for all.


[1] Chan, Y., Chen, Y., Pham, T., Chang, W., & Hsieh, M. (2018, July 15). Artificial Intelligence in Medical Applications. Retrieved September 13, 2020, from

[2] Artificial Intelligence in Medicine: Applications, Implications, and Limitations. (2019, June 19). Retrieved September 13, 2020, from

[3] Staff, S. (2018, November 06). Artificial Intelligence Predicts Alzheimer’s Years Before Diagnosis. Retrieved September 13, 2020, from

[4] Swetha. (2019, November 28). 10 Common Applications of Artificial Intelligence in Health Care. Retrieved September 13, 2020, from

[5] Castelo, M. (2019, May 01). The Future of Artificial Intelligence in Healthcare. Retrieved September 13, 2020, from

Pratiksha Baliga, Youth Medical Journal 2020

Biomedical Research

Reconstructive Plastic Surgery: An Overview and Technology’s Role


Plastic surgery is often associated with cosmetic procedures, relating to the elective enhancement or change of one’s body or facial features. In the modern age, this association has been popularized by social media influencers, and plastic surgery has taken on a reputation of being non-vital and less important than fields like neurosurgery or cardiothoracic surgery. However, plastic surgery is not limited to cosmetic work; reconstructive surgery is another vital segment.

Reconstructive plastic surgery, on the other hand, is not cosmetic. It is performed on patients suffering from conditions and complications such as cancer, trauma, and deformities, with the aim of normalizing their appearance or improving bodily function. The field is extremely diverse and rich in new innovations, and technology such as augmented reality and three-dimensional imaging has become an important and useful part of it [1,2].


A Japanese research group created an augmented reality system that allows surgeons to overlay a three-dimensional rendering of the planned result onto the patient. It uses smart glasses, which project the image over a patient’s face during the procedure. The system was designed to be simple and efficient, guiding surgeons and ideally minimizing mistakes [3,4].

More recently, researchers at the University of Michigan created a device that can aid surgeons in complex reconstructive microsurgery cases. Known as the “arterial everter,” this pen-like device makes the process of connecting arteries simpler and more efficient. The time it saves benefits not only the patient but also doctors and the healthcare system. For example, if a patient’s arm is severely severed in an accident, this device could be extremely beneficial to the operating surgeons [5].

At UC Davis, plastic surgeon Dr. Michael Wong has tested the benefits of using a high-tech camera in certain surgical cases, such as breast reconstructions. The device generates three-dimensional images that allow patients to see what their end result might look like. The camera is large, with multiple arms and lenses that take pictures from different angles, and computer software combines these into a 3-D rendering. This technology not only leaves patients more satisfied but also allows surgeons to improve the quality of their work [6].


The youngest patient to undergo a facial transplant, Katie Stubblefield, had suffered a gunshot wound that destroyed the majority of her face and caused severe brain injury. After 31 hours, a team of 40 surgeons, and two operating rooms, the Cleveland Clinic completed Katie’s surgery, the most extensive face transplant performed to date. The procedure required the replacement of all her facial tissue, including the scalp, eye sockets, facial muscles, and other components. The primary plastic surgeon on the case, Dr. Brian Gastman, MD, detailed how complex these facial procedures are: the facial tissues from the donor had to be cut so as to retain function when transferred to the patient. Augmented reality then showed her surgeons where to cut and improved the placement, which comes with many benefits. According to Dr. Gastman, this technique allowed doctors to form ideas in advance and do more extensive modeling.

Overall, technology in reconstructive plastic surgery has proven to be extremely beneficial: it allows for fewer mistakes and better results, leaves patients more likely to be satisfied, and can even serve as a better teaching tool.


  1. Cosmetic vs Reconstructive Surgery. UF Health Plastic Surgery and Aesthetics Center, University of Florida. (n.d.). Retrieved September 15, 2020, from
  2. Plastic and Reconstructive Surgery. (2019, December 28). Retrieved September 15, 2020, from
  3. Augmented Reality Technology May Help Guide Facial Reconstructive Surgery. (2017, November 28). Retrieved September 02, 2020, from
  4. How Is Technology Transforming Facial Reconstruction Surgery? Medical Technology, Issue 25. (2020, March 06). Retrieved September 02, 2020, from
  5. U-M Signs Agreement for Reconstructive Surgery Technology. (n.d.). Retrieved September 15, 2020, from
  6. UC Davis Plastic Surgeon Testing High-Tech Camera. UC Davis Health. (n.d.). Retrieved September 15, 2020, from
  7. Kim, Y., Kim, H., & Kim, Y. (2017, May). Virtual Reality and Augmented Reality in Plastic Surgery: A Review. Retrieved September 15, 2020, from

Biomedical Research

Lab Grown Mini Brains and the Future of Neurology


Scientists growing human organs in the lab is not an entirely new concept; in fact, researchers have been doing so for well over a decade, producing human organs ranging from kidneys to skin. Now they have grown a mini-brain with neural activity that mirrors that of a preterm infant. This is an enormous stride forward, as earlier work was unable to demonstrate brain activity resembling how the brain actually functions.

This research was largely founded on neural oscillations, rhythmic brain signals found across species. Neural oscillations arise from cellular networks that eventually develop into circuits in the human brain during maturation. While this eventual development into circuits is known, it is unclear precisely when these networks form. Recent studies in mice have demonstrated that oscillations develop immediately after birth. However, because there previously were no adequate laboratory models of the human brain, it was unclear whether the same course of events occurs in human brains.


The team at the University of California, San Diego first differentiated human induced pluripotent stem cells (iPSCs), cells that can self-renew, into neurons of the type found in the cortex of the brain, which is responsible for controlling thought and behavior. The researchers opted to use iPSCs because recent studies have shown they can mimic various developmental features and the cellular and molecular processes of the human brain. To grow the pluripotent stem cells successfully, the researchers created a solution containing a mixture of transcription factors that regulate fetal development. Getting the mixture right was pivotal, as it allowed the organoids to last a long time; many were still usable a year after the research was completed. These conditions allowed the UCSD researchers to mimic neural oscillations and electrophysiological network activity.

This process differed from earlier attempts to create mini-brains in that the team created optimal conditions for mini-brain development at every stage. For example, unlike standard protocols, which start with a clump of cells, the team built each organoid from a single cell. They also changed the timing and concentrations of certain components of the mixture. While the process was meticulous, it certainly paid off.

Fig 1: The mini-brains created by the team at UCSD


The results of this study revealed that the mini-brains produced neural activity mirroring that of a preterm infant; thus, mini-brains have the potential to serve as laboratory models for studying psychiatric conditions. The iPSCs used in the study, like most stem cells, can be differentiated into any cell type; in this case, they were directed to specialize into neurons and glia.

Because the team built each organoid from scratch in this unique way, they were able to produce a particular neuron that had never before been created in a laboratory setting: the GABAergic neuron. Another payoff of the team’s diligent process appeared when the electrical activity of the mini-brains was measured with an electroencephalogram (EEG). The test revealed that mini-brains grown under the new protocol produced 300,000 electrical impulse spikes per minute, compared with a mere 3,000 spikes per minute from mini-brains grown under the old process.

These results are what led the UCSD team to compare their mini-brains with the electrical patterns of newborn baby brains. The neural oscillations discussed in the introduction change with age. Newborn brains tend to be the least active in oscillations, with almost no waves between spikes of electrical activity. As we get older, these quiet periods shorten, and activity eventually becomes constant. Besides age, oscillation patterns can also be affected by cognitive skills and various diseases.

The team compared their mini-brains with a previously published dataset of 567 EEG recordings from 39 babies born prematurely (between 24 and 38 weeks’ gestation). This cross-analysis showed that the organoids displayed similar EEG patterns for up to 9 months after being developed.


The successful development of a functional mini-brain has far-reaching applications, as it broadens the range of neurological conditions that can be adequately studied. Further, in most psychiatric conditions the neuronal circuitry is impaired; mini-brains would allow for a better understanding of diseases such as autism and epilepsy. Prior to this study, there were no adequate laboratory models for studying certain neurological diseases. For example, Alysson R. Muotri, Ph.D., one of the study’s lead scientists, reported sending the brain organoids to the International Space Station to determine the effect microgravity has on brain development. The results of that experiment could potentially inform the prospects for human life beyond Earth.

While developing mini-brains further holds the potential to change the way we study neurological diseases, this study walks a fine line between science and ethics. Critics of this study and other members of the scientific community ask, “Are we getting too close to re-creating the human brain?” Muotri understands these concerns and responds that the mini-brains developed in the lab are far from functional adult human brains: they are not only much smaller than fully developed brains but also lack hemispheres and blood vessels. As Muotri put it, “They are far from being functionally equivalent to a full cortex, even in a baby… In fact, we don’t yet have a way to even measure consciousness or sentience…”. Science may be a long way from creating a fully functional, developed brain, but as the medical landscape changes faster than ever, ethical dilemmas like this one cannot be ignored.


1. “Lab-Grown ‘Mini Brains’ Can Now Mimic the Neural Activity of a Preterm Infant.” Scientific American, 30 Jan 2020.

2. “Machine Learning Algorithm Can’t Distinguish These Lab Mini-Brains from Preemie Babies.” UC San Diego Health, 29 Aug 2019.

3. “‘Mini Brains’ Are Not like the Real Thing.” Scientific American, 30 Jan 2020.

4. Trujillo, Cleber A., Richard Gao, Priscilla D. Negraes, Gene W. Yeo, Bradley Voytek, and Alysson R. Muotri. 2019. “Complex Oscillatory Waves Emerging from Cortical Organoids Model Early Human Brain Network Development.” Cell Press, Vol. 25: 1–12.

Biomedical Research

CRISPR/Cas9: A Dangerous Breakthrough in Medicine

Introduction: What is CRISPR/Cas9 and How Does it Work?

Clustered regularly interspaced short palindromic repeats, commonly known as CRISPR, is a simple yet formidable tool that allows researchers to effectively edit genomes. Potential applications include the treatment and prevention of diseases, the amelioration of genetic defects, and the removal of certain gene types. The CRISPR/Cas9 technology can be deconstructed into four components (CRISPR, spacer, crRNA, and Cas9), each serving its own purpose. A CRISPR locus is a distinctive sequence of deoxyribonucleic acid (DNA) with two attributes: recurrent nucleotide progressions and spacers, short variable sequences interspersed between the repeats [1]. These spacers are derived from the DNA of viruses that have previously attacked the host [2]. Once the CRISPR array has been fashioned and the virus attacks again, the CRISPR is transcribed and processed into crRNA. The crRNA integrates with a secondary RNA strand, the trans-activating crRNA, to guide the Cas9 enzyme to its target site, where Cas9 executes what is known as a double-stranded break to remove the superfluous module of the genome sequence. One reason CRISPR/Cas9 is so attractive to researchers and consumers alike, compared with other genome editing techniques, is its meticulous accuracy and efficiency. The rationale is thought to lie in the technology’s protospacer adjacent motifs, or PAMs. A PAM serves as a tag sitting adjacent to the Cas9 enzyme’s target site; should there be no PAM next to the target site, the Cas9 enzyme will avoid “cutting” that genome sequence [1]. Once the cut has been finalized, programmed DNA may take its place.
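As a rough illustration of the PAM requirement described above, the sketch below scans a DNA string for 20-nucleotide stretches sitting immediately upstream of an NGG motif, the PAM recognized by the commonly used SpCas9 enzyme. The sequence and the function name are invented for this example; real guide design involves many more constraints.

```python
# Minimal sketch: find 20-nt candidate target sites that sit immediately
# upstream of an "NGG" PAM on one strand. Only sites next to a PAM are
# candidates for cutting, mirroring how Cas9 ignores PAM-less matches.

def find_pam_adjacent_targets(dna, target_len=20):
    """Return (target, pam, position) triples for every NGG PAM found."""
    hits = []
    for i in range(target_len, len(dna) - 2):
        pam = dna[i:i + 3]
        # "N" means any base, so only the second and third bases must be G
        if pam[1:] == "GG":
            hits.append((dna[i - target_len:i], pam, i))
    return hits

# Made-up sequence for demonstration
seq = "ATGCGATACGCTTGAGCTAGCTAGGATTACAGCATCGGCTA"
for target, pam, pos in find_pam_adjacent_targets(seq):
    print(f"target={target} pam={pam} at position {pos}")
```

A sequence with a perfect 20-nucleotide match but no adjacent NGG would simply never appear in the output, which is the software analogue of Cas9 declining to cut where no PAM is present.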

Why CRISPR: A Deeper Understanding

Over the past decade, four major classes of engineered nucleases have been used for genome editing: meganucleases, zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and CRISPR [6]. Meganucleases are endonucleases that can recognize extended DNA progressions of 14-40 bp via extensive non-modular protein-DNA connectors, but their target specificities have proven difficult to re-engineer [6]. Both ZFNs and TALENs are “fusions between arrays of ZF or TALE DNA-binding domains and the non-specific, dimerization-dependent FokI nuclease domain” [6]. Put simply, these three classes of nucleases all rely exclusively on protein-DNA connections to recognize their target sites.

In contrast, the CRISPR/Cas9 endonuclease relies on RNA strands to guide the Cas9 protein to its target site. This allows researchers to easily modify and reprogram gene sequences. Additionally, because the Cas9 enzyme is complexed with a secondary RNA strand, it can be re-engineered to recognize a target site based on a protospacer and a PAM progression [5]. The PAM progression serves as a mediator, ensuring the Cas9 enzyme does not cleave a necessary strand of the DNA sequence. Thus, engineers can facilitate gene modification simply by editing the composition of the RNA strand conjoined with the Cas9 enzyme. They need not worry about accidentally fabricating unwanted modifications to the RNA progression, because the Cas9 enzyme will not act where a PAM is not present. These distinctive aspects of the CRISPR/Cas9 technology promote accuracy by minimizing off-site targeting by the Cas9 enzyme, leaving little to no room for error. Despite the expeditious advances in the field of genetic engineering and modification, bioethical questions are still raised regarding human trials.

Bioethical Concerns

Currently, the majority of genome editing technologies are being tested in the interest of treating and preventing human disease. Genome editing shows considerable promise in treating and preventing both single-gene diseases, such as muscular dystrophy and sickle cell disease, and more complex disorders, such as cancers and human immunodeficiency virus (HIV) [4]. A significant amount of genome editing research directly affects somatic cells (cells that are not egg or sperm cells). However, at the alarming pace this field is evolving, there is little doubt that technology to edit germline cells (egg and sperm cells) will soon be readily available, and this is where bioethical challenges arise. The ability to edit germline cells would enable consumers to enhance normal human characteristics, such as height or intelligence, and to edit their offspring’s appearance [4].

The second major dilemma brought into question is human autonomy. When germline editing becomes readily available, it will allow consumers to change normal human attributes in their offspring, a decision made by parents without the consent of the children it affects. Further, genetic editing will allow consumers to model the perfect human, which in turn will infringe on a diverse number of societal rights [7]. These include, but are not limited to, the right to morality, the right of doctors to advise patients, the right of scientists to conduct research, and the right children should have to consent, or not, to decisions their parents make that affect their livelihoods.

Based on these ethical challenges, many countries have made it illegal to further study genome editing, specifically germline and embryo genome editing. However, many other countries persist in advancing research and trials in the field.


Genome editing presents an attractive approach and considerable promise for the treatment and prevention of human disease. Of the four common classes of engineered nucleases used in the past decade, CRISPR/Cas9 has shown the most promise regarding accuracy. The distinct features that make it a favorable tool are its negligible margin of error in targeting, thanks to PAMs that suppress off-site DNA cuts; its reliance on intertwined Cas9 enzymes and RNA progressions, which allows for efficient reprogramming; and its unique PAM system, which enables engineers to directly trim an RNA sequence without introducing complications. However, amid the steady progress of the genetic engineering field and its universal appeal to researchers, the bioethical questions that have arisen deserve further analysis before further studies and trials proceed and, inevitably, this technology is introduced to the healthcare market.


  1. E. P. (2014, July 31). CRISPR: A Game-Changing Genetic Engineering Technique. Retrieved August 29, 2020, from
  2. Barrangou, R., Fremaux, C., Deveau, H., Richards, M., Boyaval, P., Moineau, S., Romero, D. A., and Horvath, P. (2007). CRISPR provides acquired resistance against viruses in prokaryotes. Science 315, 1709–1712.
  3. Doudna, J. A., & Charpentier, E. (2014). The new frontier of genome engineering with CRISPR-Cas9. Science, 346(6213), 1258096. doi:10.1126/science.1258096
  4. Jiang, W., Bikard, D., Cox, D. et al. RNA-guided editing of bacterial genomes using CRISPR-Cas systems. Nat Biotechnol 31, 233–239 (2013).
  5. What are genome editing and CRISPR-Cas9? Genetics Home Reference, NIH. (n.d.). Retrieved August 29, 2020, from
  6. Tsai, S., & Joung, K. (2020). Defining and improving the genome-wide specificities of CRISPR-Cas9 nucleases. Nat Rev Genet, 1–25. doi:10.1038/nrg.2016.28.

Sabriyah Morshed, Youth Medical Journal, 2020

Biomedical Research

Can Canines Use Their Sense of Smell to Identify Cancer?

Although cancer is a widespread disease that is vigorously studied and researched, we have not yet developed a cure. Most health officials agree that the best way to stop cancer in a patient is to catch it early. While detecting cancer early might sound like an easy task, it can be difficult: whether because of steep prices or the sparse locations of screenings, tests can be hard to access. But researchers across the globe have been striving for a better solution. One study by researchers at BioScentDx shows that dogs can accurately detect early-onset cancer through smell [1]. The researchers used a form of clicker training to teach four dogs to distinguish between normal blood serum and samples from patients with malignant lung cancer. After many trials and tests, the researchers concluded that the dogs identified the lung cancer samples with 96.7 percent accuracy and the normal samples with 97.5 percent accuracy.
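In screening terms, accuracy on the cancer samples is the dogs’ sensitivity and accuracy on the normal samples is their specificity. The short sketch below shows how such percentages are computed; the sample counts are hypothetical values chosen only because they reproduce the reported figures, since the study’s actual counts are not given here.

```python
# Sensitivity: fraction of true disease cases correctly flagged.
# Specificity: fraction of healthy cases correctly passed over.

def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 29 of 30 cancer samples flagged,
# 39 of 40 normal samples correctly ignored.
print(f"sensitivity = {sensitivity(29, 1):.1%}")   # 96.7%
print(f"specificity = {specificity(39, 1):.1%}")   # 97.5%
```

Reporting both numbers matters for a screening tool: a test can score high on one metric while being useless on the other, so the dogs doing well on both is what makes the result notable.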

But how do the dogs do this? Researchers from many institutions, including Stanford Medicine and BioScentDx, attribute it to canines’ highly sensitive smell receptors. Dogs’ smell receptors are roughly 10,000 times more sensitive than humans’, allowing them to smell and identify many more biological compounds, including those associated with cancers. Other researchers point to dogs’ olfactory ability to detect very low concentrations of alkanes and aromatic compounds generated by malignant tumors in human urine or breath. The idea that dogs could detect cancer dates back to 1989, when a report was published in The Lancet medical journal.

If the research is confirmed by secondary trials and dogs can indeed detect cancer, it could be very exciting for the medical community and for cancer research. BioScentDx has said it would pave the way for further research along two paths, both of which could lead to new cancer-detection tools: one is using canine scent detection as a screening method for cancers, and the other is determining the reach of canines’ senses. The company plans to use canine scent detection to develop a non-invasive way of screening for cancer that would be less expensive and more accessible.

Overall, the use of cancer-sniffing dogs in the medical and health science fields could be a huge development for the livelihood of many patients. If canine screening were already in action, we could estimate that nearly one in eight people with cancer would have been screened earlier, according to the Centers for Disease Control and Prevention, potentially saving hundreds of thousands of lives. In the near future, many companies plan to further test canines’ ability to sniff out cancer and other diseases. And if it works, dogs could change the world for the betterment of society.


Price, B. (2020). Canine’s Sense of Smell. Retrieved 2020, from

Mosbergen, D. (2015, September 07). ‘Groundbreaking’ Trial Will Test Cancer-Sniffing Dogs. Retrieved September 02, 2020, from

National Geographic. (2018). Dogs in Health Care. Retrieved 2020, from

USA Today. (2019). Cancer Sniffing Animals. Retrieved September 02, 2020, from