Biomedical Research Health and Disease

Why should we not underestimate the role of epigenetics in treating cancer?

Over the last two centuries, we have witnessed marvellous breakthroughs in genetics. From the work of Charles Darwin to that of Gregor Mendel, there is no doubt that these theories have moulded our understanding of genetics today. However, a recently emerging area of scientific research could add to this understanding. Epigenetics is the study of how our behaviour and environment can change the way genes work. Epigenetic changes do not alter the DNA sequence itself; instead, they affect how the body reads that sequence.

In the early 19th century, the French scientist Lamarck argued that characteristics acquired during an organism’s lifetime could be passed on to its offspring. However, he believed this was the sole basis of inheritance, which we now know is not the case. Darwin, in his hypothesis of pangenesis, similarly suggested that lifetime experiences could lead to the formation of ‘gemmules’ which attached themselves to egg and sperm, thereby affecting offspring.

How do epigenetic mechanisms work?

Epigenetic changes affect how genes are expressed. There are various epigenetic mechanisms which can occur in our bodies. DNA methylation and histone modification are examples of these mechanisms. 

DNA methylation is the addition of a methyl group to the fifth carbon of cytosine residues, catalysed by DNA methyltransferases; this forms 5-methylcytosine. A cytosine linked by a phosphate to an adjacent guanine nucleotide is known as a CpG dinucleotide. The methyl group is obtained from the methyl donor S-adenosylmethionine (SAM), levels of which depend on the intake of vitamin B12, vitamin B6 and folic acid. The methylation of cytosine residues to form 5-methylcytosine significantly influences cell differentiation, and methylation of CpGs in a gene’s promoter region is associated with gene repression: methylation is known to turn genes ‘off’.
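Because CpG dinucleotides cluster into the ‘CpG islands’ that matter so much in methylation biology, it may help to see how such islands are detected computationally. Below is a minimal Python sketch of the classic Gardiner-Garden and Frommer criteria (GC content above 50% and an observed-to-expected CpG ratio above 0.6); the toy sequence and the function name are purely illustrative, not taken from any cited study.

```python
def cpg_stats(seq):
    """GC fraction and observed/expected CpG ratio for a DNA window."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    observed_cpg = seq.count("CG")
    # Expected CpG count if C and G occurred independently of each other
    expected_cpg = (c * g) / n if c and g else 1
    return (c + g) / n, observed_cpg / expected_cpg

# A CpG-rich window easily passes both thresholds (GC > 0.5, ratio > 0.6),
# whereas bulk genomic DNA is typically CpG-depleted and fails.
gc, ratio = cpg_stats("CG" * 30 + "AT" * 20)
print(gc, ratio)  # 0.6 and ~3.3
```

Real island finders additionally require a minimum window length (commonly 200 bp) and slide the window along the genome, but the two ratios above are the core of the test.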

Like DNA methylation, histone modification does not alter the DNA sequence; rather, it modifies the DNA’s availability to the transcriptional machinery. Chromatin consists of DNA wound around histones. A well-known example of histone modification is the acetylation of lysine residues in the histone tail. Acetylation neutralises the positive charge of lysine, which weakens the bond between the DNA and the histone tails and makes the DNA more accessible to transcription factors.

Causes behind epigenetic marks

Epigenetic marks can be affected by exposure to various metals. Experimental analyses have shown DNA methylation changes after exposure to arsenic, which can be found in rocks, soil and insecticides. Another metal shown to cause epigenetic alterations is cadmium. Cadmium toxicity can cause epigenetic alterations during embryonic development: a set of genes responsible for transcription regulation has shown changes in DNA methylation associated with cadmium concentrations in pregnant women. Cadmium can be found in soil and contaminated water, and enters the body through the diet, for example via cereals and vegetables, as well as through smoking.

Furthermore, air pollution can affect the epigenome. Exposure to atmospheric pollutants can lead to changes in the DNA methylation of immunity and inflammation genes, which have been associated with reduced lung function and an increased risk of lung cancer. Benzene is also associated with changes in DNA methylation: low-level benzene exposure has been linked to decreased methylation of the genes LINE-1 and MAGE-1 in blood, which could increase the risk of developing acute myelogenous leukaemia.

Diet can also influence epigenetic mechanisms; a reduction in calorie intake might attenuate the epigenetic changes that occur during ageing. Smoking, too, results in epigenetic changes: at specific parts of the AHRR gene, smokers typically have less DNA methylation than non-smokers, and after quitting, a former smoker’s methylation at this gene tends to increase.

Epigenetic marks: a cause behind cancer

The first human disease to be linked to epigenetics was cancer. Researchers found that diseased tissue from colorectal cancers had less DNA methylation overall than normal tissue. In normal cells, clusters of CpGs known as CpG islands are normally free of methylation; in cancer cells, however, these CpG islands are excessively methylated, turning off genes that should not be silenced. This typically occurs in the early stages of cancer.

Excess methylation of the promoter of the DNA repair gene MLH1 causes microsatellites (short, repeated sequences of DNA) to become unstable, shortening or lengthening them. This instability has been linked to many cancers, including gastric, endometrial and colorectal cancers.
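In practice, microsatellite instability is detected by comparing the length of a repeat tract in tumour DNA against normal DNA from the same patient. The Python sketch below is a hypothetical illustration of that comparison; the sequences and the CA-repeat marker are invented for the example rather than taken from MLH1 or any real assay.

```python
import re

def repeat_units(seq, unit="CA"):
    """Longest uninterrupted run of `unit` in `seq`, measured in repeat units."""
    runs = re.findall(f"(?:{unit})+", seq)
    return max((len(r) // len(unit) for r in runs), default=0)

normal = "GGT" + "CA" * 12 + "TTA"  # hypothetical normal allele: 12 CA repeats
tumour = "GGT" + "CA" * 9 + "TTA"   # unstable allele: the repeat has contracted
print(repeat_units(normal), repeat_units(tumour))  # 12 9
```

A mismatch between the two lengths, at several such markers, is what classifies a tumour as microsatellite-unstable.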

Epigenetic Treatments

At present, two classes of epigenetic drugs have been approved by the FDA: DNA methylation inhibitors and histone deacetylase inhibitors. The first approved drug was 5-azacitidine.

5-Azacitidine is an analog of cytidine with a nitrogen atom in place of the fifth carbon of the ring. Cytidine can be incorporated into DNA and RNA, and because of 5-azacitidine’s similarity to cytidine, both compounds are recognised by DNA and RNA polymerases; the drug is therefore incorporated into DNA during replication. DNA methyltransferase recognises the drug and attempts to transfer a methyl group as usual, but because nitrogen occupies the fifth position, the enzyme becomes permanently bound to 5-azacitidine. The trapped DNA methyltransferase is then degraded, leading to a reduction in methylation. The drug showed a high level of toxicity when tested in mice, so it is now given in low but repeated doses, allowing the epigenetic effects to occur without a high level of cytotoxicity.



RG108 is a non-nucleoside analog that specifically targets DNA methyltransferases. It interacts with the catalytic domain (the region of an enzyme that binds its substrate and carries out the reaction) and blocks the active site, with a low level of cytotoxicity. Unlike nucleoside analogs such as 5-azacitidine, non-nucleoside analogs are not incorporated into DNA and therefore cause far less toxicity.



In conclusion, epigenetics has significantly added to our understanding of how environmental influences can affect whether and how genes are expressed. Epigenetic drugs have great potential to be effective against a number of cancers by reversing epigenetic mechanisms. As the field continues to grow, it will enable scientists to develop more targeted drugs against cancer.


·      BMJ, 2016. Epigenetics for dummies. [Online; accessed 15 February 2022].

·      Toraño, E., García, M., Fernández-Morera, J., Niño-García, P. and Fernández, A., 2016. The Impact of External Factors on the Epigenome: In Utero and over Lifetime. BioMed Research International, 2016, pp.1-17.

·      Hamilton, J., 2011. Epigenetics: Principles and Practice. Digestive Diseases, 29(2), pp.130-135.

·      Centers for Disease Control and Prevention, n.d. What is epigenetics? [Online; accessed 19 February 2022].

·      2008. Epigenetic Influences and Disease. Learn Science at Scitable. [Online; accessed 19 February 2022].

·      Lanata, C., Chung, S. and Criswell, L., 2018. DNA methylation 101: what is important to know about DNA methylation and its role in SLE risk and disease heterogeneity. Lupus Science & Medicine, 5(1), p.e000285.

·      Heerboth, S., Lapinska, K., Snyder, N., Leary, M., Rollinson, S. and Sarkar, S., 2014. Use of Epigenetic Drugs in Disease: An Overview. Genetics & Epigenetics, 6, p.GEG.S12270.

·      Ganesan, A., Arimondo, P., Rots, M., Jeronimo, C. and Berdasco, M., 2019. The timeline of epigenetic drug discovery: from reality to dreams. Clinical Epigenetics, 11(1).

·      ScienceDirect, 2016. N-Phthaloyltryptophan. [Online; accessed 19 February 2022].

·      Zheng, Z., Zeng, S., Liu, C., Li, W., Zhao, L., Cai, C., Nie, G. and He, Y., 2021. The DNA methylation inhibitor RG108 protects against noise-induced hearing loss. Cell Biology and Toxicology, 37(5), pp.751-771.

Picture Credits:

·      n.d. Azacitidine. [Image; accessed 27 February 2022].

·      n.d. RG108. [Image; accessed 27 February 2022].

Health and Disease

Impact of Obstetrics and Gynecology upon Modern Society

Obstetrics and gynecology are medical specialties that make an enormous impact on society. They are responsible for the safe continuation of our species; without the specialty’s advancements, maternal and infant mortality rates would still be astronomical, and populations would have struggled through periods when deaths peaked, such as WW1 and WW2. Before antibiotics, blood transfusions, and cesareans, giving birth was frequently fatal. Teenage girls gave birth with no anesthesia; in Victorian England, it is estimated that the maternal mortality rate (MMR), a ratio comparing maternal deaths against births, was 10.5 per 1,000 births, meaning roughly 1 in every 100 births ended in the mother’s death. Steadily, scientific and technological advancements have revolutionized childbirth. Women benefit from a variety of drugs, from gas and air to a full spinal block, and from surgery, improving maternal and infant safety. From the beginning of pregnancy, ultrasounds allow obstetricians to monitor the baby. Inducing labor and dilating the cervix prevent the fetus from getting stuck in the birth canal and becoming distressed. During birth, the fetal heart monitor is vital in guiding obstetricians as to how quickly the baby should be delivered. Emergency cesareans are available, neonatal intensive care can resuscitate newborns, and ventilators can help babies breathe, even if they are weeks premature.
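To make the scale of that figure concrete, a rate quoted per 1,000 births converts to a percentage by dividing by ten; the quick Python check below uses the estimated Victorian-era figure of 10.5 quoted above.

```python
# A mortality rate per 1,000 births converts to a percentage by dividing by 10.
mmr_per_1000 = 10.5            # estimated Victorian-era figure quoted above
percent = mmr_per_1000 / 10
print(percent)                 # 1.05 — roughly 1 in every 100 births
```

So an MMR of 10.5 per 1,000 corresponds to about 1% of births, not 10%, which is still a staggering toll by modern standards.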

Despite these advancements, birth remains hazardous, and obstetricians often have to step into perilous situations. A plethora of complications can occur, from the umbilical cord wrapping around the baby’s neck to the mother going into shock, and any negligence can cause serious harm. It is crucial to realize that birth is not safe for all. Difficult births can contribute to postnatal depression, impacting a mother’s ability to bond with her child. Obstetrics plays a major role in a special but terrifying time in women’s and children’s lives; undoubtedly, the field impacts society immeasurably.

Unfortunately, there is a correlation between increased mortality rates and living in a low-income country; scarce resources and unsterile conditions are detrimental to pregnancies in less economically developed areas. Countries such as Syria and Iraq are on high alert for maternal mortality, with estimated maternal mortality ratios on the order of tens of deaths per 100,000 births, many times higher than in most high-income countries. It is imperative that we divert attention towards obstetrics and gynecology and recognize its colossal impact on society in order to improve these worrying statistics. Awarding more funding to this crucial field could send teams to struggling countries to focus on infection control, bleeding management, and education on pregnancy complications, the principal causes of mortality.

Furthermore, gynecology has an invaluable impact on society because of the contraceptive options it offers women, securing reproductive rights and preventing a patriarchal system in which the threat of pregnancy denies women freedom over their sexuality. Without the mini pill, the IUD, and access to abortion, women would be unable to engage in intercourse without the threat of unwanted pregnancy. Since abortion was legalized, fewer women have undertaken risky procedures: a rising number are choosing to have terminations safely rather than resorting to the underground “backstreet” clinics that were historically responsible for the deaths of many vulnerable young women desperate to end a pregnancy. By providing contraception such as condoms, gynecology has also reduced the spread of sexually transmitted diseases such as syphilis, gonorrhea, and HIV. If more funding were offered, specialist gynecologists could prioritize at-risk areas, offering education and helping to reduce the transmission of HIV/AIDS, especially in less economically developed countries where family planning facilities and access to contraception are still worryingly limited.

Evidently, obstetrics and gynecology have a prodigious impact on society. Not only do they reduce deaths and support the human race, but they also offer options to women and actively reduce sexually transmitted diseases, a pressing issue affecting our liberal society. Without the care this specialty offers women, birth would still be extremely perilous.


G. Chamberlain, “British maternal mortality in the 19th and early 20th centuries”, J R Soc Med. 2006 Nov; 99(11): 559–563;

GOV UK, “Abortion statistics for England and Wales: 2019”, 2020 Jun;

KidsHealth, “When your baby’s in the NICU”, 2019 Jan;

NHS, “Ultrasound Scan”, 2018 May;

Office for National Statistics, “Trends in births and deaths over the last century”, 2015 Jul;

Stanford Children’s Health, “Fetal Heart Monitoring”, (unknown date);

World Health Organisation, “Maternal mortality”, 2019 Sep;

Biomedical Research Health and Disease

Influence of Technology in Dentistry

Technology is already changing many sectors of society, so the development of new devices and systems to benefit the whole healthcare system, including dentistry, is inevitable. New systems that benefit both patients and dentists will ultimately lead to better patient-centred care. This article explores some of these new technologies and how they will impact the field.

One technology that has been established in wider society for some time but has only recently begun to be used in dentistry is Augmented Reality (AR) and Virtual Reality (VR). Augmented Reality uses visual elements to create an enhanced version of the real physical world, analysing the scene in front of the viewer and adding digital overlays. This has excellent uses in dentistry: dental students, for example, can use AR to practice procedures. Rather than relying on mannequin heads, which are not always available, students can improve and develop their manual dexterity skills anywhere. In the dental practice, professionals can create accurate representations of a patient’s teeth on a model and show the patient what their teeth should look like after treatment. This creates greater satisfaction and comfort for the patient about how their teeth can be improved, giving them confidence and making them less fearful of the whole process. It also gives the patient greater awareness of their problem and makes them more likely to undergo treatment without skepticism. Furthermore, multiple AR models with different aesthetics can be created so the patient can clearly choose which treatment they would like, leading to greater patient satisfaction overall.

Similar benefits apply to Virtual Reality. VR involves wearing a headset to immerse oneself in an environment completely different from one’s actual surroundings; this contrasts with AR, where a person views overlays on a screen without the same immersion. For this reason, VR can be used by trainee dentists and dental students to observe a real procedure from an experienced dentist’s perspective and learn how to carry it out; learning from a third-person perspective is far less engaging. Using VR, students can learn much more effectively and practice these skills with greater understanding. Similarly, for patients needing specialist care or patients who are fearful of procedures, VR headsets can make the environment more comforting, which also makes the procedure easier for the dentist to complete.

Furthermore, Artificial Intelligence (AI), already used to analyse data across many aspects of society, presents excellent opportunities in dentistry too. AI algorithms have already been set up to analyse huge amounts of data to find the best treatment options for patients. Health data, research, and treatment techniques can be analysed together to offer diagnostic recommendations for individual patients, and further data collection and analysis, such as of genomic data, would offer a deeper understanding of each person and more personalised care. One drawback is that such huge amounts of data must be handled with care: practices may be susceptible to hacks and leaks that would compromise patient privacy and confidentiality, so large-scale data processing would demand much stronger security systems. Another large-scale application of AI is the smart toothbrush. With our homes already full of smart devices, a smart toothbrush used in conjunction with an app could improve oral health further: a variety of sensors analyse the user’s brushing technique while scanning each area of the mouth, and the user is notified how to improve. With real-time feedback on whether too much pressure is being applied, which areas have been missed, and which technique to use, the user’s oral health would improve greatly over time. Alongside these benefits, the extent of data collection while such systems are in use may put off some consumers.
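As a toy illustration of the real-time feedback loop described above, consider a hypothetical app rule set in Python. The pressure threshold, zone names, and function are all invented for this sketch and are not taken from any real smart-toothbrush product.

```python
def brushing_feedback(pressure_newtons, zones_covered, all_zones):
    """Toy rule set for a hypothetical smart-toothbrush companion app."""
    messages = []
    if pressure_newtons > 2.0:            # illustrative over-pressure threshold
        messages.append("Ease off - you are brushing too hard")
    missed = sorted(set(all_zones) - set(zones_covered))
    if missed:
        messages.append("Revisit missed zones: " + ", ".join(missed))
    return messages

print(brushing_feedback(2.5, ["upper-left", "upper-right"],
                        ["upper-left", "upper-right", "lower-left", "lower-right"]))
```

A real device would feed continuous sensor streams through far more sophisticated models, but the principle of comparing measured behaviour against targets and returning prompts is the same.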

In addition, multiple technological advancements, including in robotics, allow for better treatment and follow the philosophy of patient-centred care more closely. For example, intra-oral cameras are now used by dentists to view harder-to-see areas of the mouth in greater detail; during complex procedures, the site must be inspected clearly, and intra-oral cameras ensure no abnormalities are missed. Similarly, since a huge number of patients worldwide receive dentures every year, intra-oral scanners play a valuable role in their production. Traditionally, an impression of the teeth is made by setting a thick liquid material, usually alginate or polyvinyl siloxane, in the patient’s mouth, and this impression is sent to a laboratory where the dentures are made. With an intra-oral scanner, a quick digital impression of the teeth can be formed with a single tool and the digital scan sent to the lab instead. The process is much faster and easier for the patient, and the resulting denture is more accurate, improving patient satisfaction. Furthermore, with the help of 3D printing, some dental implants could be made in the practice itself using the same digital scanning technique: creating an implant on site makes the whole process faster and saves money, reducing patient expenditure, while the greater accuracy leads to better results.

The procedure of placing implants can itself be assisted by technology, such as the YOMI robotic system, which increases the accuracy of procedures whilst ensuring safety.

The technologies utilized in dentistry allow for an excellent improvement in patient care: procedures become more accurate and patients are more satisfied with their results, mirroring the dentist’s aim of providing patient-centred care.

Arya Bhatt, Youth Medical Journal 2022


The Medical Futurist, 2020. 9 Technologies That Will Shape The Future Of Dentistry. [Online].

Evanson, A., n.d. How 3D Printing Is Revolutionizing Dentistry. Evanson DDS. [Online].

n.d. Professional 3D Printing Materials. [Online].

Biomedical Research Health and Disease

The Human Respiratory System: A Marvelous Bodily Circuit or a Fragile Interconnected Network?


Breathing is an automatic action controlled by the medulla, whether we are asleep, unconscious, or awake. The fact that the body performs this simple action of inhaling the air that ubiquitously surrounds us autonomically suggests its significant importance to the body’s function. This is only the start of humans’ detailed, intricate, and adapted respiratory system, an undoubtedly marvellous feat of human evolution. In this article, we will explore this system in depth and question its failsafes, since without oxygen our bodies become ineffective, weak, and fruitless, a state that often leads to death.

Why Do We Need Oxygen?

As multicellular organisms, humans contain an abundance of cells. These cells must carry out their specialized functions within tissues, most of which are active processes (requiring energy). This introduces cellular respiration, an enzyme-controlled reaction that releases energy in the form of an activated nucleotide known as ATP (adenosine triphosphate). ATP enables cells to carry out DNA replication, active transport, synthetic pathways, and muscle contraction via actin-myosin interactions in muscle fibres. Oxygen is necessary for cells to respire aerobically and carry out these basic essential functions; therefore the process of breathing to obtain oxygen is critical for humans.

In a fatal context, the lack of oxygen reaching the brain when no air (and hence no oxygen) is inhaled leads to a deadly condition known as cerebral hypoxia. The cells of the brain require a constant supply of oxygen to respire; without it, glucose is not metabolized quickly enough within brain cells, and not enough ATP is released. The lack of energy causes brain cells to die and neurons to shut down. In cerebral hypoxia, brain cells can begin to die within about five minutes without oxygen, leading to death.

The Mechanism and Adaptations

It is vital to understand how our cells receive oxygen via the respiratory system, and as we explore this, we shall come across numerous organs hosting multiple functions that turn seemingly easy, effortless everyday actions into a precise mechanical process. The respiratory system starts with the simple act of inhalation, performed voluntarily or automatically. This first step is crucial to the function of the respiratory system; hence it is assigned as an autonomic action by the body, controlled by the medulla oblongata of the brain.

The air then travels through our mouths and down to the first main structural feature: the larynx (seen in Figure 1).

Figure 1. A detailed view of the larynx, including examples of cartilage, explored further in the paragraph below. Image sourced from WFSA – Airway Masterclass 3: The Larynx

The larynx (also referred to as the voice box) is located just behind the tongue. It is a complex organ containing nine pieces of cartilage: the epiglottis, thyroid, arytenoid, cuneiform, corniculate and cricoid cartilages. Cartilage is a connective tissue of major structural significance; its abundance in the larynx ensures the organ is held in place without collapsing while still allowing some flexibility to change shape. The larynx connects the throat and the trachea and has four main functions: protecting the top of the trachea, directing food and drink away from the trachea, enabling speech, and allowing the unrestricted flow of air towards the trachea. We can primarily focus on the last two functions, as these are most relevant to the respiratory system. When we inhale, the muscles hold the cartilage firmly in place so that air from the mouth or nasal passages (if inhaled through the nose) may flow through the larynx smoothly.

The larynx is also adapted in numerous ways to fulfil its function as the voice box. It has a thin lining of mucous membranes, with several secretory and squamous epithelial cells on its surface; this keeps the larynx moist so that vibrations of the vocal folds are smoother. The vocal folds are formed by narrow ligaments: the false vocal folds and the true vocal folds. The false vocal folds do not make sound at all but serve to protect the true vocal folds located beneath them. Sound is produced by the muscles of the larynx (cricothyroid and thyroarytenoid) moving the vocal folds into the stream of passing air; pitch depends on how tightly the folds are stretched across the airway. This links back to the abundance of cartilage in the larynx, which connects the muscles to the vocal folds and indirectly changes vocal fold length.

Figure 2. A view of the respiratory system, from larynx to lungs. Image sourced from Pharmacy180 – Trachea

Once the air passes through the larynx, it enters the trachea, the next prominent structural feature of the respiratory system. The trachea is a cylindrical tube that carries air down from the bottom of the larynx to the bronchial tubes. It is adapted to avoid restricting the mobility of the neck, to enable the unrestricted flow of air, and to be strong enough to prevent collapse during the low internal air pressure of inspiration.

The trachea achieves this by containing alternating bands of cartilage and muscle, which hold it open and allow air to flow easily. These also provide sufficient structural support to resist external forces, such as crushing, and to prevent collapse during the low internal pressures of inhalation (inspiration). Additionally, the inside and outside of the tracheal walls are lined with membranes of strong elastic fibres, providing flexibility so that the trachea does not limit movement of the neck. The neck moves on a pivot joint and is vital for seeing in different directions; hence these elastin fibres in the tracheal lining are crucial.

Figure 3. A cross-section of the trachea with detail of the surface lining. Labels point to the glands on the tracheal surface, explored further below. Image from Yale Histology

The trachea is also lined with ciliated epithelial cells and goblet cells. The goblet cells release mucus via the glands seen in Figure 3, trapping any dust or bacteria that have been inhaled before they can reach the lungs and cause infection. The cilia, hair-like projections formed from microtubules, are present on the surface of the epithelial cells lining the tracheal wall and waft the mucus up the trachea so it can be expelled by coughing. Mucus is removed by simple ciliary action and by coughing or sneezing (if particles or bacteria are caught in the nasal passages). In the first case, the cilia beat rhythmically, creating a current that moves the mucus up the trachea, through the larynx, and into the pharynx, where it mixes with saliva from the mouth; from here it may be swallowed and travel down the esophagus to be broken down by stomach acid. In a cough, the central airways narrow, and globs of phlegm or mucus are propelled up the trachea by a column of high-velocity air; the characteristic coughing sound is produced by air moving past the larynx at such high speed. The phlegm or mucus is transferred directly to the pharynx, where it may be swallowed into the esophagus or expectorated. Sneezing is a mechanism to clear the nose after foreign bodies such as pollen, dirt, or bacteria are detected there. When we sneeze, our chest muscles contract, slightly compressing the lungs, while our throat muscles relax; a column of air is then sent through the nose at approximately 100 mph to clear out the built-up phlegm.

The trachea therefore carries air down into the bronchi, transport tubes that carry the air onwards into the lungs. The lungs can hold about five liters of air, and since we have two lungs, the bronchi must transport air into both; as a result, the main bronchial tube splits into the left and right bronchus. This is not the end of the branching, as the right and left bronchus rapidly subdivide into numerous smaller tubes, the smallest of which are the bronchioles, about 0.5 mm in diameter. The bronchial tubes end in millions of tiny air sacs called alveoli. Cumulatively, the alveoli create a surface area of about 100 square meters, and the alveolar walls are surrounded by a network of capillaries whose walls are one cell thick. The high surface area means that diffusion of oxygen from the air sacs into the blood occurs more quickly, and the alveolar and capillary walls, each one cell thick, reduce the diffusion pathway significantly, allowing the gaseous exchange of CO2 and O2 to happen effortlessly. Moreover, the fact that we continuously inhale and exhale maintains a crucial diffusion gradient between the O2 content of the blood in the capillaries and that of the air in the alveoli.
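These adaptations (large surface area, short diffusion pathway, maintained concentration gradient) map directly onto Fick's law of diffusion: rate = D * A * (C1 - C2) / d. The Python sketch below uses placeholder values purely to show the proportionality; none of the numbers are physiological measurements.

```python
def fick_rate(D, area, delta_c, thickness):
    """Fick's law of diffusion: rate = D * A * (C1 - C2) / d."""
    return D * area * delta_c / thickness

# Placeholder values: only the ratios matter here.
base = fick_rate(D=1.0, area=100.0, delta_c=1.0, thickness=1.0)
doubled_area = fick_rate(D=1.0, area=200.0, delta_c=1.0, thickness=1.0)
halved_wall = fick_rate(D=1.0, area=100.0, delta_c=1.0, thickness=0.5)
print(doubled_area / base, halved_wall / base)  # 2.0 2.0
```

Doubling the alveolar surface area or halving the wall thickness each doubles the exchange rate, which is exactly why the alveoli are so numerous and their walls so thin.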

Once oxygen diffuses into the blood by gaseous exchange at the alveoli, it must be transported to the cells for the vital process of cellular respiration to occur. A specific protein called haemoglobin facilitates the transport of oxygen within the blood. This quaternary globular protein contains four haem prosthetic groups, each with an Fe2+ ion, which bind oxygen to form an oxyhaemoglobin complex. The reaction is reversible: when the oxyhaemoglobin reaches body cells requiring oxygen, it must release that oxygen.
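This reversible binding can be approximated with the Hill equation, which gives fractional haemoglobin saturation as a function of oxygen partial pressure. The sketch below uses commonly quoted textbook constants (p50 of about 26 mmHg and a Hill coefficient of about 2.8) as illustrative assumptions rather than figures from this article's sources.

```python
def hb_saturation(po2_mmhg, p50=26.0, n=2.8):
    """Hill-equation approximation of the oxyhaemoglobin dissociation curve."""
    return po2_mmhg ** n / (p50 ** n + po2_mmhg ** n)

# Near-full loading at alveolar oxygen pressures; roughly half the oxygen
# is given up by the time blood reaches oxygen-hungry tissue near p50.
print(round(hb_saturation(100.0), 2))  # high saturation in the lungs
print(round(hb_saturation(26.0), 2))   # 0.5, by definition of p50
```

The steep, cooperative shape of this curve is what lets haemoglobin load almost fully in the lungs yet unload readily in respiring tissue.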

As a result, the cells now obtain oxygen and, assuming they also receive sufficient glucose, can carry out respiration to metabolize the glucose, release ATP, and thus have enough energy to conduct their basic, essential, and active functions.


Having explored the respiratory system, we can appreciate that it is highly detailed and relies on multiple structures carrying out their functions for oxygen to be transported to the cells. This naturally raises a few questions: has the body prepared for the case where one of these structures fails to carry out its function? What if the trachea unexpectedly closes due to the wearing of cartilage? What if a lack of cilia prevents unrestricted airflow through the trachea?

These failures would significantly impact the delivery of oxygen to cells and disrupt the cells’ conduct of necessary actions. There ought, therefore, to be failsafes in place to account for such unexpected malfunctions, because this meticulous, interconnected, and adapted system is essential to sustaining life.

Focusing primarily on the lungs, the site where CO2 is removed from the bloodstream and O2 is taken up, we can see an example of a protective measure. The pleura, the membrane around the lungs, is composed of two layers of very soft tissue: the visceral pleura covers the lungs, while the parietal pleura lines the diaphragm and ribs. By attaching to the inside of the rib cage, the pleura helps prevent collapse and damage from external pressure, ensuring that the lungs’ shape, structure, and function are maintained. However, this is only a preventative measure rather than a direct failsafe. What if a broken rib punctured the lungs? Are there any measures to ensure the rest of the respiratory system can continue its operations? The answer is, frankly, no.


In the prolonged absence of oxygen, the human body cannot function; it shuts down and dies. The respiratory system ensures that we continuously provide our numerous cells with sufficient oxygen, enabling them to carry out essential, active bodily processes as part of tissues and organs. It is a marvellous, meticulous and adapted bodily circuit, but it is fruitless if one part of the circuit malfunctions. The lack of failsafes makes our beautiful respiratory system a fragile network of interconnected structures on thin ice.

Aryan Bhadra, Youth Medical Journal 2022


1. ScienceDirect Topics (n.d.). Tracheal Tube – an overview. [online] Available at: [Accessed 21 Nov. 2021].

2. Dezube, Rebecca (2019). Overview of the Respiratory System. [online] MSD Manual Consumer Version. Available at:

3. Cleveland Clinic (2020). Respiratory System: Functions, Facts, Organs & Anatomy. [online] Cleveland Clinic. Available at:

4. Healthline. (2017). Why Do We Sneeze? Everything You Need to Know. [online] Available at:

5. Dickey, B.F. (2018). What it takes for a cough to expel mucus from the airway. Proceedings of the National Academy of Sciences, 115(49), pp.12340–12342.

Health and Disease Neuroscience

Dementia and Music Therapy: An Overview of the Most Underutilized Tool in Dementia Care and a Personal Encounter

What is dementia?

Dementia is characterized by a deterioration in the cognitive, behavioural, social, and emotional functions of a person beyond what might be expected from the usual consequences of biological ageing. Dementia is currently the seventh leading cause of death among all diseases and one of the major causes of disability and dependency among older people globally. It has physical, psychological, social, and economic impacts, not only for people living with dementia, but also for their carers, families, and society at large.

Approximately 55 million people worldwide live with dementia, and this number is projected to triple by 2050. There are over 200 subtypes of dementia, but the five most common are Alzheimer’s disease, vascular dementia, Lewy body dementia, frontotemporal dementia, and mixed dementia. Alzheimer’s accounts for approximately 75% of all dementias.

Why is music therapy perhaps better than medications and other forms of treatment?

Medications have a limited effect on many of the disease’s features. Many authors emphasise the positive effects of music on the brain; several studies have shown that people with dementia enjoy music, and that the ability to respond to it is preserved even when verbal communication is no longer possible.

Research suggests that listening to or singing songs can provide emotional and behavioural benefits for people with Alzheimer’s and other types of dementia. Musical memories are often preserved in Alzheimer’s disease because key brain areas linked to musical memory are relatively unaffected by the disease. Music and emotions are linked in a powerful way: babies sway to music even before words and language have developed, and this responsiveness persists even towards the end of life, when verbal ability may be lost.

Music can tap into powerful memories and emotions, as demonstrated in the BBC One programme “Our Dementia Choir”, when 91-year-old Eileen, who has advanced dementia, showed a startling change in verbalization and behaviour, her real personality shining through with the music she was listening and singing to. Musical memory is a form of implicit memory, and there is evidence from scientific studies that listening to music lights up regions of the brain that other stimuli cannot. Multiple studies, including systematic reviews and meta-analyses of randomised trials, have shown that music can reduce stress, anxiety, depression, and agitation in dementia.

Music and memory are inextricably linked, and the recollection of music varies according to age. In order to create personalized music playlists tailored for people living with dementia, a study by Rao, C.B., et al. (Journal of Multidisciplinary Healthcare, 2021; 14: 2195-2204) aimed to determine the age at which healthy individuals could best recall music that was popular at the time. Surveys asked participants to identify how many songs they recalled from a random selection of 10 of the 100 most popular songs from each year, presented in random order of years, from 1945 to 2015. Of the 311 individuals born between 1929 and 2002 who responded to the survey, 157 met the inclusion criteria. Results showed that the median peak of recollection was between the ages of 13 and 19 across all age cohorts, with participants recalling a maximum median of 6-8 songs in every cohort. There was no evidence of a difference in the peak age of recollection between those who recognized seven or more songs in at least one year and those who recognized fewer than seven songs in all years. The conclusion from this study was that the peak of recollection of popular music occurs in the teenage years, regardless of era of birth. Music from this “reminiscence bump” provides a rich source of retained music that should be tapped when creating playlists of meaningful music for people living with dementia.

My personal encounter 

Recently, I volunteered at a dementia care home for a month, and although I tried to interact with the patients as much as possible, I still felt there was a barrier when talking to the residents. Many residents could not remember me after a week, so I had to reintroduce myself weekly, explaining that I was a volunteer and wanted to get to know them. On reflection, the residents varied in the severity of their dementia and how far the illness had progressed. Some were unable to talk and feed themselves, whereas others could hold an engaging conversation with me. A couple of weeks in, after leading a bingo session, I played a very traditional, quite simple but memorable song on the piano. I played it twice in a row, and the second time many more residents in the living room joined in with the lyrics. To me, it was astonishing: although I had already heard of music therapy helping these patients, it truly shocked me to see it in real-life practice. Some who had not spoken to me throughout the whole month began to sing lyrics to some of these songs, and even if they were not the exact lyrics, it was clear that the mental state of many of the residents had improved.

Ideally it is best to personalise the music for the individual, but this may not be possible in all care settings. A general rule of thumb is to start with gentle, quiet music and involve the person wherever possible. Simply having loud music in the background could be overstimulating, sometimes distressing, and could have a negative impact. Music can awaken negative emotions and memories as well as positive ones, so watch how the person reacts: if there are any signs of distress, turn the music off. However, expressing sadness may be a normal reaction to a memory or an association with the music, and this should be kept in mind when selecting playlists. The evidence for receptive music therapy (just listening to music) versus interactive music therapy (singing along, dancing, etc.) is debatable, as some studies show interactive music therapy has better outcomes while others show the reverse; the most recent study, from the Journal of the American Medical Directors Association, is “Receptive Music Therapy Is More Effective than Interactive Music Therapy to Relieve Behavioral and Psychological Symptoms of Dementia: A Systematic Review and Meta-Analysis”. Irrespective of whether it is receptive or interactive, regular musical leisure activities can have long-term cognitive, emotional, and social benefits in mild to moderate dementia and could therefore be utilized in dementia care and rehabilitation.


NICE (the National Institute for Health and Care Excellence) recommends that people with dementia should be offered activities that support wellbeing. Music therapy is an established psychological clinical intervention delivered by practitioners registered with the Health and Care Professions Council (HCPC). Compared with usual care in most studies, both singing and music listening improved mood, orientation, and remote episodic memory and, to a lesser extent, attention, executive function, and general cognition. Singing also enhanced short-term and working memory and caregiver well-being, whereas music listening had a positive effect on QOL (quality of life).

Music engages an extensive network of auditory, cognitive, motor, and emotional processing regions in the brain. Coupled with the fact that the emotional and cognitive impact of music is often well preserved in ageing and dementia, music could be a powerful tool in the care and rehabilitation of many ageing-related neurological diseases and dementia. It is cost-effective, requires few resources and little training to implement, and should be more widely used in care settings and in the wider community.

Neha Biju, Youth Medical Journal 2022


Music Therapy in the Treatment of Dementia: A Systematic Review and Meta-Analysis. Frontiers in Medicine (Lausanne). 2020; 7: 160. Published online 2020 May 19. doi: 10.3389/fmed.2020.00160

Music-based therapeutic interventions for people with dementia. Cochrane Database Syst Rev. 2018 Jul 23;7(7):CD003477. doi: 10.1002/14651858.CD003477.pub4

Does music therapy enhance behavioral and cognitive function in elderly dementia patients? A systematic review and meta-analysis. Ageing Res Rev. 2017 May;35:1-11. doi: 10.1016/j.arr.2016.12.003. Epub 2016 Dec 23

Sousa, L., et al. (2021). “Music-based interventions for people living with dementia, targeting behavioral and psychological symptoms: A scoping review.” International Journal of Geriatric Psychiatry 14: 14.

Comparative Efficacy of Active Group Music Intervention versus Group Music Listening in Alzheimer’s Disease. Int J Environ Res Public Health. 2021 Aug;18(15):8067. Published online 2021 Jul 30. doi: 10.3390/ijerph18158067

Preferred Music Listening Intervention in Nursing Home Residents with Cognitive Impairment: A Randomized Intervention Study— J Alzheimers Dis. 2019;70(2):433-442. doi: 10.3233/JAD-190361

Cognitive, emotional, and social benefits of regular musical activities in early dementia: randomized controlled study. Gerontologist. 2014 Aug;54(4):634-50. doi: 10.1093/geront/gnt100. Epub 2013 Sep 5.

Rao, C. B., et al. (2021). “A focus on the reminiscence bump to personalize music playlists for dementia.” Journal of Multidisciplinary Healthcare 14: 2195-2204.

Camerlynck, M. F., et al. (2021). “Can music reminiscence approaches be used in moderate-severe dementia? A pilot of music mirrors.” Dementia 20(3): 1162-1171.

Baker, F. A., et al. (2021).

Biomedical Research Health and Disease

Vacuum Induced Uterine Tamponades (VIUTs) for Postpartum Hemorrhages

A stroke of brilliance: a faster and simpler non-surgical intervention to tamponade uterine bleeding in postpartum complications.


When a new life comes into this world, delivery is followed by uterine contraction, which expels the placenta and compresses the bleeding vessels at the site where the placenta was attached. If uterine contraction is restricted or obstructed for any reason, or there is residual attachment of placenta, these blood vessels bleed freely. The result is excessive bleeding from the placental site: a postpartum hemorrhage. Postpartum hemorrhages are one of the most significant causes of maternal mortality worldwide, creating a demand for effective solutions as incidence rates rise. In most cases, postpartum hemorrhage is caused by atony, the failure of the uterus to contract after delivery. Atony is regularly prevented through the use of oxytocic drugs that induce uterine contraction. When prevention fails, the uterus must be mechanically stimulated to contract. The most common solution used in developing countries is the balloon system, which exerts outward hydrostatic pressure on the uterus.

Intrauterine balloon systems combined with an intrauterine vacuum are a promising new method, aiming to reduce uterine size and control a postpartum hemorrhage with no additional intervention required thereafter. With advancing technology, the vacuum-induced uterine tamponade (VIUT) has come into play. The device is inserted transvaginally into the uterine cavity, and an occlusion balloon built into the device shaft is inflated at the level of the external cervical os (the junction between the cervix and the vagina) to create a uterine seal.

Internal mechanics of VIUTS

The process consists of applying a low-level intrauterine vacuum to facilitate the physiologic forces of uterine contraction, constricting myometrial blood vessels and achieving hemostasis.

The tamponade itself is made of medical-grade silicone; its distal end (placed in the uterus) is an elliptical loop containing 20 vacuum pores on its inner surface, while the proximal end carries a vacuum connector.

Before the device is placed, a manual sweep of the uterine cavity is performed. A Bakri balloon is then inserted into the uterus, either when a postpartum hemorrhage has occurred (and treatment with oxytocin followed by prostaglandins has been given), or when uterine bleeding has continued despite removal of the placenta or retained placental tissue.

The procedure¹ further includes inflation of the balloon with 50-100 ml of physiologic saline solution. The catheter is then connected to a non-sterile tube, which is in turn connected to a vacuum device, at which point an intrauterine vacuum of 60-70 kPa is applied. The tube position and the condition of the uterine cavity must be monitored by ultrasonography to detect any unwanted accumulation of blood. If no bleeding is visible and the patient is stable, the balloon is deflated completely and removed.

In very rare cases, this process requires additional antibiotics or analgesia administration.

The time taken for uterine collapse and hemorrhagic control is relatively short. In the prospective study² carried out, the initial collapse of the uterus took a median of one to two minutes from the time of vacuum connection, and bleeding was controlled within 5 minutes in 82 percent of women. The median overall duration of vacuum treatment was 144 minutes (range 85-295 minutes), which includes the required 60 minutes of vacuum treatment and 30 minutes of observation without any vacuum connected but with the device still in place.

Fig 2: Vacuum induced uterine tamponade in vitro observed in ultrasonography


The above study, as well as an FDA review³ that followed, deemed the device to be safe. Some possible adverse effects were found, such as endometritis, laceration, and vaginal infection. However, in further trials these resolved without any serious clinical consequences.

Although the method’s success rates inspire confidence, VIUTs must undergo periodic evaluation. Further investigation should determine whether the success rates observed so far were due solely to the vacuum-induced uterine tamponade method or to proper overall management of postpartum hemorrhage patients. The majority of clinicians surveyed² reported that the vacuum-induced hemorrhage-control device was easy to use and recommended it for future patients.

Pratiksha Baliga, Youth Medical Journal 2022


1. Haslinger, C., Weber, K. and Zimmermann, R., 2022.  Vacuum-Induced Tamponade for Treatment of Postpartum Hemorrhage. [online] National Center for Biotechnology Information. Available at: <>

2. D’Alton ME, Rood KM, Smid MC, Simhan HN, Skupski DW, Subramaniam A, Gibson KS, Rosen T, Clark SM, Dudley D, Iqbal SN, Paglia MJ, Duzyj CM, Chien EK, Gibbins KJ, Wine KD, Bentum NAA, Kominiarek MA, Tuuli MG, Goffman D, 2020. Intrauterine Vacuum-Induced Hemorrhage-Control Device for Rapid Treatment of Postpartum Hemorrhage. [online] National Library of Medicine: National Center for Biotechnology Information. Available at:


3. D’Alton, M., Rood, K., Simhan, H. and Goffman, D., 2022. Profile of the Jada® System: the vacuum-induced hemorrhage control device for treating abnormal postpartum uterine bleeding and postpartum hemorrhage. [online] Taylor & Francis. Available at: <>

Health and Disease

HIV/AIDS is More Than a Disease: Epidemiology, Stigma, and Future Targets


HIV, or Human Immunodeficiency Virus, is a highly stigmatized disease that, if not treated for a significant period, can develop into AIDS (Acquired Immunodeficiency Syndrome). HIV is currently an incredibly prevalent disease and is classed as a ‘global epidemic’ by the World Health Organisation due to the huge number of people affected: approximately 38 million people1 globally are living with HIV now. In 2018 alone, around 770,000 individuals died from AIDS2. Many people with HIV, unfortunately, have minimal access to any form of prevention or treatment – and even where treatment is available, it may not be economically accessible, or the severe prejudice against HIV and AIDS may prevent people with these conditions from seeking medical help. Additionally, there is still no cure for either HIV or AIDS. These issues are compounded by the fact that HIV disproportionately affects developing and emerging countries – eastern and southern Africa is the most severely affected region, where an estimated 54% of all people with HIV live. South Africa specifically has the highest prevalence of HIV cases, with 7.5 million people living with HIV. After eastern and southern Africa, the western and central regions of Africa are the most severely affected, with approximately 4.9 million people with HIV1.

The Origin of HIV

While the Human Immunodeficiency Virus was only first identified and diagnosed in people in the 1980s, it is suggested that it originated in the 1920s in the Democratic Republic of Congo3. HIV developed from SIV (Simian Immunodeficiency Virus), a virus that infects monkeys and apes and, like HIV, attacks the immune system in these primates. Both HIV and SIV are primate lentiviruses, and they share neuropathological features, including white matter lesions, subtle white matter astrocytosis, and viral macrophages invading the brain4. The strain of SIV that can infect humans is known as SIVcpz, and it is not fully known how this viral strain was transferred from chimpanzees to humans. One theory is that humans hunted and ate chimpanzees infected with SIV, or that infected chimpanzee blood entered open wounds during hunting or otherwise5. The SIVcpz then mutated inside the human host cells to produce the new strain: HIV-1. There are multiple different strains of HIV-1, and the four main groups of such strains are M, N, O and P. HIV-1 Group M is the most studied and most widespread strain of HIV to date5.


The most commonly known method of transmitting HIV is sexual transmission, through semen or vaginal secretions from the infected host to an unaffected individual – but this virus can also be transmitted through many other bodily fluids, including blood (for instance during blood transfusions) and breast milk. An infected mother can also transmit HIV to her offspring across the placenta throughout pregnancy, and during delivery. This is called perinatal transmission and is the main way in which children are infected with HIV, but it is decreasing in prevalence due to medical developments. If a pregnant mother takes HIV medicine daily throughout her pregnancy, and the child is given HIV medicine for 4 to 6 weeks following delivery, the risk of the child contracting HIV is below 1%6.

Furthermore, HIV can be transmitted via contaminated needles, predominantly through intravenous drug use but can also be through tattoo needles, for example. The latter, however, is very rare – and indeed there are no known cases of HIV being transmitted in this way6. It remains a possible method of spreading HIV though, as the unsterilized needles could be contaminated with the blood of an infected host with HIV.

Additionally, HIV is not spread through shaking hands, hugging, sweat, or saliva, nor through the air – yet these were, and still are, misperceived as methods of transmitting HIV7. This exacerbates the stigma surrounding those living with HIV and AIDS, as it can lead them to feel isolated if people purposefully avoid any close or physical contact with them.


After an individual contracts HIV, they will likely experience a flu-like illness anytime between 2 and 6 weeks following infection, typically lasting only 1 to 2 weeks8. Approximately 80% of people who contract HIV experience this and are likely to have symptoms such as fever, sore throat, rash, muscle pain, joint pain, tiredness, and swollen glands. As these symptoms are not limited to HIV, people may not realize they are infected – and afterwards, HIV often will not cause symptoms for many years. This is a key reason why HIV is so underdiagnosed: the virus actively damages the host’s immune system while they still feel and appear healthy, a process that can last up to ten years. When the immune system has been significantly damaged, other symptoms can follow, such as weight loss, night sweats, chronic diarrhea, and recurrent infections. Improved diagnosis and earlier treatment of HIV can prevent the disease from causing greater damage and developing into AIDS.

As HIV is a virus, when it is transmitted to an individual it must bind to host cells, since viruses are unable to replicate outside of living cells. HIV specifically binds to T-helper cells, a type of white blood cell also referred to as CD4 cells, and integrates into the DNA inside the cell9. The HIV life cycle has seven stages10, the first being the binding of HIV to receptors on the cell surface membrane of a CD4 cell. The second stage is fusion, where the HIV envelope fuses with the cell membrane of the CD4 cell as the HIV particle permeates the cell. Reverse transcription is the next stage and involves an HIV enzyme called reverse transcriptase, which converts HIV RNA into HIV DNA, allowing the HIV DNA to bind with the genetic material of the CD4 cell. Integration is the fourth stage, in which integrase (another HIV enzyme) is released within the nucleus of the CD4 cell so that the HIV DNA can fuse with the cell’s DNA. Replication is the following stage, and consists of HIV utilizing the CD4 cell’s machinery to synthesize HIV proteins. During the sixth stage (assembly), new viral proteins and HIV RNA move away from the nucleus and towards the surface of the cell; it is at this point that immature HIV is assembled. In the final stage (budding), the assembled viral particles leave the CD4 cell and release protease (an additional HIV enzyme), which breaks up the immature viral particles to form mature, infectious viral particles.

A person is considered to have AIDS when the CD4 cell count drops below 200 cells per cubic millimetre of blood. In a healthy person, a CD4 cell count is between 500 and 1600 cells per cubic millimetre of blood. A person can also be considered to have AIDS when they develop one or more opportunistic infections – due to their severely weakened immune systems11. Common opportunistic infections include a salmonella infection, where bacteria affect the intestines, and toxoplasmosis, which is a parasitic infection of the brain12.

Stigma and Global Crisis

HIV and AIDS are shrouded in stigma, especially because common methods of contracting HIV are sexual intercourse and intravenous drug use. It was during the AIDS epidemic of the 1980s in the United States that cases were first reported on a large scale. This crisis was the first large-scale instance of HIV and AIDS being recognized, and there was an arduous struggle to quickly determine the causes, risk factors, and modes of transmission of these conditions. One of the groups with the highest prevalence of HIV and AIDS was gay men. In 1983, the groups most ‘at risk’ of contracting HIV were colloquially referred to as the ‘4H Club’13, consisting of gay males, hemophiliacs, heroin users (as well as other intravenous drug users), and those of Haitian origin.

Heroin and other intravenous drug users were at risk of being infected with HIV as blood from an infected host could be transmitted to them via a contaminated needle. People with hemophilia also could easily become infected when they received the clotting factors they lacked from donated blood – which was not screened for HIV and thus could be infectious. In 1985, blood screening for donated blood was introduced and greatly decreased the transmission of HIV via this treatment method for haemophilia14.

Since 1982, the prevalence of AIDS in Haiti has been higher than in any other country in the Caribbean13. Being Haitian or of Haitian descent does not increase the risk of becoming infected with HIV – there is no genetic risk factor – and the modes of transmission for this virus are the same for Haitians as for all other people. It is suggested that HIV and AIDS were, and are, widespread in Haiti due to migrants arriving there from the Democratic Republic of Congo, where HIV is thought to have originated.

Gay men were considered the group most at risk of HIV and AIDS during the 1980s AIDS epidemic in the United States, and this association has not disappeared. In June 1982, there were several cases of severe immune deficiency among the gay male population of Southern California, which led to the disease initially being called ‘gay-related immune deficiency’, or GRID15. There were approximately 1.2 million people in America living with HIV in 2018, and 740,400 of them were gay or bisexual men. A key reason is that anal sex carries the highest risk of transmitting HIV among forms of sexual intercourse, because the thin rectal lining makes it easier for HIV to enter the body6.


While there remains no cure for either HIV or AIDS, the former can be treated using antiretroviral therapy (ART)16. The key purpose of this treatment is reducing the individual’s viral load to an undetectable level – and if this can be maintained, the individual will have an almost zero risk of transmitting HIV to their partners (who do not have HIV) via sexual intercourse. ART consists of a combination of HIV medications that must be taken daily, and these medications work by preventing HIV particles from multiplying. This will reduce the number of HIV particles in the body, thus reducing the viral load. ART should preferably begin as soon as possible after infection, but this can be difficult if the affected person is unaware they have HIV. Combivir is a combination of two antiretroviral drugs taken as part of ART by those with HIV, and this treatment was approved by the FDA in September 199715.

Targets and Future Development

The significant social stigma surrounding HIV and AIDS undoubtedly persists, yet there is an increasingly global movement to tackle it and advocate for improved treatment – possibly even a cure in the future. The Joint United Nations Programme on HIV/AIDS, known as UNAIDS, was established in 1996 to coordinate responses to HIV and AIDS across the UN15. There are currently global targets in place to work towards the eventual goal of ending HIV and AIDS. These form part of the Sustainable Development Goals (SDGs), where target 3.3 is to ‘end AIDS as a public health threat by 2030’17. Target 16 is ‘Peace, justice and strong institutions, including reduced violence against key populations and people living with HIV’18. The Millennium Development Goals were outlined by the United Nations in 2000 and included targets addressing HIV and AIDS – for instance, goal 6 was to ‘combat HIV/AIDS, malaria and other diseases’19. Perhaps the most challenging targets outlined by the UN concerning HIV and AIDS were those of the ‘Getting to Zero’ strategy between 2011 and 2015, whose objective was to achieve zero new HIV infections, zero AIDS-related deaths, and zero discrimination against people living with HIV or AIDS20.


Arguably, the stigma surrounding HIV and AIDS, together with limited access to healthcare in the less economically developed regions where HIV and AIDS tend to be most prevalent, is one of the main limiting factors in the global fight against these conditions. Prominent societal figures such as Freddie Mercury and Diana, Princess of Wales have been instrumental in addressing this prejudice. Freddie Mercury was the lead singer of the British band ‘Queen’, and he died from AIDS. Princess Diana opened an AIDS ward at Middlesex Hospital, London, in 1987, where she was photographed shaking hands with a person who had AIDS21. This helped break down the idea that HIV or AIDS could be spread through skin contact and worked towards reducing the social isolation that many people living with HIV or AIDS are often forced to endure. The association of HIV/AIDS with intravenous drug use and with sexual intercourse between men has worsened the public view of these conditions due to homophobia and general discrimination. These global issues need to be confronted alongside research into further treatment for HIV and AIDS, and potentially a cure.

Samara Macrae, Youth Medical Journal 2022


1.   KFF: “the Global HIV/AIDS Epidemic” –

2.   Wikipedia: “Epidemiology of HIV/AIDS” –

3.   Faria, N.R. et al (2014) ‘The early spread and epidemic ignition of HIV-1 in human populations’ Science 346(6205):56-61

4.   National Library of Medicine: “Comparison of simian immunodeficiency virus and human immunodeficiency virus encephalitides in the immature host” –

5.   Avert: “Origin of HIV & AIDS” –

6.   Centers for Disease Control and Prevention: “Ways HIV Can Be Transmitted” –

7.   Centers for Disease Control and Prevention: “Ways HIV is Not Transmitted” –

8.   NHS: “Symptoms: HIV and AIDS” –

9.   Avert: “How HIV Infects the Body and the Lifecycle of HIV” –

10.   National Institute of Health: National Institute of Allergy and Infectious Diseases: “HIV Replication Cycle” –

11.   HIV gov: “What are HIV and AIDS” –

12.   HIV gov: “Opportunistic infections” –

13.   Microbiology Book: “Microbiology and Immunology On-Line” –

14.   The New York Times: “Hemophilia and AIDS: Silent Suffering” –

15.   Avert: “History of HIV and AIDS Overview” –

16.   HIVinfo: “HIV Treatment: The Basics” –

17.   United Nations: “Sustainable Development Goals” –

18.   UNAIDS: “HIV Preventions 2020 Road Map” –

19.   United Nations: “Millennium Development Goals” –

20.   UNAIDS: “2011-2015 Strategy/ Getting to Zero” –

21.   Tatler: “How Diana, Princess of Wales was instrumental in trying to stop the stigma against HIV/AIDS” –

Health and Disease

Epileptic Seizures: Causes, Symptoms, Treatment and First Aid

What is an epileptic seizure?

Epileptic seizures can vary in their presentation and in the part of the brain where they occur. In some people, seizures are very obvious and involve loss of consciousness with jerking and stiffness; these are classified as tonic-clonic seizures. However, other types of seizures, such as absence seizures, may be much more difficult to notice, presenting as blankness or unresponsiveness without the person visibly appearing to be unconscious. Seizures are typically categorized by where in the brain they begin, whether the patient is aware of their surroundings, and whether other signs such as muscle contractions and jerking occur.

What categorizes a seizure as epileptic?

Epileptic seizures are seizures that start in the brain. Seizures can also be a symptom of other underlying conditions such as hypoglycemia, a young child overheating, or Lyme disease, and, as we have recently witnessed in UK media coverage of drink spiking, seizures can indicate the presence of a toxin or poison in someone’s system. Although these can all be classed as seizures, they are not epileptic seizures, because the brain is not the place of onset.

Focal onset seizures: the causes and symptoms

Focal onset seizures begin in one part of the brain and affect only that side. Within this broader category, there are seizures during which the person may remain aware (also commonly referred to as simple partial or focal aware seizures) and seizures that affect awareness (focal impaired awareness seizures or complex partial seizures). These seizures can also be broken down by whether they present with physical movement, which helps a medical team determine what treatment is needed. The physical movements seen in focal onset seizures include spasms, cycles of muscle contraction and relaxation, and excessive movement or what could be perceived as fidgeting. In non-motor onset focal seizures, the patient may not move at all and may seem to be staring, or may experience cognitive and emotional symptoms such as being unable to speak properly or having emotional outbursts.

Generalized seizures: the causes and symptoms

Generalized onset seizures originate in both sides of the brain simultaneously and begin very suddenly. Similar to focal onset seizures, generalized onset seizures can be separated into motor symptoms and non-motor symptoms. A commonly missed non-motor type of generalized seizure is an absence seizure, where the patient may appear to still be conscious and may be able to continue walking but is not able to respond and is unaware of their surroundings. 

Can doctors always classify seizures?

Doctors often struggle to determine the specific onset of a patient’s seizures, especially if they happen very irregularly or the patient has only ever had one seizure. While awaiting further investigation, doctors may describe someone as having unknown onset seizures when they are unsure where in the brain the seizures originate, although these can still be categorized by motor or non-motor presentation. Unclassified is another term doctors can use for a patient whose seizures have a unique presentation or whose witnesses are unable to describe the symptoms.
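
The classification axes described above — where the seizure begins, whether awareness is preserved, and whether motor symptoms occur — can be summarised as a small labelling routine. The sketch below is purely illustrative (the function name and inputs are hypothetical, not a clinical tool):

```python
def describe_seizure(onset, aware=None, motor=None):
    """Compose a descriptive label from the classification axes in the text.

    onset: "focal", "generalized", or "unknown"; anything else is unclassified.
    aware: True/False for focal seizures, or None if not yet determined.
    motor: True/False for motor vs non-motor presentation, or None if unknown.
    """
    if onset not in ("focal", "generalized", "unknown"):
        # Unique presentations that fit no category are termed unclassified.
        return "unclassified"
    parts = [onset, "onset"]
    # Awareness only distinguishes subtypes of focal onset seizures.
    if onset == "focal" and aware is not None:
        parts.append("aware" if aware else "impaired awareness")
    if motor is not None:
        parts.append("motor" if motor else "non-motor")
    return " ".join(parts)
```

For example, an absence seizure as described above would come out as a generalized onset non-motor seizure, while a simple partial seizure with jerking would be focal onset aware motor.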

How can doctors treat epileptic seizures?

Those who regularly suffer from epileptic seizures may go through a period of trialing different anti-epileptic drugs after being diagnosed through an electroencephalogram, a scan that detects the electrical signals in the brain and so can record a seizure. Anti-epileptic drugs (AEDs) are effective in roughly 7 out of 10 people with epilepsy, and they control seizures by changing the chemical balance of the brain. If doctors are unable to control the seizures with AEDs, and the precise type of seizure has been diagnosed, surgery to remove a small section of the brain can help to prevent seizures. Although more invasive options such as surgery and the implantation of a small electrical device can be used, lifestyle measures such as a ketogenic diet have also been noted to help those with epilepsy.

What is the first aid for an epileptic seizure?

  1. Ensure the person is not in danger from their surroundings (i.e. traffic or electricity).
  2. Support their head and prevent injury to it through padding.
  3. Loosen anything restricting around their neck to keep their airway open (i.e. a collar). 
  4. Once the seizure has stopped, place them in the recovery position, in which the person lies on their left side with their arms out horizontally and their right leg at a right angle to their body.
  5. Record the start and finish time of the seizure while staying with them to reassure them as they come round.
  6. If the person has never had a seizure before, an injury is obtained during a seizure, or they do not regain consciousness, call an ambulance.

Note: Call an ambulance if a single seizure lasts for over 5 minutes.

Sophie Farr, Youth Medical Journal 2022


Epilepsy Foundation, “Types of Seizures”, Last Accessed 29/10/21 from: 

Epilepsy Society, “Epileptic Seizures”, Last Accessed 29/10/21 from: 

NHS England “Epilepsy Treatment”, Last Accessed 29/10/21 from: 

NHS England, “What to do if someone has a seizure (fit)”, Last Accessed 29/10/21 from:

Health and Disease Neuroscience

Schizophrenia and Highly Educated Guesses: Exploring Common Practices in Treating this Psychotic Disorder


The danger of the unknown. The medical world’s inability to account for the unknown. These are the concepts that Michel Foucault’s “Madness and Civilization” emphasized; ideas lost in history. It is this very concept that allows us to formulate schizophrenia, a diverse disease, under one helm. Schizophrenia is a psychotic disorder classified by positive symptoms, such as hallucinations and delusions, and an array of negative symptoms, such as loss of will and loss of feelings, among many others (Shultz et al., 2007). Schizophrenia occurs most commonly among men in their early 20s and women in their late 20s (Patel et al., 2014). Data on 16,423 Americans from the U.S. National Library of Medicine National Institutes of Health indicate higher rates of diagnosis among Latino Americans (13%) and African Americans (15%) compared to Euro-Americans (9%) and Asians (9%; Schwartz & Blankenship, 2014). From the idea of dementia praecox to modern schizophrenia, we have yet to truly grasp the disease. Thus, one must wonder: how should schizophrenia be treated globally? This paper aims to review the history of schizophrenia and the development of past and current treatments, both in the United States and worldwide.

History of Schizophrenia

Bénédict Augustin Morel (1809–1873) used the term dementia praecox as an early label for schizophrenia (Lavretsky, 2008). Morel thought of schizophrenia as an early form of dementia, which makes some sense given intersecting traits such as worsened cognitive function (Lavretsky, 2008). Emil Kraepelin’s descriptions of catatonia and hebephrenia, along with his dementia paranoia, then created the foundation for further interest in what schizophrenia was; Kraepelin distinguished several different forms of what he called dementia praecox (Lavretsky, 2008). Bleuler then revolutionized the field by bringing the disease under one helm. He believed there were fundamental symptoms that all people with schizophrenia had, and accessory symptoms that varied from person to person (Lavretsky, 2008). Psychic schisis or split, ambivalence, cognitive features of “loose associations,” avolition, inattention, autism, and incongruent features signified primary deficits for Bleuler (Lavretsky, 2008). In comparison, delusions and hallucinations were treated as accessory features of schizophrenia (Lavretsky, 2008). As these pioneers developed their ideas into more concrete symptoms and clearer definitions, treatment followed.

History of Treatment for Schizophrenia

Early treatments included prolonged barbiturate-induced sleep therapy, insulin coma, and psychosurgery. Sleep therapy induced unnaturally long sleep, sometimes leading to comas and death (López-Muñoz et al., 2005). Insulin comas were induced by giving patients large amounts of insulin; the treatment often did not help and sometimes led to death (Wright-Mendoza, 2018). Psychosurgery, an idea created by António Egas Moniz, aimed to alter the brain directly (Toler, 2021). Its most popular form was the lobotomy, which targeted the brain’s frontal lobe, the region that controls personality and behavior; worrying results included brain damage and death (Toler, 2021). One survey reported that “schizophrenia is found in 84% of the 771 lobotomized patients. The postoperative mortality was 7.4% (57 deaths)” (Ögren & Sandlund, 2007), and another that “when complications were reported, seizures represented the most common sequelae (1%–23%), followed by chronic headache (15%)… The death rate could have reached 5%” (Terrier et al., 2019). Furthermore, seizures were more frequent in those who underwent multiple lobotomies: 25.6% of prefrontal lobotomy patients had convulsions, with convulsive seizures in 7% after a single operation and 47% after several operations (Freeman, 1953).

The first half of the 20th century saw the hospitalization (or jailing) of people with schizophrenia (Lavretsky, 2008). Because the disease was seen as untreatable, patients were essentially checked into hospitals for long periods of time, where they were abused and treated terribly (Lavretsky, 2008). Patients acting in ways deemed socially unacceptable were given what was known as a “chemical cosh” (Lavretsky, 2008). Cosh derives from British slang meaning to bludgeon. Patients were heavily sedated to calm them, but sedation served no benefit in reducing symptoms; its only effect was a temporary peace (Lavretsky, 2008). Another issue with treating people with schizophrenia in the asylums was that the treatment did not include any preparation for patients to re-enter the real world (Lavretsky, 2008). Patients’ symptoms would improve, but in a context isolated from daily life, so there was not much real improvement (Lavretsky, 2008). In the 1930s, the Third Reich of Nazi Germany sought to eliminate schizophrenia through euthanasia, first by lethal injection and later in gas chambers (Lavretsky, 2008).

The middle of the 20th century brought typical antipsychotics, discovered through attempts to create antihistamine drugs (Tandon, 2011). Typical antipsychotics are also known as neuroleptics because they cause neurolepsis, a syndrome with the intended effect of psychomotor slowing, emotional quieting, and affective indifference (Tandon, 2011). Paul Charpentier, who experimented with phenothiazine derivatives, hoped to find properties in the compounds that helped with allergies (Ramachandraiah et al., 2009). Then in 1949, Henri-Marie Laborit, a French army surgeon, used promethazine, a phenothiazine derivative, on patients and saw that they were much calmer and more cooperative (Ramachandraiah et al., 2009). Laborit later discovered a chlorinated derivative of phenothiazine called chlorpromazine and claimed that this substance would be a great therapy for patients with mental illnesses (Ramachandraiah et al., 2009). However, his colleagues met him with skepticism, and chlorpromazine was not initially introduced (Ramachandraiah et al., 2009). Jean Delay and Deniker’s study of 38 patients then proved chlorpromazine an effective treatment, after which typical antipsychotics were introduced to the market (Ramachandraiah et al., 2009). Fast forward, and atypical antipsychotics now dominate the medical world. Starting in the 1980s, they began a further diversification of treatment for people with schizophrenia (Abou-Setta et al., 2012). Second-generation antipsychotics include clozapine, one of the most effective treatments for schizophrenia (Lieberman, 1996; Nuera, 2020). Clozapine, while very effective, carries a dangerous risk of agranulocytosis, a condition in which the body does not make enough neutrophils, a type of white blood cell (Cleveland Clinic, 2020).
These days, atypical drugs such as risperidone, olanzapine, sertindole, quetiapine, and ziprasidone have also shown promising prospects (Nuera, 2020). The advances made from the late 1800s to the 21st century have been incredible, but one must understand the neurobiology behind schizophrenia when trying to decipher the most globally effective treatment.

Neurobiology of Schizophrenia

Schizophrenia involves chemical imbalances that affect the functioning of a person susceptible to the disease. Imbalances of dopamine, glutamate, GABA, acetylcholine, and serotonin are believed to be essential contributors to schizophrenia (Brisch et al., 2014). These are all neurotransmitters that essentially control our physical nature. Dopamine is a neurotransmitter that regulates movement and emotion and is essential for normal functioning; if one’s dopamine transmission is hypoactive or hyperactive, it can be detrimental to one’s health. In schizophrenia patients, dopamine transmission is hyperactive in the mesolimbic areas and hypoactive in the prefrontal cortex (Brisch et al., 2014). In addition to the mesolimbic areas, dopamine dysregulation is also seen in brain regions necessary for emotional processing, including the amygdala and prefrontal cortex (Brisch et al., 2014). Put more simply, the mesolimbic areas form our reward pathway, activated by things like the sugar we ingest (Adinoff, 2004); this region also supports processing of what is real and what is not. Hyperactive dopamine transmission in this region results in positive symptoms, such as hallucinations and delusions (Brisch et al., 2014). The prefrontal cortex (PFC) is understood to support memory, perception, and many cognitive functions such as attention, impulse inhibition, prospective memory, and cognitive flexibility (Pryor & Veselis, 2006). Parts of the PFC help us perform tasks while other parts help us take in information. Hypoactive dopamine transmission in the PFC leads to negative symptoms, meaning the absence of something that should be present, such as the ability to communicate (Shultz et al., 2007; Siddiqui & Goyal, 2008). This is what we currently know about schizophrenia from a chemical standpoint, and it has guided how people with schizophrenia are best prescribed treatment.

Schizophrenia Treatment Globally

Treatment is just as diverse worldwide as our understanding of the disease itself: different medicinal regulations and practices have formed different treatments across regions. Nigeria’s residents have been more inclined to use traditional medicine to treat schizophrenia (Ayonrinde et al., 2004); herbalists, traditional healers, and spiritual healing are all commonly sought out (Adewuya, 2015). In contrast, Canada, Bermuda, and the United States prefer antipsychotics (Crockford & Addington, 2017). A systematic review of the effectiveness of traditional healers in treating mental disorders, covering studies that included Nigeria, concluded that people with acute relapses improve in the care of traditional healers, but these improvements could not be established as any different from the regular course of the illness (Endale, 2020). This illustrates the helpful properties herbs may have while also highlighting their inadequacy as a mainline treatment. From the background above, one can see that the advancement of antipsychotics has helped people with schizophrenia the most of all treatments. On the use of herbs, the US National Library of Medicine National Institutes of Health has reported that herbs can be beneficial alongside regular antipsychotics, and this view is not confined to that organization (Chengappa, 2018). K. N. Roy Chengappa, M.D., a professor of psychiatry, offers similar thoughts, stating that herbs can reduce worsening symptoms but should be taken along with antipsychotics (Chengappa, 2018).


This paper aimed to review the literature on the history of schizophrenia and the development of the most effective treatment. Schizophrenia is classified by positive and negative symptoms and is affected by the lack or overabundance of dopamine transmission. It took many years, and will take more, to discover a more effective treatment for schizophrenia. Schizophrenia is a complicated disease that has stumped a generation and left us in mystery. We entertain the fruits of life in hopes of striking a discovery and coming one step closer to curing schizophrenia. Although more definitive answers would have been preferable, we are left with a scramble of highly educated guesses.

William Onubogu, Youth Medical Journal 2022


Abou-Setta, A. M., Mousavi, S. S., Spooner, C., Schouten, J. R., Pasichnyk, D., Armijo-Olivo, S., & Hartling, L. (2012). First-generation versus second-generation antipsychotics in adults: comparative effectiveness. University of Alberta Evidence-based Practice Center: Rockville, MD, USA.

 Adinoff B. (2004). Neurobiologic processes in drug reward and addiction. Harvard Review of Psychiatry, 12(6), 305–320.

Ayonrinde, O., Gureje, O., & Lawal, R. (2004). Psychiatric research in Nigeria: bridging tradition and modernisation. The British Journal of Psychiatry, 184(6), 536-538.

Brisch, R., Saniotis, A., Wolf, R., Bielau, H., Bernstein, H. G., Steiner, J., Bogerts, B., Braun, K., Jankowski, Z., Kumaratilake, J., Henneberg, M., & Gos, T. (2014). The role of dopamine in schizophrenia from a neurobiological and evolutionary perspective: old fashioned, but still in vogue. Frontiers in Psychiatry, 5, 47.

Crockford, D., & Addington, D. (2017). Canadian schizophrenia guidelines: schizophrenia and other psychotic disorders with coexisting substance use disorders. The Canadian Journal of Psychiatry, 62(9), 624-634.

Ernest, D., Vuksic, O., Shepard-Smith, A., & Webb, E. (2017). Schizophrenia: An information guide. Centre for Addiction and Mental Health.

Freeman, W. (1953). Hazards of lobotomy: Study of two thousand operations. Journal of the American Medical Association, 152(6), 487-491.

Lavretsky, H. (2008). History of Schizophrenia as a Psychiatric Disorder. Clinical Handbook of Schizophrenia, 25-29.

Lieberman J. A. (1996). Atypical antipsychotic drugs as a first-line treatment of schizophrenia: a rationale and hypothesis. The Journal of Clinical Psychiatry, 57 Suppl 11, 68–71.

López-Muñoz, F., Ucha-Udabe, R., & Alamo, C. (2005). The history of barbiturates a century after their clinical introduction. Neuropsychiatric Disease and Treatment, 1(4), 329–343.

Ögren, K., & Sandlund, M. (2007). Lobotomy at a state mental hospital in Sweden. A survey of patients operated on during the period 1947–1958. Nordic Journal of Psychiatry, 61(5), 355-362.

Patel, K. R., Cherian, J., Gohil, K., & Atkinson, D. (2014). Schizophrenia: overview and treatment options. P & T: A Peer-Reviewed Journal for Formulary Management, 39(9), 638–645.

Pryor, K.O. & Veselis, R. A. (2006). Chapter 29 – Consciousness and cognition. Foundations of Anesthesia (Second Edition), 349-359.

Ramachandraiah, C. T., Subramaniam, N., & Tancer, M. (2009). The story of antipsychotics: Past and present. Indian Journal of Psychiatry, 51(4), 324–326.

Schwartz, R. C., & Blankenship, D. M. (2014). Racial disparities in psychotic disorder diagnosis: A review of empirical literature. World Journal of Psychiatry, 4(4), 133–140.

Sedhai, Y. R., Lamichhane, A., & Gupta, V. (2021). Agranulocytosis. StatPearls [Internet].

Shultz, S. H., North, S. W., & Shields, C. G. (2007). Schizophrenia: A Review. American Family Physician, 75(12), 1.

Siddiqui, A., & Goyal, N. (2008). Neuropsychology of prefrontal cortex. Indian Journal of Psychiatry, 50(3), 202–208.

Tandon, R. (2011). Antipsychotics in the treatment of schizophrenia: an overview. The Journal of Clinical Psychiatry, 72(suppl 1), 0-0.

Terrier, L. M., Lévêque, M., & Amelot, A. (2019). Brain Lobotomy: A Historical and Moral Dilemma with No Alternative?. World Neurosurgery, 132, 211-218.

Health and Disease

Management of Hypertensive Disorders in Pregnancy to Prevent Maternal and Foetal Mortality


Hypertensive complications in pregnancy are increasing in prevalence and often cause significant maternal and foetal morbidity and mortality. Managing and treating these disorders aims to prevent serious cerebrovascular and cardiovascular effects in the mother without compromising foetal well-being.


In the United States of America alone, hypertension occurs in roughly 6-8% of pregnancies among women aged 20-44. Associated complications of hypertension in pregnancy, including pre-eclampsia, eclampsia, and end-organ damage, are leading causes of maternal and foetal morbidity and mortality worldwide. The main strategy in treating hypertension in pregnancy is to prevent cerebrovascular and cardiac complications for the mother whilst preserving the uteroplacental and foetal circulation and limiting medication toxicity to the foetus.

Classification of Hypertension Disorders

These hypertensive pregnancy disorders are diagnosed using a variety of tests, including blood pressure monitoring, the PlGF (placental growth factor) test and urine tests for proteinuria (increased levels of protein in the urine). Blood pressure readings higher than 140/90 mm Hg must also be monitored closely.

–   Preeclampsia and Eclampsia

High blood pressure and proteinuria of over 300 mg, after 20 weeks’ gestation, are both characteristics of these disorders. The difference between them is that eclampsia is a convulsive, more life-threatening form of preeclampsia, which affects 0.1% of all pregnancies. The disorder is thought to be caused by placental malperfusion resulting from abnormal remodelling of the maternal spiral arteries.

–   Gestational Hypertension

This disorder is diagnosed when high blood pressure is measured for the first time in a patient after 20 weeks’ gestation, alongside the absence of proteinuria. Gestational hypertension is significantly less dangerous than preeclampsia/eclampsia since the patient has not developed renal impairment, hence the absence of proteinuria.

–   Chronic Hypertension

Chronic hypertension in pregnancy is defined as blood pressure greater than 140 mm Hg systolic and/or 90 mm Hg diastolic, present before pregnancy or before 20 weeks of gestation – however, many women seek care for chronic hypertension only after becoming pregnant. This disorder is estimated to be present in approximately 3-5% of pregnancies and is increasingly commonly encountered.

The two main risk factors contributing to this increasing prevalence of chronic hypertension are obesity and older maternal age, both of which are also increasingly prevalent in pregnancy. Although many women with chronic hypertension remain stable during pregnancy and delivery, they are at greater risk of several pregnancy complications, particularly superimposed preeclampsia, placental abruption, and preterm birth.

–   Chronic Hypertension with superimposed preeclampsia

This disorder is the new onset of proteinuria in a patient with hypertension present before 20 weeks of gestation. Although similar to chronic hypertension, it is categorised separately due to the onset of proteinuria, which drastically increases the patient’s risk of HELLP syndrome (Haemolysis, Elevated Liver enzymes and Low Platelets – a rare liver and blood clotting disorder that can affect pregnant women).
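
The four diagnostic categories above can be summarised as a simple decision procedure on the timing of hypertension onset, the presence of proteinuria, and convulsions. The sketch below is illustrative only — the function name and inputs are hypothetical, it merely restates the criteria given in this section, and it is no substitute for clinical assessment:

```python
def classify_hypertensive_disorder(onset_weeks, proteinuria, seizures=False):
    """Label a hypertensive pregnancy disorder per the criteria in the text.

    onset_weeks: gestational age (weeks) at which high blood pressure first appeared.
    proteinuria: True if proteinuria (over 300 mg) is present.
    seizures: True if convulsions have occurred (distinguishes eclampsia).
    """
    if onset_weeks < 20:
        # Hypertension present before 20 weeks is chronic; new-onset
        # proteinuria on top of it marks superimposed preeclampsia.
        return ("chronic hypertension with superimposed preeclampsia"
                if proteinuria else "chronic hypertension")
    if proteinuria:
        # After 20 weeks, proteinuria distinguishes preeclampsia from
        # gestational hypertension; convulsions mark eclampsia.
        return "eclampsia" if seizures else "preeclampsia"
    return "gestational hypertension"
```

Note how the decision mirrors the text: the absence of proteinuria after 20 weeks yields the less dangerous gestational hypertension, while its presence signals renal involvement.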

Overall, four main organ systems can suffer acute complications of hypertensive pregnancies: the cardiovascular, renal and hepatic systems and the central nervous system (CNS). For mild to moderate hypertension in pregnancy, maternal risks are small, although there may be adverse consequences of high blood pressure for foetal cerebrovascular development. In contrast, early-onset and severe preeclampsia carry a significant risk of later cardiovascular and renal morbidity and mortality, particularly for the mother.

Managing Hypertensive Pregnancy Disorders

Non-Pharmacological Approaches

In non-pregnant hypertensive patients, lifestyle changes and interventions, including weight loss and reducing salt intake, are often the course of treatment. However, there is currently no evidence that such approaches, for instance an exercise and diet programme, are effective in preventing and managing hypertension in pregnancy. A 2010 study concluded that exercise training could reduce preeclamptic features in animal models, both before and after gestation, but randomised, double-blind trials in humans have not had similar results. Similarly, although obesity is a contributing risk factor for gestational hypertension, no evidence establishes that weight loss interventions can prevent hypertensive disorders in pregnancy.

There are very few non-pharmacological approaches available for managing hypertensive pregnancy disorders, partly because of the lack of evidence supporting such approaches. Nevertheless, bed rest continues to be the most frequent advice for patients with preeclampsia, and it has been shown to lower blood pressure and promote renal function, both of which help prevent dire complications during delivery. However, since the progression of preeclampsia to eclampsia is sudden and unpredictable, patients with this condition will be admitted to hospital for observation, where pharmacological approaches are often used due to the severity of the condition.

The only definitive therapy for acute hypertensive syndromes (preeclampsia and eclampsia) is delivery, especially when urgent control of blood pressure is necessary or when the risk of harm to the foetus and/or the mother is significantly high. Delivery should be postponed for as long as possible to enable foetal maturation, particularly of the respiratory system – premature babies often have underdeveloped lungs in which not enough surfactant has been produced, which can lead to lung collapse and respiratory distress. The timing of delivery also depends on the extent of preeclampsia and the risk of complications, dictated by the current gestational period, liver and renal function tests, coagulation tests, and so on. Although delivery is seen as a definitive treatment, expectant management and close observation are appropriate, particularly for patients before 32 weeks’ gestation, as the foetus will be underdeveloped and the risk of mortality high.

Pharmacological Approaches

The aim of pharmacological approaches during pregnancy is to prevent progression to severe hypertension and maternal complications, and to improve foetal development by prolonging the pregnancy.

The two main classes of pharmacological agents used to treat hypertensive pregnancies are anti-hypertensive agents and beta blockers. Anti-hypertensive agents are a class of drugs used to treat hypertension, acting as vasodilators or inhibitors of noradrenaline release. However, these medications should be ceased if diastolic blood pressure falls too low, which can result in maternal ischemia and potentially heart failure: low diastolic BP can restrict foetal blood supply, threatening dangerously low oxygen saturation levels. Anti-hypertensive drugs are successful in reducing blood pressure but must be monitored closely to prevent low diastolic BP and to limit foetal growth restriction. In comparison, beta blocker medications are the most preferred treatment for hypertension in pregnancy due to their proven safety and efficacy and their lack of association with adverse maternal or foetal outcomes.

However, the pharmacological management of hypertension in pregnancies remains controversial and understudied, particularly due to the various and complex factors affecting maternal and foetal wellbeing. Furthermore, increases in diversity and variability across patients’ clinical responses to medications require individualised assessments for dosing.


Hypertensive disorders are a common complication of pregnancy, and given the significant risk of morbidity and mortality, the main issue in managing them is finding a balance between the maternal benefits of BP control and the foetal risks of intrauterine medication toxicity and potential growth restriction. The treatment of hypertension may improve the risk profile for mother and baby, and thereby delay delivery to increase survival rates for the foetus, but it does not cure hypertension or preeclampsia, nor does it delay the progression to preeclampsia.


Brown, M.A. (2018). Hypertensive Disorders of Pregnancy. Hypertension, [online] 72(1), pp.24–43. Available at: [Accessed 28 Oct. 2021].

CDC (2019). High Blood Pressure during Pregnancy. [online] Centers for Disease Control and Prevention. Available at: [Accessed 28 Oct. 2021].

Deak, T.M. and Moskovitz, J.B. (2012). Hypertension and Pregnancy. Emergency Medicine Clinics of North America, [online] 30(4), pp.903–917. Available at: [Accessed 28 Oct. 2021].

Easterling, T.R. (2014). Pharmacological management of hypertension in pregnancy. Seminars in Perinatology, [online] 38(8), pp.487–495. Available at: [Accessed 28 Oct. 2021].

Falcao, S. and Bisotto, S. (2010). Exercise training can attenuate preeclampsia-like features in an animal model. Journal of Hypertension, [online] 28(12), pp.2446–2453. Available at: [Accessed 28 Oct. 2021].

Hladunewich, M., Karumanchi, S.A. and Lafayette, R. (2007). Pathophysiology of the Clinical Manifestations of Preeclampsia. Clinical Journal of the American Society of Nephrology, [online] 2(3), pp.543–549. Available at: [Accessed 28 Oct. 2021].

Kattah, A.G. and Garovic, V.D. (2013). The Management of Hypertension in Pregnancy. Advances in Chronic Kidney Disease, [online] 20(3), pp.229–239. Available at: [Accessed 28 Oct. 2021].

NHS Choices (2019). Newborn Respiratory Distress Syndrome. [online] National Health Service. Available at: [Accessed 28 Oct. 2021].

NICE. (n.d.). BNF is only available in the UK. [online] Available at: [Accessed 28 Oct. 2021].

Preeclampsia Foundation (2020). Signs and Symptoms. [online] Preeclampsia Foundation. Available at: [Accessed 28 Oct. 2021].

Vatish, M. (2016). Improving the Prediction of Preeclampsia. [online] Nuffield Department of Women’s & Reproductive Health: Medical Sciences Division. Available at: [Accessed 28 Oct. 2021].