Biomedical Research

AI and Moral Status


The question of moral status has plagued humans since Aristotle, around 300 BCE [1]. In broad terms, moral status is attributed to an entity if and only if its “interests morally matter to some degree for the entity’s own sake” [2]. In other words, an entity with moral status is owed certain considerations, which must be taken into account when deciding how to treat it. Moral status has become increasingly important in areas such as bioethics, medical ethics, and even environmental ethics. For instance, is it morally correct to perform experiments on mice for the sake of scientific advancement? Are we right to consume farmed animal products? Does a human embryo hold any moral status, and how does this play a role in abortion rights? These are all questions worth debating, especially considering that humans make up only 0.01% of life on Earth [3]. Yet until recently, these questions have been limited to biological beings. As science and technology continue to advance, philosophers and scientists are expanding their scope to consider novel and unfamiliar beings, namely artificial intelligence.

Artificial Intelligence

Artificial Intelligence (AI) is a branch of computer science concerned with creating smart machines that have human capabilities. AI already exists in today’s world, from Netflix recommendations to Alexa and Siri. But when it comes to moral status, experts are focusing on a type of AI called Artificial General Intelligence (AGI). AGI refers to the capacity to perform any intellectual task with the efficiency of a human, including thinking rationally and acting humanly. While no AGI currently exists, a number of companies and researchers are working on creating such an entity [4].

Since the beginning of AI as a field, academic scholars have argued that the brain can be understood as a systematically engineered structure of complex nonlinear systems [5]. Considering how closely neuroscientists and engineers collaborate on technology, this comes as no surprise. For instance, the processes AI systems use have become more human-like in the way they integrate sensory information and learn from previous mistakes. Moreover, the prevalence of neurological terminology, such as neural networks and input/output channels, suggests that AI is becoming more and more anthropomorphic [6].

Moral Status

Currently, our conception of moral status remains largely binary – an entity is either deemed worthy, or not. However, the absence of universal criteria has made it difficult for experts to determine which entities, including AGI, deserve this significant label. Most scientists and philosophers name sentience and sapience as the two main factors to consider. Loosely defined, sentience is the capacity for qualia, meaning the capacity to experience pain and suffering. Sapience, on the other hand, is a set of abilities attributed to higher intelligence, including reasoning, responsiveness, and self-awareness [7].

Nevertheless, the only model of sentience and sapience that we have is our own, making it challenging to fathom what these phenomena might look like in other beings [8]. Therefore, while research focuses on understanding sentience, sapience, moral status, and how the three intertwine, it is equally important for experts to consider the public when forming their decisions.

The Importance of Public Opinion

Unfortunately, there is very little research regarding the public’s opinion about an AGI, or any similar entity, having moral status. In fact, a recent research paper outlined the lack of information, finding only 294 relevant research or discussion pieces on the topic [1]. This is concerning: granting AGI moral status may initiate changes to social and legal systems, meaning that society would have to interact with AGI far more, and these interactions may appear or feel oddly intimate [7]. Not knowing where the public stands could lead to serious problems in the future.

Figure 1: Sophia the robot [10]

Hanson Robotics’s Sophia serves as an example. In 2016, Hanson Robotics launched Sophia the robot, who eventually became the first robot in history to be granted citizenship of a country. Sophia is capable of engaging in general conversation, moving in a human-like way, and expressing emotion. While this may seem like AGI behavior, numerous experts have confirmed that it is not – the technology simply isn’t there yet. But her appearance and speech continue to fool the public, many of whom describe Sophia’s citizenship with adjectives like ‘weird’ and ‘creepy’. Others have pointed out that Sophia received better rights than women in Saudi Arabia because she did not have to wear a headscarf, among numerous other reasons [9]. The citizenship also sparked a discussion of whether to refer to Sophia as “she” or “it”, suggesting that the line between humans and robots is only getting blurrier [10].

This example shows the importance of gauging public opinion: experts must consider the public’s views before granting AGI moral status. Additionally, previous studies have shown that attitudes towards proposed technologies can be influenced by the way they are framed [11]. To lessen the chance of public outrage, professionals must also consider the way they introduce the idea of moral status in AGI to society.


With the rapid advancements in neuroscience and technology, it seems almost inevitable that Artificial General Intelligence will come to be. As a result, moral status needs to be considered; it influences the way we interact with our environment and ourselves, ultimately contributing to our definition of morality. Thus, it is extremely important that experts factor in the public’s opinion when making their decision. After all, the introduction of AGI will majorly affect society.

Saanvi, Youth Medical Journal 2022


[1] Harris, Jamie et al. (2021). The Moral Consideration of Artificial Entities: A Literature Review. Retrieved June 20, 2022.

[2] Stanford Encyclopedia of Philosophy (2021). The Grounds of Moral Status. Retrieved June 20, 2022.

[3] Ritchie, Hannah (2019). Humans make up just 0.01% of Earth’s life – what’s the rest? Retrieved June 20, 2022.

[4] Javatpoint. Types of Artificial Intelligence. Retrieved June 20, 2022.

[5] Long, Lyle N., Troy D. Kelley (2010). Review of Consciousness and the Possibility of Conscious Robots. Retrieved June 20, 2022.

[6] Tyagi, Neelam (2022). When Artificial Intelligence (AI) And Neuroscience Meet. Retrieved June 22, 2022.

[7] Hurley, M. (2021). Should AI Have Moral Status? The Importance of Gauging Public Opinion. The Neuroethics Blog. Retrieved July 1, 2022.

[8] Bostrom, Nick, Eliezer Yudkowsky (2011). The Ethics of Artificial Intelligence. Retrieved July 1, 2022.

[9] Skynet Today (2016). Sophia the Robot, More Marketing Machine Than AI Marvel. Retrieved July 6, 2022.

[10] Weller, Chris (2017). We couldn’t figure out whether to call the first robot citizen ‘she’ or ‘it’ — and it reveals a troubling truth about our future. Retrieved July 6, 2022.

Biomedical Research Health and Disease

The Evolution of Sulfonylureas as Hypoglycaemic Drugs Over Time, Their Mechanisms and how they Treat Symptoms of Type II Diabetes Mellitus.


Type 2 diabetes mellitus can be a difficult disease to live with and can severely affect one’s quality of life. Diabetes mellitus is a chronic condition in which the body cannot regulate blood glucose levels; the two main types are type 1 and type 2, due to either an inability to produce insulin (type 1) or the insulin produced being ineffective (type 2). Type 2 diabetes, or non-insulin-dependent diabetes mellitus, can occur as a result of lifestyle factors, such as diet and obesity, which lead to insulin resistance or an inability to produce as much insulin as necessary. Currently, there are 4.1 million people in the UK with diabetes, with 90% of these cases due to type 2 diabetes, and it is estimated that 1 in 10 adults will develop type 2 diabetes by 2030 (Iacobucci, 2021).

One treatment for type 2 diabetes is the use of sulfonylureas – a group of oral drugs with hypoglycaemic effects (the ability to lower blood glucose levels). Since their discovery in the 1940s, medicinal chemists have changed the structure of these drugs to make them more suitable for clinical use. These modifications have led to more favourable properties in metabolism, potency, efficacy and safety, which have made the drugs a more effective, safe and convenient treatment for type 2 diabetes mellitus, as discussed later in this article.

This article will explain the chemistry of sulfonylureas, the pharmacology behind them and how they have changed over time to make them more effective in the treatment of type 2 diabetes mellitus.

Type 2 Diabetes Mellitus Cause

Type 2 diabetes occurs when there is a deficiency in insulin secretion by the β-cells in the pancreas, or when cells develop a resistance to insulin action (Galicia-Garcia, et al., 2020). This is usually due to obesity and an unhealthy lifestyle, including lack of exercise and a diet high in fat and sugar. Insulin is a peptide hormone secreted by the β-cells of the pancreas. It is responsible for lowering blood glucose levels by stimulating the conversion of glucose in the blood into glycogen, to be stored in muscle, fat, and liver cells. A deficiency of, or resistance to, insulin leads to hyperglycaemia (high blood glucose levels), due to the reduced ability to convert glucose into glycogen. This leads to symptoms such as vomiting, dehydration, confusion, increased thirst, and blurred vision, to name a few.

Physiology Behind Insulin Secretion and Structure

To understand the pharmacology of the sulfonylurea compounds, one must first understand the physiology behind the secretion of insulin.

As stated above, insulin is a peptide hormone. The insulin gene (found on chromosome 11) is transcribed, and the resulting mRNA is translated into a single precursor chain, preproinsulin. This precursor is processed into proinsulin and then cleaved, leaving two peptide chains (the A and B chains) held together by two disulfide bonds to form the mature hormone insulin (Brange & Langkjoer, 1993).

Insulin secretion must be tightly controlled to maintain efficient glucose homeostasis, so it is regulated precisely to meet demand. The β-cells of the pancreas contain glucose transporter 2 (GLUT2), a carrier protein that allows facilitated diffusion of glucose molecules across the cell membrane. These transporters allow glucose to be detected and to enter the β-cells. As cytoplasmic glucose levels rise, the pancreatic β-cells respond by increasing oxidative metabolism, leading to increased ATP in the cytoplasm (Fridlyand & Philipson, 2010). This ATP binds to ATP-sensitive K+ channels on the cell membrane, causing them to close. K+ ions can no longer leave the cell, so positive charge builds up and the cell depolarises. The increasingly positive membrane potential opens voltage-gated Ca2+ channels, and the resulting influx of Ca2+ ions triggers the release of insulin, packaged in secretory vesicles, by exocytosis (Fu, et al., 2013).
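The chain of events above can be sketched as a toy, threshold-based model. This is purely illustrative: the glucose threshold and the simple yes/no steps are invented for illustration and are not physiological values.

```python
# Toy, threshold-based sketch of glucose-stimulated insulin secretion.
# The 5.0 mM threshold and the boolean steps are invented for illustration.

def beta_cell_response(glucose_mM: float) -> dict:
    """Trace the signalling steps described above for a given glucose level."""
    state = {"glucose_mM": glucose_mM}
    # 1. GLUT2 lets glucose enter; oxidative metabolism raises cytoplasmic ATP.
    state["atp_rises"] = glucose_mM > 5.0          # illustrative threshold
    # 2. ATP closes the ATP-sensitive K+ channels, so K+ can no longer leave.
    state["katp_closed"] = state["atp_rises"]
    # 3. Trapped positive charge depolarises the membrane.
    state["depolarised"] = state["katp_closed"]
    # 4. Depolarisation opens voltage-gated Ca2+ channels; Ca2+ flows in.
    state["ca_influx"] = state["depolarised"]
    # 5. Ca2+ influx triggers exocytosis of insulin-containing vesicles.
    state["insulin_secreted"] = state["ca_influx"]
    return state

print(beta_cell_response(4.0)["insulin_secreted"])  # low glucose: False
print(beta_cell_response(9.0)["insulin_secreted"])  # high glucose: True
```

In this sketch, a sulfonylurea would simply force `katp_closed` to True regardless of ATP, which is the shortcut the drugs exploit.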

Pharmacology of Sulfonylureas

Sulfonylureas act inside the pancreatic β-cells. The ATP-sensitive K+ channel carries sulfonylurea receptors to which the drug binds, causing the channels to close. The cascade of events that follows, mimicking the response to glucose described above, leads to the release of insulin by the pancreatic β-cell (Panten, et al., 1996).

This process allows more insulin to be released, lowering blood glucose levels when insufficient insulin is produced naturally. Sulfonylureas are effective only in type 2 diabetes: unlike in type 1 diabetes, insulin production is not abolished; rather, insulin release is insufficient or the body is resistant to it, so stimulating secretion can still lower blood glucose.

Common Chemistry of all Sulfonylureas

All sulfonylurea drugs are characterised by their common sulfonylurea group. This functional group allows the compounds to bind to the sulfonylurea receptors (SUR) on ATP-sensitive K+ channels, giving them their hypoglycaemic properties. The common structure of sulfonylureas is shown in figure 1 (Fvasconcellos, 2011), with the blue R groups indicating replaceable side chains, which vary between drugs and give each slightly different properties. Over time, scientists have improved the drugs’ efficacy by changing these side groups, and research has led to the development of further drugs in the same pharmacological group with altered side chains. These changes have altered properties such as potency, metabolism, half-life, tolerance and safety, making the drugs more effective for clinical use.


Figure 1 Sulfonylurea functional group

History and development of the drugs and their chemical structure

Sulfanilamide and IPTD

In 1935, a French research team discovered the active chemical in the antibiotic prontosil, known as sulfanilamide (Sorkhy & Ghemrawi, 2020). Sulfanilamide was found to be a poor antibiotic, and so derivatives of it were synthesised and tested. Some of these compounds, such as p-amino-sulfonamide-isopropylthiodiazole (IPTD), which was used as an antibiotic for the treatment of typhoid in 1942, revealed unexpected hypoglycaemic side effects. These were discovered by the French physician Marcel Janbon (Quianzon & Cheikh, 2012). However, scientists could not identify how these side effects were caused.

In 1946, Auguste Loubatières investigated the effect of IPTD on dogs. He administered the drug to fully and partially pancreatectomised dogs and found that it was ineffective in the fully pancreatectomised dogs but effective in the partially pancreatectomised ones. This led to his conclusion that the drug’s hypoglycaemic property was due to its ability to stimulate insulin secretion directly in the pancreatic β-cells (Loubatières-Mariani, 2007).


The first sulfonylurea to be marketed as a drug for diabetes was Carbutamide. It was synthesised in East Germany by Ernst Carstens, and in the early 1950s clinical trials of this sulfanilamide derivative were carried out by Hellmuth Kleinsorge for the treatment of urinary tract infections. During treatment, however, side effects of hypoglycaemia were noted (Kleinsorge, 1998) – similar to those experienced by patients treated with IPTD for typhoid in 1942.

These findings were presented to Erich Haak, of the East German Ministry of Health, in 1952, which ultimately culminated in the ban of the drug. Haak later moved to West Germany where he patented the drug to be tested for antibacterial use, without disclosing the side effects of hypoglycaemia. Karl Joachim Fuchs, a doctor who was part of this drug testing, noticed symptoms of ravenous hunger and euphoria upon taking the drug himself, which were found to be due to hypoglycaemia. Following this, studies were undertaken, and a general conclusion was that Carbutamide was most effective in people over 45 years of age, who had had diabetes for less than 5–10 years and had not used insulin for more than 1–2 years (Tattersall, 2008). The use of Carbutamide was short lived as it was found to have fatal side effects in a small number of people, including toxic effects on bone marrow (National Center for Biotechnology, 2005).

The structure of Carbutamide is shown in figure 2 (Anon., 2021). Attached to the benzene ring on the left-hand side of the sulfonylurea functional group is an amine group; attached to a second amine group on the right side of the functional group is a four-carbon chain. As mentioned previously, it is the sulfonylurea functional group that gives rise to the drug’s hypoglycaemic effects. Carbutamide was the first drug to contain this functional group (seen in figure 1) and marked the beginning of many discoveries in the treatment of non-insulin-dependent diabetes mellitus.

Figure 2 Structure of Carbutamide


After the discovery of the fatal side effects of Carbutamide, the next sulfonylurea to be synthesised was Tolbutamide, one of the first sulfonylureas to be marketed for the control of type 2 diabetes, in 1956 in Germany (Quianzon & Cheikh, 2012). The changes to the chemical structure were minimal: the amine group on the left-hand side of Carbutamide was swapped for a methyl group to give Tolbutamide, shown in figure 3 (Anon., 2021), which helped reduce the drug’s toxicity. However, as a result, Tolbutamide was metabolised too quickly (Monash University, 2021), leading to low levels of the active drug in the blood. The drug’s efficacy was therefore lower than expected, and it had to be administered twice a day, an inconvenience for patients.


Figure 3 Structure of Tolbutamide


It was soon discovered that the methyl group attached to the benzene ring in Tolbutamide was the site of its metabolism (Monash University, 2021), and so medicinal chemists replaced it with a chlorine atom in the next drug, Chlorpropamide (see figure 4) (Anon., 2021). This reduced metabolism, giving the drug a longer half-life, so it was not cleared as quickly from the body. Indeed, a University of Michigan study found that chlorpropamide serum concentration declined from about 21 mg/100 ml at 15 minutes to about 18 mg/100 ml at 6 hours, whereas tolbutamide serum concentration fell more rapidly, from about 20 mg/100 ml at 15 minutes to about 8 mg/100 ml at 6 hours. Under these experimental conditions, tolbutamide disappeared from the blood approximately 8 times faster than chlorpropamide (Knauff, et al., 1959). This meant less frequent dosing with chlorpropamide, making the drug much more convenient for patients treating type 2 diabetes. However, further research revealed that, due to the longer half-life of chlorpropamide, the hypoglycaemic effects were compounded and lasted longer than expected (Sola, et al., 2015). This meant that Chlorpropamide could not be administered as a safe treatment for type 2 diabetes.
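Assuming a simple first-order (single-exponential) decline between the two quoted time points – a simplification of the full published time course – the apparent elimination rate constants can be estimated from the serum concentrations above:

```python
import math

# Apparent first-order elimination rate constants estimated from the two serum
# concentrations quoted above (Knauff et al., 1959). Treating the decline as a
# single exponential C(t) = C0 * exp(-k*t) is a simplifying assumption.
t0_h, t1_h = 0.25, 6.0                       # 15 minutes and 6 hours
dt = t1_h - t0_h

k_tolbutamide    = math.log(20 / 8)  / dt    # 20 -> 8  mg/100 ml
k_chlorpropamide = math.log(21 / 18) / dt    # 21 -> 18 mg/100 ml

print(round(k_tolbutamide / k_chlorpropamide, 1))  # ratio of clearance rates
print(round(math.log(2) / k_chlorpropamide, 1))    # chlorpropamide half-life, h
```

This two-point estimate gives roughly a six-fold difference and a chlorpropamide half-life of about 26 hours; the approximately eight-fold figure quoted above comes from the original study’s fuller time-course data rather than these two samples alone.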



Figure 4 Structure of Chlorpropamide


Glibenclamide is the first of what are known as the second-generation sulfonylureas. Introduced for use in 1984, these largely replaced the first-generation drugs (Carbutamide, Tolbutamide, Chlorpropamide etc.) in the routine treatment of type 2 diabetes. Due to their increased potency and shorter half-lives, these drugs could be given at lower doses and taken only once a day (Tran, 2020). The second-generation sulfonylureas have a more hydrophobic right-hand side, which increases their hypoglycaemic potency (Skillman & Feldman, 1981). In Glibenclamide, the left-hand side of the drug changed drastically from Chlorpropamide, as seen in figure 5 (Anon., 2021). This suggested to medicinal chemists an enormous number of possible changes that could be made simply by altering the left- and right-hand sides, aiming for better potency, safety, efficacy and convenience (Monash University, 2021). However, the metabolism of Glibenclamide varied between patients, and this, in addition to increased hypoglycaemia and an increased incidence of cardiovascular events (Scheen, 2021), means the drug is not a first choice for treating type 2 diabetes.


Figure 5 Structure of Glibenclamide


Glipizide, figure 6 (Anon., 2021), shares the same hydrophobic structure on the right-hand side as Glibenclamide, but a few changes to the left-hand group result in faster metabolism. It has a similar potency to Glibenclamide; however, the duration of its effects is much shorter (Brogden, et al., 1979). Glipizide has the shortest elimination half-life of all the sulfonylureas, reducing the risk of the long-lasting hypoglycaemic side effects found in earlier drugs (Anon., 2022).


Figure 6 Structure of Glipizide


Gliclazide is the most commonly used sulfonylurea in current medicine for the treatment of non-insulin-dependent diabetes mellitus; it is part of the World Health Organisation’s most recent list of essential medicines (World Health Organisation, 2021). The chemical structure of Gliclazide can be seen in figure 7 (Anon., 2021). Interestingly, medicinal chemists returned to the use of a methyl group on the left-hand side of the drug, last seen in Tolbutamide. As mentioned before, the left-hand group attached to the benzene ring is responsible for the metabolism of the compound; returning to a methyl group allows faster metabolism of the drug, which helped remove the unwanted prolonged hypoglycaemic side effects, especially in elderly patients (Monash University, 2021). The right-hand group of Gliclazide comprises two hydrophobic rings which, as mentioned previously, are responsible for its increased potency. Gliclazide has also been shown to be one of the most effective sulfonylureas: according to Harrower, three studies concluded that gliclazide is a potent hypoglycaemic agent which compares favourably with others of its type (Harrower, 1991).


Figure 7 Structure of Gliclazide


Sulfonylureas are one of several groups of drugs used to treat type 2 diabetes. Through research and trials, they have developed significantly over time, to become one of the most prescribed medications in the effective treatment of type 2 diabetes. 

The sulfonylureas discussed above represent significant developments in the physiology and pharmacology of the group since its initial discovery. Other sulfonylurea drugs have been synthesised and tested over the years, such as tolazamide and acetohexamide, but these are less commonly prescribed because of their disadvantages in potency and safety. The discovery that the left and right sides of the drugs’ common structure can be modified has led to many new members of this class, with varying potency, metabolism, efficacy, and safety, and this experimentation with chemical structures has produced progressively more effective treatments for the disease. Currently, Glipizide and Gliclazide are the two most commonly prescribed sulfonylureas, due to their high potencies and suitable half-lives combined with minimal side effects. They now provide an effective treatment for reducing the symptoms of type 2 diabetes and thus improving quality of life for those living with the disease.

AliMahdi Meghji, Youth Medical Journal 2022


Anon., 2021. Carbutamide. [Online]
Available at:
[Accessed 27 March 2022].

Anon., 2021. Chlorpropamide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2021. Gliclazide. [Online]
Available at:
[Accessed 30 March 2022].

Anon., 2021. Glipizide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2021. Glyburide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2021. Tolbutamide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2022. Glipizide. [Online]
Available at:
[Accessed 29 March 2022].

Brange, J. & Langkjoer, L., 1993. Insulin structure and stability, Bagsvaerd: Novo Research Institute.

Brogden, R. N. et al., 1979. Glipizide: a review of its pharmacological properties and therapeutic use. Drugs , 18(5), pp. 329-353.

Fridlyand, L. E. & Philipson, L. H., 2010. Glucose sensing in the pancreatic beta cell: a computational systems analysis. Theoretical Biology and Medical Modelling, 7(1), p. Article 15.

Fu, Z., Gilbert, E. R. & Liu, D., 2013. Regulation of Insulin Synthesis and Secretion and Pancreatic Beta-Cell Dysfunction in Diabetes. Current Diabetes Reviews, 9(1), pp. 25-53.

Fvasconcellos, 2011. General structural formula of a sulfonylurea, highlighting the functional group that gives the class its name and the side chains that distinguish its various members., s.l.: Wikipedia.

Galicia-Garcia, U. et al., 2020. Pathophysiology of Type 2 Diabetes Mellitus. International Journal of Molecular Sciences, 30 August, 21(17), p. 2.

Harrower, A. D., 1991. Efficacy of gliclazide in comparison with other sulphonylureas in the treatment of NIDDM. Diabetes research and clinical practice , 14(2), pp. 65-67.

Kent, M. Advanced Biology. Oxford University Press.

Kleinsorge, H., 1998. Carbutamide–the first oral antidiabetic. A retrospect. Experimental and clinical endocrinology & diabetes : official journal, German Society of Endocrinology [and] German Diabetes Association, 106(2), pp. 149-151.

Knauff, R. E., Fajans, S. S., Ramirez, E. & Conn, J. W., 1959. Metabolic studies of chlorpropamide in normal men and in diabetic subjects.. Annals of the New York Academy of Sciences , 74(3), pp. 603-617.

Iacobucci, G., 2021. The British Medical Journal. [Online]
Available at:
[Accessed 2 March 2022].

Loubatières-Mariani, M.-M., 2007. The discovery of hypoglycemic sulfonamides. Journal de la Société de Biologie, 201(-), pp. 121-125.

Monash University, 2021. The Science of Medicines MOOC, Melbourne: Future Learn.

National Center for Biotechnology, 2005. PubChem Compound Summary for CID 9564, Carbutamide. [Online]
Available at:
[Accessed 18 March 2022].

Panten, U., Schwanstecher, M. & Schwanstecher, C., 1996. Sulfonylurea receptors and mechanism of sulfonylurea action.. Experimental and clinical endocrinology & diabetes : official journal, German Society of Endocrinology [and] German Diabetes Association, 104(1), pp. 1-9.

Quianzon, C. C. L. & Cheikh, I. E., 2012. History of current non-insulin medications for diabetes mellitus. Journal of Community Hospital Internal Medicine Perspectives , 2(3), p. 19081.

Scheen, A. J., 2021. Sulphonylureas in the management of type 2 diabetes: To be or not to be?. Diabetes Epidemiology and Management, Volume 1, p. Article 100002.

Skillman, T. G. & Feldman, J. M., 1981. The pharmacology of sulfonylureas. The American journal of medicine, 70(2), pp. 361-372.

Sola, D. et al., 2015. Sulfonylureas and their use in clinical practice. Archives of medical science , 11(4), pp. 840-848.

Sorkhy, M. A. & Ghemrawi, R., 2020. Treatment: Projected Modalities for Antimicrobial Intervention. Microbiomics – Dimensions, Applications, and Translational Implications of Human and Environmental Microbiome Research, -(-), pp. 279-298.

Tattersall, R., 2008. Discovery of the sulphonylureas. TATTERSALL’S TALES, 7(2), p. 74.

Tran, D., 2020. Oral Hypoglycemic Agent Toxicity. [Online]
Available at:
[Accessed 27 March 2022].

World Health Organisation, 2021. WHO model list of essential medicines – 22nd list, 2021. [Online]
Available at:
[Accessed 30 March 2022].

Biomedical Research

Dostarlimab: Hope or Hype?


Cancer is very often placed at the forefront of medical research, and with an estimated 1 in 2 people expected to develop cancer at some point in their lives,1 it is becoming increasingly important that novel drugs and therapies are discovered to mitigate its impacts. Over the years, we have seen the development of powerful treatment methods, from chemotherapy to radiotherapy; more recently, however, there has been a rise in the use of immunotherapy. One recent form of immunotherapy, a drug called Dostarlimab, has taken the medical world by storm after a small study reported a 100% complete clinical response.

How does it Work?

The drug works by enhancing the body’s immune response against tumour cells. It targets the interaction of two proteins, PD-L1 and PD-L2 (programmed death ligands 1 and 2), which typically weaken our immune response when bound to a complementary receptor on a T-cell. This interaction plays an important physiological role in preventing excessive destruction of non-harmful cells, as well as preventing the onset of autoimmune diseases.2

However, some tumour cells express these proteins on their surface; when these bind to a T-cell, the cell-mediated immune response is inhibited and the cancer cells escape destruction.3 Dostarlimab is a monoclonal antibody that binds to PD-1, the complementary receptor on the T-cell, and therefore prevents the interaction between tumour cells and T-cells. This allows the T-cells to identify the cancer cells and reactivates their cytotoxic activity, so the cancer cells can be attacked.

Fig: Images showing the interactions without (left) and with (right) Dostarlimab3

Usage of the Drug

Although the drug has only recently risen to fame in mainstream media, it had already begun rollout across the NHS in February 2022 as a treatment for endometrial cancer.4 Alternative options such as surgery and chemotherapy tend to be more invasive and often leave patients with a poor prognosis, which is why Dostarlimab is such an innovative drug. It requires only four half-hour sessions over a 12-week period, offering patients quicker, safer and more effective treatment.

It was only more recently that a small trial involving 12 rectal cancer patients saw a 100% remission rate.5 The patients involved suffered from a particular subset of rectal cancer caused by mismatch repair deficiency (in which cells accumulate many DNA mutations), which responds to the blockade of the PD-1 receptor on T-cells by Dostarlimab. Despite the small sample size, the strength of the result (the 95% confidence interval for the response rate spans roughly 74% to 100%) and the absence of severe side effects suggest that the drug holds a lot of potential.
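The uncertainty behind a 12-out-of-12 result can be made concrete. When every patient responds, the exact (Clopper-Pearson) 95% confidence interval for the true response rate has a simple closed-form lower bound; this is an illustrative calculation, not a figure taken from the trial report:

```python
# Exact (Clopper-Pearson) 95% confidence interval for an observed response
# rate of 12/12. When every patient responds, the lower bound reduces to the
# closed form (alpha/2)**(1/n); the upper bound is 100%.
n = 12
alpha = 0.05
lower_bound = (alpha / 2) ** (1 / n)
print(f"95% CI: {lower_bound:.1%} to 100%")
```

In other words, even a perfect 12/12 response is statistically consistent with a true response rate as low as roughly 74%, which is why larger follow-up studies still matter.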

Limitations of the Drug

Although the drug has so far been proven effective in only one particular form of the disease, an estimated 5-10% of rectal cancers are due to mismatch repair deficiency.6 With over 700,000 people diagnosed with rectal cancer each year,7 even a small proportion of those cases being treatable would represent a significant triumph.

However, the results of this trial must not be taken as a definitive yes for the use of Dostarlimab, as a follow-up study with a larger sample size is needed to increase the validity and reliability of the finding. Additionally, the patients were followed up for between 6 and 25 months5 to assess any recurrence, but ideally, longer follow-up times would allow researchers to better ascertain the long-term efficacy. A further obstacle which may hinder large-scale rollout is cost, which is a particular challenge in countries where private healthcare is dominant. According to the New York Times,8 each dose costs $11,000, and with several doses required over a 6-month period, the drug may prove to be unaffordable for many.

Such limitations are not completely restricting, as numerous solutions exist to tackle them. For example, subsidies from the government would not only allow for larger studies to be completed, but also increase research in cost reduction. Whilst this presents an opportunity cost to a country’s government, extra funding for the healthcare sector leads to better survival rates, which benefits the economy, hence creating a positive multiplier effect.


The future of Dostarlimab seems exciting and may change the way in which we treat rectal cancer. Not only is it an innovative way to treat cancer, its potential benefits to the fields of endometrial and mismatch repair deficient cancers are immense. In the near future, however, further trials, or extensions of ongoing ones, are warranted to determine whether the drug is a viable treatment method, alongside solutions that address its cost.

The unprecedented results of the trial have been groundbreaking for the medical sector and provide a great sense of hope that we will continue to discover new cancer treatments. Nonetheless, whether it proves to be a miracle cure or not, it is fair to say that immunotherapy has been revolutionary to the world of medicine, and the knowledge gained from such studies will prove valuable in the long term. 

Nyneisha Bansal, Youth Medical Journal 2022


1. Cancer [Internet]. 2022 [cited 15 June 2022]. Available from:

2. Touboul R, Bonavida B. YY1 expression and PD-1 regulation in CD8 T lymphocytes. YY1 in the Control of the Pathogenesis and Drug Resistance of Cancer. 2021;:289-309.

3. How JEMPERLI works [Internet]. Jemperli. 2022 [cited 15 June 2022]. Available from:

4. England N. NHS England » New life-extending drug for advanced womb cancer to be rolled out on the NHS [Internet]. 2022 [cited 15 June 2022]. Available from:

5. Cercek A, Lumish M, Sinopoli J, Weiss J, Shia J, Lamendola-Essel M et al. PD-1 Blockade in Mismatch Repair–Deficient, Locally Advanced Rectal Cancer. New England Journal of Medicine. 2022;.

6. Promising rectal cancer study [Internet]. ScienceDaily. 2022 [cited 15 June 2022]. Available from:

7. Colorectal Cancer – Statistics [Internet]. Cancer.Net. 2022 [cited 15 June 2022]. Available from:

8. Kolata G. A Cancer Trial’s Unexpected Result: Remission in Every Patient [Internet]. 2022 [cited 15 June 2022]. Available from:

Biomedical Research Commentary

CRISPR Gene Editing: From novel treatment to reality

Originally released in 2000, the Marvel blockbuster film series features a team of genetically enhanced beings known as the X-Men. It seems that every time a new X-Men movie is released on the big screen, the world looks to science to answer the age-old question: “Is the creation of such mutants a possibility?”. 

With the endless developments in genetic engineering and CRISPR-Cas9 gene editing (recognised with the Nobel Prize in Chemistry in 2020), it is difficult not to wonder whether the creation of such mutants in our reality is possible. Yet, much sooner than we expected, these so-called “superhumans” are already walking amongst us, with a range of unbelievable powers including super-strength, super-speed, and remarkably high brain power that increasingly mirror the superhuman powers seen on the big movie screens.

To understand the science behind these superhumans, we must first understand the basis of gene editing, which forms the foundation and function of CRISPR-Cas9.

What is CRISPR-Cas9?

CRISPR-Cas9 is a form of gene editing that allows scientists to edit parts of the genome by removing, adding, or altering sections of the DNA sequence [1]. First demonstrated as a programmable gene-editing tool in 2012, it has been at the frontier of genomic research and a hot topic within the medical community due to its simplicity, versatility, and precise method of genetic manipulation. Its low cost has ultimately made it more desirable than previous methods of DNA editing, including transcription activator-like effector nucleases (TALENs) and zinc-finger nucleases (ZFNs), which are much less cost-effective and accessible [2].

So why is CRISPR-Cas9 gene editing relevant to us right now?

The answer lies in the enormous potential of CRISPR gene editing for treating a wide range of life-threatening medical conditions that have a genetic basis, such as cancer, hepatitis B, and high cholesterol. For example, the excess fatty deposits that high cholesterol causes in major blood vessels can be tackled through genetic engineering techniques that “turn off” the genes that regulate cholesterol levels in our body [6]. A 2021 study published in Nature revealed that knocking out the protein PCSK9 with CRISPR reduced LDL cholesterol in monkeys by around 60% for at least 8 months [3]. Although it is likely to be many years before this particular therapy can be tested in humans, this kind of breakthrough in our close primate relatives is impressive. As much current research is focused specifically on ex-vivo or animal models, the intention is to use the technology to routinely treat diseases in humans that can’t be addressed through routine drugs and medications.

How does this form of gene editing work?

The foundation of CRISPR-Cas9 is two key molecules that introduce a change into the targeted DNA: the Cas9 enzyme and a guide RNA (gRNA). The guide RNA contains bases complementary to the target DNA sequence in the original genome, which helps the gRNA bind to the correct region within the DNA. The Cas9 enzyme follows the gRNA and acts as a pair of molecular scissors, making incisions in both strands of the DNA and allowing sections of DNA to be added or removed [1][4].
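The “address matching” role of the guide RNA can be illustrated with a deliberately simple sketch. The sequences below are invented for illustration, and real guide design is far more involved (it also requires an adjacent PAM motif and off-target checks), but the core idea is base-pair complementarity:

```python
# Toy illustration: a guide RNA binds where its bases are complementary
# to the target DNA strand (A-U/T and C-G pairing).
COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def matching_site(guide_rna: str, dna: str) -> int:
    """Return the index where the guide's complement appears in the DNA,
    or -1 if there is no match."""
    target = "".join(COMPLEMENT[base] for base in guide_rna)
    return dna.find(target)

genome = "TTACGGATCCGTAAGCTTGG"      # invented sequence
guide = "UAGGCAUU"                   # invented guide; its complement is ATCCGTAA
print(matching_site(guide, genome))  # the Cas9 'scissors' would cut near this site
```

A guide whose complement appears nowhere in the sequence returns -1, which is the toy analogue of Cas9 failing to find its target.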

At this point, the cell recognises the damage within the DNA and works to repair it, allowing scientists to exploit this repair machinery to introduce one or more new genes into the genome. The resulting genetic makeup differs from the “normal” human genome, producing mutations and noticeable changes in the phenotype, such as the “super-variants”: the super-sprinter variant (ACTN3), the super-sleeper mutation (hDEC2), and the super-taster variant (TAS2R38) [5][7].

There is also extensive research into eliminating the “off-target” effects of CRISPR, where the Cas9 enzyme makes cuts at a site other than the intended one, introducing a mutation in the wrong region. Whilst some of these changes are inconsequential to the overall phenotype, others may affect the structure and function of another part of the genome. One suggested solution is to use Cas9 enzymes that cut only a single strand of the target DNA rather than both [4].

The next generation of enhanced individuals?

Though the alteration of the human genome is very much already a reality, the creation of ‘mutant’ individuals with more fantastical powers, such as Wolverine’s special healing and animal-keen senses, or the Scarlet Witch’s telekinesis and matter manipulation, remains purely fictional. As of right now, the use of CRISPR in medicine is solely therapeutic, used for repairing or altering innate mutations rather than creating them. Yet it can be argued that these genetic changes give the patient better DNA than that which they were born with, making them the first generation of genetically modified humans to walk the earth – mutants indeed.

In the X-Men franchise, all mutants carry an ‘X-gene’ which bestows upon them their aforementioned abilities. Unfortunately, no such gene exists – our phenotype arises from a much more complicated relationship between genes and presenting characteristics, and the effects of current gene editing pale in comparison to what is shown in blockbuster movies. That said, hope is not lost: extensive research and development within this field continually offer the possibility of giving individuals similar ‘powers’ to those of the X-Men and Professor X on an increasingly real scale. 

Below are some examples of X-Men’s superpowers alongside their real-world human genetic mutation counterparts [5][7]:

X-Men ability: Existing human genetic variation
1. Animal-keen senses: hDEC2 (super-sleeper mutation)
2. Super-speed: ACTN3 (super-sprinter variant)
3. Super-strength: LRP5 (unbreakable bone mutation)
4. Enhanced senses: TAS2R38 (super-taster variant)

Do scientists think it is possible for some of these powers to be attributed to genetic mutation? The simple answer is yes. But unsurprisingly, the uncertain and unpredictable nature of new treatments will always generate some degree of ethical controversy in the scientific community, and CRISPR is no different. The use of CRISPR technology in medicine will undoubtedly become more mainstream in the near future, and once the door is open for genetic modifications to embryos, babies, and adults alike, there is no going back. As with many medical technologies in the past, human health and safety may fail to be at the forefront of CRISPR’s use, leading to all kinds of unnecessary complications. The impact that CRISPR-Cas9 will have on the medical field, now and in the next generation, is undeniable, whether it’s curing a rare form of cancer or creating the first generation of real-life X-Men [8].

There are many unanswered questions surrounding this topic, and this is unlikely to change. But as the research continues and our questions go on, I would like to leave you with only one… What would your superpower be?


Works Cited

1. 2022. What is CRISPR-Cas9? [online] Available at: [Accessed 15 March 2022].

2. Beumer, K.J., Trautman, J.K., Christian, M., Dahlem, T.J., Lake, C.M., Hawley, R.S., Grunwald, D.J., Voytas, D.F. and Carroll, D. (2013). Comparing Zinc Finger Nucleases and Transcription Activator-Like Effector Nucleases for Gene Targeting in Drosophila. G3: Genes|Genomes|Genetics, [online] 3(10), pp.1717–1725. doi:10.1534/g3.113.007260.

3. 2022. [online] Available at: [Accessed 15 March 2022].

4. 2022. [online] Available at: [Accessed 15 March 2022].

5. Business Insider. 2022. 8 genetic mutations that can give you ‘superpowers’. [online] Available at: [Accessed 15 March 2022].

6. 2022. Gene tweak creates supermouse – and prevents diabetes | New Scientist. [online] Available at: [Accessed 15 March 2022].

7. Business Insider. 2022. 8 genetic mutations that can give you ‘superpowers’. [online] Available at: [Accessed 15 March 2022].

8. Pinkstone, J., 2022. Human beings could achieve immortality by 2050. [online] Mail Online. Available at: immortality-2050.html [Accessed 15 March 2022].

Biomedical Research Commentary Health and Disease

Behind the Controversial and Forbidden Technique of Gene Editing


Gene editing is one of the biggest names in the biotechnology industry. On the surface, it seems like a tool that could help prevent genetic diseases; dive deeper, however, and it can be very unpredictable, causing abnormal and irregular outcomes in subjects. The technique is forbidden for certain uses in many parts of the world, it is highly controversial, and users of such technology have been imprisoned. Many nations around the world are still researching gene editing, so perhaps one day it could become a safe and reliable tool to bring an end to the rise of genetic diseases.

History Behind Gene Editing

To fully understand how the concept of gene editing was first derived, you have to look at its history. Research into genetics took off in the 1950s and 1960s; discoveries in this period paved the way for the future study of genetics and biotechnology.

It all started with the discovery of the double helix structure of DNA in 1953 by James Watson and Francis Crick, building on the X-ray diffraction work of Rosalind Franklin. The discovery of the double helix was a landmark moment in the history of genetics. It was followed in 1958 by Arthur Kornberg’s isolation of DNA polymerase from bacterial extracts; within a year he was able to successfully synthesise DNA in vitro for the first time.

Leading into the 1960s, a key innovation came in 1962, when Osamu Shimomura isolated the green fluorescent protein (GFP) from the jellyfish Aequorea victoria; decades later, Martin Chalfie and Roger Tsien developed it into a research tool. The gene coding for GFP can be fused with another gene that produces a protein of interest (POI), and because GFP glows when exposed to blue light, this reveals the location of the POI and allows researchers to track which cells produce it.

Following this, the discovery of DNA ligase in 1967 was a pivotal point in molecular biology, since DNA ligase is essential for the repair and replication of DNA in all organisms, which is what gene editing is based on. This was soon followed by the discovery of restriction enzymes, which identify and cut foreign DNA.

It wasn’t until the 1970s, though, that genetic engineering took off. During this decade, Paul Berg created recombinant DNA combining material from more than one species, in what became known as the “cut and splice” technique: DNA was cut from two viruses to create sticky ends, the fragments were incubated so the ends annealed on their own, and DNA ligase then sealed the sticky ends together. The understanding formed in this period, of how restriction enzymes cut DNA and how host cells protect their own DNA, is the basis for the modern genetic engineering therapies being developed today, such as CRISPR, which we will dive deeper into in this article.

Innovation and Controversy Behind CRISPR Gene Editing

CRISPR gene editing is based on the CRISPR-Cas systems, such as CRISPR-Cas9. These are adaptive immune systems that protect prokaryotes from bacteriophages (viruses that infect bacteria): they work by cleaving the nucleic acids of invading phages, protecting the prokaryote from infection. Over time, CRISPR-Cas9 was repurposed for gene editing. This technique was thrust into the spotlight in 2012-2013, when George Church, Jennifer Doudna, Emmanuelle Charpentier, and Feng Zhang showed that targeted regions of genomes could be modified using it.

CRISPR stands for clustered regularly interspaced short palindromic repeats, which are repeating DNA sequences in the genomes of prokaryotes, first identified in the bacterium E. coli in 1987. When these CRISPR systems were first discovered, they were thought only to serve as a prokaryotic defence mechanism against bacteriophages. In 2012, however, it was discovered that by designing a “guide” RNA, a specific region in a genome could be targeted, and that the CRISPR-Cas9 system could be used as a cut-and-paste tool to modify genomes: introducing new genes, removing old ones, and even activating or silencing genes.
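The “cut and paste” idea above can be sketched on a DNA string. This is only a toy analogy (the sequences and the helper function are invented; real editing acts on living cells, not text), but it captures the high-level operation of removing one stretch and splicing in another:

```python
# Toy 'cut and paste' on a DNA string: remove one stretch and splice in
# another, mimicking the high-level description above. Sequences invented.
def replace_segment(genome: str, old: str, new: str) -> str:
    """Cut out the first occurrence of `old` and paste `new` in its place."""
    site = genome.find(old)
    if site == -1:
        return genome                  # no target site found: nothing is edited
    return genome[:site] + new + genome[site + len(old):]

genome = "AAATTTCCCGGG"
edited = replace_segment(genome, "TTT", "GAGA")
print(edited)  # AAAGAGACCCGGG
```

Passing an empty string as `new` corresponds to simply removing a gene, while a guide that matches nothing leaves the “genome” untouched.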

CRISPR-Cas9 has been used to switch off genes that limit the production of lipids in microalgae, leading to increased lipid production and higher yields of biofuel. The technique may in the near future even cure genetic disorders such as sickle-cell anemia and cystic fibrosis, and there is already a wide range of applications of CRISPR-Cas9 in diseases such as cancer.

Even though this system has many positive and revolutionary applications in the field of healthcare, there is still a lot of controversy around its ethics. One concern is that this powerful new technology is very vulnerable to misuse. For example, the Chinese scientist He Jiankui announced that he had genetically modified twins before birth using CRISPR to make them resistant to HIV, which resulted in a three-year prison sentence. The effects of such technology are far too uncertain for it to be used to make heritable changes to human DNA, though non-heritable changes can be argued for. The intervention was also medically unnecessary, as safer and better-established methods of preventing the disease already existed. Given gene editing’s unpredictable and unknown effects, it is logical and ethical to be wary of it.

Why is CRISPR Gene Editing Forbidden + Discussion

So why is this revolutionary technique forbidden throughout the world? The main reason is simply that the technique is still too risky in embryos intended for implantation, and even if it is ever approved, the technology would be permitted only in certain circumstances.

Although CRISPR can precisely edit the genome of an individual, many unwanted changes have been observed in the genes of edited subjects, with unpredictable outcomes among the cells of the embryo. So the question stands: is this method of gene editing necessary? Is it an ethical solution to preventing genetic diseases despite its uncertainty and unpredictability? These questions are the exact roadblocks on the journey to the future of gene editing.


“Full Stack Genome Engineering.” Synthego.

Ng, Daphne. “A Brief History of CRISPR-Cas9 Genome-Editing Tools.” Bitesize Bio, 29 Apr. 2021.

Hunt, Katie. “What Is CRISPR and Why Is It Controversial?” CNN, Cable News Network, 7 Oct. 2020,

Human Germline and Heritable Genome Editing: The Global Policy …

Ledford, Heidi. “’CRISPR Babies’ Are Still Too Risky, Says Influential Panel.” Nature News, Nature Publishing Group, 3 Sept. 2020.

Biomedical Research

Antibiotic Resistance: The Quiet Crisis


Since Alexander Fleming’s discovery of penicillin in 1928, antibiotics have systematically changed and revolutionized the field of medicine. These antibiotic drugs, or antimicrobial substances, are widely used throughout medical treatment to prevent infections by inhibiting the growth and survival of bacteria. However, as the use of antibiotics becomes ever more mainstream, reaching even consumer shelves as over-the-counter medicines, so does the risk of bacteria gaining resistance to them.


Pioneered by Sir Alexander Fleming in 1928, the penicillin “wonder drug” transformed modern medicine and saved millions of lives. Antibiotics were first widely prescribed during the Second World War to control infections in wounded soldiers, yet only years later penicillin resistance became a massive problem in many clinics and health organizations. In response to the penicillin-resistant bacteria epidemic, a new line of beta-lactam antibiotics was created, restoring confidence in antibiotics. Antibiotics have not only played a pivotal role in saving patients’ lives, but have also aided key medical and surgical breakthroughs. They have successfully prevented or treated infections in patients undergoing procedures such as chemotherapy, in those with chronic diseases such as end-stage renal disease or rheumatoid arthritis, and in those who have undergone complex procedures including organ transplants and cardiac surgery. 

The Quiet Crisis

The world was warned of the imminent antibiotic resistance crisis as early as 1945, when Sir Alexander Fleming expressed his concern about an age of antibiotic abuse: “[the] public will demand [the drug and] … then will begin an era … of abuses” (Ventola 2015). Despite the pleas of Fleming, as well as many other scientists, antibiotics continue to be overused worldwide. The CDC has already classified numerous bacteria as posing concerning threats to our healthcare systems and their patients. 

Additionally, resistance genes can easily spread from one bacterial species to another through horizontal gene transfer (HGT). As the primary mechanism for spreading resistance, HGT is defined as the “movement of genetic information between organisms”. Between HGT and the hereditary passing of genetic information to offspring (vertical gene transfer), eliminating bacteria with resistance genes has become a seemingly impossible problem for healthcare professionals. In countries such as India, the antibiotic resistance crisis has become so severe that many simple wounds lead to deadly infections.
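The selection dynamic underlying this spread, where antibiotics kill susceptible cells while resistant ones keep dividing, can be sketched with a deliberately simple model. Every parameter below (population sizes, kill rate, growth factor) is invented for illustration, not taken from empirical data:

```python
# Toy selection model: each antibiotic 'round' kills most susceptible
# bacteria but spares resistant ones; both kinds then regrow.
# All parameters are illustrative, not empirical.
def resistant_fraction(rounds: int,
                       susceptible: float = 1e6,
                       resistant: float = 10.0,
                       kill_rate: float = 0.99,
                       growth: float = 2.0) -> float:
    for _ in range(rounds):
        susceptible *= (1 - kill_rate)   # antibiotic kills 99% of susceptible cells
        susceptible *= growth            # survivors of both kinds regrow
        resistant *= growth
    return resistant / (resistant + susceptible)

for r in (0, 3, 6):
    print(r, round(resistant_fraction(r), 4))
```

Starting from a population that is almost entirely susceptible, the resistant strain dominates within a handful of treatment rounds, which is the essence of why repeated, unnecessary exposure drives resistance so quickly.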

The crisis is further perpetuated by problems such as inappropriate prescribing, extensive agricultural use, and the scarcity of new antibiotics. Incorrectly prescribed antibiotics continue to accelerate the spread of microbial resistance; as Ventola notes, “Studies have shown that treatment indication, choice of agent, or duration of antibiotic therapy is incorrect in 30% to 50% of cases.” Inappropriately administered antibiotics have limited medical benefit while exposing patients to antibiotic-related risks, such as drug-induced liver injury, and they can drive genetic alterations within bacteria, such as changes in gene expression and HGT, that promote increased bacterial virulence and resistance.

Furthermore, antibiotics are heavily used in animals to stimulate growth and prevent infection, accounting for over 80% of antibiotics sold in the United States. Antimicrobial treatment of livestock is supposed to improve the animals’ overall health, resulting in increased yields and higher-quality output, but bacteria inside the livestock gain resistance to the ingested antibiotics, which can then be transferred to the humans who eat the meat of the butchered animals. Antibiotic use in agriculture also affects the environmental microbiome: up to 90% of the drugs administered to livestock are expelled in urine and stool and are afterwards broadly disseminated through fertilisation, freshwater, and run-off. This exposes bacteria in the surrounding area to growth-inhibiting substances, altering the ecology of the environment by raising the ratio of resistant to susceptible bacteria.


Although the antibiotic resistance crisis may seem unsolvable, everyone can play their part by consuming fewer antibiotics and using them only when necessary. Additionally, bacteriophages, viruses that infect bacteria, appear to be a promising alternative that could help relieve the pressure of the antibiotic resistance crisis.

Works Cited

  1. Bohan, J. G. B., Cazabon, P. C., Hand, J. H., Entwisle, J. E., Wilt, J. K. W., & Milani, R. V. M. (2019, February 13). Reducing inappropriate outpatient antibiotic prescribing: normative comparison using unblinded provider reports. PubMed. Retrieved February 25, 2022, from
  2. Romero-Calle, D. R., Benevides, R. G. B., Góes-Neto, A. G., & Billington, C. B. (2019, September 4). Bacteriophages as Alternatives to Antibiotics in Clinical Care. PubMed. Retrieved February 25, 2022, from
  3. Ventola, C. L. V. (2015, April). The Antibiotic Resistance Crisis. PubMed. Retrieved February 25, 2022, from
  4. World Health Organization. (2020, July 31). Antibiotic resistance. World Health Organization. Retrieved February 25, 2022, from
Biomedical Research

What would we do without anaesthetics?

General anaesthesia (also known as narcosis) is the act of putting a patient to sleep, inducing complete unconsciousness. Its function is to prevent the patient’s awareness during surgery by suppressing reflex activity, which makes surgical intervention easier and ultimately keeps the patient comfortable. The development of anaesthetics has a unique and fascinating history.

William Morton, a dentist residing in Boston, US, was behind the turning point in surgery. In 1846, he allowed his patient Edward Abbott, who had a tumour in his neck, to inhale diethyl ether; once Abbott was asleep, the surgeon John Warren removed the tumour.

However, despite such a revolutionary demonstration, a long period passed before general anaesthesia was fully adopted into surgery. At the time, many surgeons were opposed to anaesthetics, believing they were dangerous or possibly a waste of time that ‘quick surgeons’ did not require. Before the discovery of anaesthetics, surgeons had to act with haste and make incisions quickly to reduce the duration of the patient’s pain. This all changed after Queen Victoria received anaesthesia during the birth of her eighth child, Prince Leopold. The anaesthetist responsible was John Snow, who had already written a book about ether and chloroform and had designed a mask for administering chloroform, although the mask was not permitted for the Queen’s delivery. Snow therefore laid a clean handkerchief over her nose and used a pipette to release chloroform onto it drop by drop until she indicated that she felt no pain, giving 15 drops of chloroform with every contraction. Snow recorded that ‘Her Majesty expressed a great relief from the application’ and that ‘the pains [were] trifling during the uterine contractions, and whilst between the periods of contraction there was complete ease’. The Queen described it as ‘soothing and delightful beyond measure’ (Van de Laar, 2019). After this, anaesthesia grew in popularity all over Europe, although despite its continent-wide fame, the Lancet, a prominent medical journal, criticised the use of chloroform in Queen Victoria’s delivery (Anesthesia and Queen Victoria, 2022).

The anaesthetic procedure

It is no surprise that we no longer drip an anaesthetic onto a handkerchief as John Snow did over 150 years ago. Some may assume that a narcotic alone is sufficient for pain to be completely suppressed; however, a narcotic does not prevent an increased heartbeat, so analgesics are also given to the patient (usually opium derivatives, as they tend to be the most powerful kind). A muscle relaxant is also administered to prevent the muscles from tensing during the operation, and a ventilator is used, with a tracheal tube inserted via the nose so that it can pass into the trachea. The anaesthetist also monitors factors such as urine production, the oxygen content of the blood, the carbon dioxide content of the exhaled air, and the blood sugar level (via a blood pressure cuff and electrodes placed on the chest and finger).

Although many years have passed since William Morton’s discovery, the complete mechanism of anaesthesia is still not fully understood. The anaesthetic state consists of components such as unconsciousness, immobility and analgesia.

Two broad classes of molecular targets are responsible for anaesthetic action: neurotransmitter receptors and ion channels. Cells in the brain communicate via neurotransmitters, which are released into the synapse in response to electrical signals. Based on their function, they can be excitatory or inhibitory. Excitatory neurotransmitters such as glutamate cause depolarisation, which occurs when a gated sodium ion channel opens and allows sodium ions from outside the membrane to enter the cell. Inhibitory transmitters such as glycine suppress postsynaptic activity. The receptor with the most significant role as a functional site of anaesthetics is the GABAA receptor: its activation leads to hyperpolarisation, which reduces the excitability of neurons. GABAA is the major inhibitory receptor in the CNS; the receptor has five subunits which assemble to form a chloride channel. Volatile anaesthetic agents have an agonistic effect (a drug that binds to the receptor, producing a response similar to the natural chemical), whereas ketamine has an antagonistic effect (blocking the receptor’s response) on GABA receptors. Glycine receptors are another receptor type located in the CNS, specifically in the spinal cord; when inhalation anaesthetics bind to glycine receptors there, the inflow of chloride ions increases, so the painful stimulus is reduced. A further example is the serotonin receptors, which lead to membrane depolarisation and increase the excitability of neurons; their activation by anaesthetic agents leads to an altered state of consciousness. (Son, 2010)
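Why opening chloride channels hyperpolarises a neuron can be made concrete with the Nernst equation, E = (RT/zF)·ln([ion]out/[ion]in). The concentrations below are textbook-style illustrative values for a mammalian neuron, not figures from the cited review:

```python
import math

# Nernst equilibrium potential for chloride: E = (R*T / (z*F)) * ln([out]/[in])
R = 8.314        # gas constant, J/(mol*K)
T = 310.0        # body temperature, K (37 degrees C)
F = 96485.0      # Faraday constant, C/mol
z = -1           # valence of the chloride ion

cl_out, cl_in = 110.0, 10.0   # illustrative extra-/intracellular concentrations, mM

E_cl_mV = (R * T / (z * F)) * math.log(cl_out / cl_in) * 1000
print(round(E_cl_mV, 1))  # about -64 mV: opening chloride channels pulls the
                          # membrane toward this negative value, opposing the
                          # depolarisation needed for a neuron to fire
```

The strongly negative result illustrates the inhibitory logic described above: chloride influx through GABAA or glycine channels holds the membrane near a negative potential, damping neuronal excitability.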

In conclusion, the discovery of anaesthesia was a significant revolution in surgery; it must have been very difficult for a surgeon to operate on a screaming patient with tensing muscles. Before anaesthesia, surgery was reserved for those living in excruciating pain who were close to death. Surgeons were disturbed by patients in agony, and operations were therefore done very rapidly, the only advantage being that a quicker operation reduced the risk of infection. Thanks to this prominent discovery in 1846, many more operations have since been performed, as a result of the elimination of one of mankind’s greatest fears: pain. 


1. 2022. Anesthesia and Queen Victoria. [online] Available at: [Accessed 24 March 2022].

2. Son, Y., 2010. Molecular mechanisms of general anesthesia. Korean Journal of Anesthesiology, 59(1), p.3.

3. Van de Laar, A., 2019. Under the Knife. pp.133-137.

Biomedical Research Health and Disease

Is stem cell treatment a viable option in restorative dentistry?

By: Arya Bhatt

Stem cells have been at the forefront of scientific research and an invaluable tool in the scientific field due to their remarkable properties. Their ability to divide over and over again to produce many new cells, whilst specialising into the different cell types the body requires, has enabled researchers to explore this phenomenon and apply it to a wide variety of scenarios, most of which deal with the treatment of diseases once thought incurable. With stem cell research expanding and its uses becoming more prevalent, one growing application is within restorative dentistry: the study, diagnosis and integrated management of diseases of the oral cavity, the teeth and supporting structures.1 One of the many treatment options within this field explores the use of stem cells to provide optimal patient care.

Within the stem cell field, there are many different types of stem cells, each with its own sources and applications. Totipotent stem cells can differentiate into any of the 220 cell types of the body, including placental cells; embryonic cells in their early stages are totipotent. Pluripotent stem cells give rise to all cell types except placental cells, and multipotent stem cells develop into a limited number of cell types.2 Stem cells can be further categorised by their source. For example, adult stem cells, also called tissue-specific stem cells, can differentiate into the different cell types of the specific tissue they are extracted from:3 stem cells in the bone marrow generate different blood cells but not other cells of the body. Mesenchymal stem cells (MSCs) are cells isolated from the stroma, the connective tissue surrounding tissues and organs; first discovered in the bone marrow, they are capable of making bone, cartilage and fat cells.3 These stem cells may be a useful tool in restorative dentistry.

Furthermore, induced pluripotent stem cells (iPS cells) are stem cells that have been engineered in the lab to behave like pluripotent stem cells.3 Extensive research is still taking place into what properties these cells have and how they can be applied. Stem cells clearly have fantastic potential to generate new cells, but are they viable in restorative dentistry?

First, a clear idea of what restorative dentistry involves has to be established, so we can see how stem cell treatment might be used for patients seeking restorative work. Restorative dentistry covers a wide variety of treatments, including crowns, bridges, fillings, veneers and more, which provide functional restoration as well as cosmetic satisfaction for the patient. The important factors to consider are whether these treatments can be replaced by stem cell treatment and where the stem cells are obtained from.

For example, using stem cell treatment for veneers would not be theoretically advantageous due to the nature of a veneer: a patient’s veneers have to be strong, thin, durable and aesthetic, and the veneers currently available are already highly successful and suitable for patients, so there is no real need to explore stem cell alternatives. On the other hand, a restorative procedure such as an implant may have better alternatives once stem cell research has developed, if a method to ‘regrow’ cells becomes a viable option with clear benefits.

Another factor to consider is the source of the stem cells. Mesenchymal stem cells, for instance, can be sourced from multiple parts of the body, such as the bone marrow. Access to stem cells is one of the practical limitations and ethical debates surrounding their use, but new sources are being discovered. For example, stem cells, whether or not they are utilised in restorative dentistry, can be extracted from the dental pulp. Stem cells can also be obtained from human exfoliated deciduous teeth (SHED). These come from the same tissue as dental pulp stem cells, but from primary teeth, which fall out by around the age of 12. One of the benefits of these cells is their demonstrated capability to produce dentine and induce bone formation.4 Other stem cells that can be obtained from the oral cavity include periodontal ligament stem cells, root apical papilla stem cells and dental follicle stem cells. Obtaining stem cells from the oral cavity for treatment later in that same cavity is itself an advantage.

Within restorative dentistry, possible benefits of using these stem cells from the oral cavity include the regeneration of periodontal tissue and the regeneration of dental pulp, which would otherwise remain dead after root canal treatment.4 Furthermore, there are prospects of using non-dental stem cells for dental applications, including the use of urine to regrow teeth.5 Researchers in China have harvested pluripotent stem cells derived from human urine and used them to generate tooth-like structures. Although large-scale testing has not yet been completed, the advantages include the low cost, the non-invasive collection, and the fact that these cells are put to use rather than ending up as waste. Urine-derived stem cells also do not form tumours in the body, and rejection is highly unlikely. However, one drawback is that the generated teeth were only one third the hardness of normal human teeth.

Stem cells may also offer an alternative to root canal treatment. When dental fillings are required because caries (a cavity) is present, a dental professional fills the cavity to prevent further tooth degradation and to protect the dental pulp. The dental pulp must be protected to prevent pain and the eventual loss of the tooth. With severe caries, however, the pulp can become infected, and root canal treatment has to take place to save the tooth. The tooth is left devitalised by this invasive treatment and may be lost sooner, an outcome that might be avoided using stem cells. The same procedure would occur, but instead of filling with cement, stem cells could be utilised to stimulate the regrowth of dentine and pulp. In 2016, scientists at the University of Nottingham and Harvard University designed synthetic biomaterials to be used in conjunction with stem cells to encourage new cell growth within the dentine and pulp layers.6 This would allow patients to regrow teeth that have been damaged through dental disease, and the tooth could remain healthy for a much longer period. Stem cells evidently provide an alternative treatment plan, but whether this should be implemented more widely has to be evaluated.

Moreover, stem cell treatment may be an option as a replacement for dental implants. Dental implants act as new teeth and tooth roots when a patient’s own teeth have been extracted due to complications such as disease. These implants keep the jaw bones and tooth structure stable while offering cosmetic advantages; overall, patient satisfaction improves as comfort, speech and appearance are enhanced.7 So why should stem cells be considered when dental implants are already successful? Despite their benefits, the disadvantages include a potentially very long healing process. Implants are only pieces of titanium and cannot adapt as the jaw grows, and careful cleaning and monitoring are required to ensure that infection does not occur.8 As an alternative under research, the utilisation of stem cells may offer a better solution, and whole teeth may eventually be regrown. At King’s College London, researchers combined human gum tissue with stem cells from mouse teeth and achieved tooth formation.9 With whole tooth production possible outside the human body, such teeth might be grown and used as natural implants, giving the patient a natural tooth with its own blood supply. But is this a safe option to pursue?

As this essay has explored, restorative dentistry encompasses a wide variety of treatments, and stem cells provide an alternative plan of action for patients. Even though restorative dentistry is vast, certain aspects stand to benefit from this research. The main benefits are a natural approach to patient care and the ability to preserve a patient’s teeth for longer. At the same time, all stem cell use would have to be researched thoroughly before it becomes widespread, though researchers and scientists are hopeful this can happen soon, as current results point in the right direction. Current research is mainly confined to animal models, and the likelihood of immune rejection within the oral cavity is not fully known.5 Despite these unknown risks, continued thorough research in this area could benefit the whole field of dentistry. While remaining wary of the risks involved, the future of restorative dentistry seems highly promising, and stem cell therapies could be a fantastic tool in dental practices.


  1. n.d. Restorative dentistry. [online] Available at: 
  2. MacDonald, A., 2018. Cell Potency: Totipotent vs Pluripotent vs Multipotent Stem Cells. [online] Cell Science from Technology Networks. Available at: 
  3. A Closer Look at Stem Cells. n.d. Types of Stem Cells. [online] Available at:
  4. Ratan-NM and Pharm, M., 2020. Repairing Teeth using Stem Cells. [online] Available at: 
  5.  Jain, A. and Bansal, R., 2015. Current overview on dental stem cells applications in regenerative dentistry. [online] National Library of Medicine. Available at: 
  6. Cuthberton, A., 2016. Dental fillings heal teeth with stem cells. [online] Newsweek. Available at: 
  7. Frisbee, E., 2021. Dental Implants. [online] WebMD. Available at: 
  8. Shapiro, J., 2022. What are stem cell dental implants? [online] Available at:


This article explores how continued research within the area of stem cell treatment may change the way restorative dental treatments are carried out.

Biomedical Research Commentary

Machine Learning in Medicine – The Next Revolutionary Technology?


Machine learning is increasingly being used in sectors from engineering to psychology, and recent successes suggest that these technologies could be beneficial in medical settings. However, the viability of these technologies is questioned, given the ethical and logistical difficulties in medicine.


On the surface, machine learning (ML) is one of many branches of artificial intelligence (AI), where AI refers broadly to the development of machines capable of intelligent behaviour. Arthur Samuel, an American pioneer in machine intelligence, defined machine learning in 1959 as:

 ‘a field of study that gives computers the ability to learn without being explicitly programmed.’ [1]

Here, the anthropomorphised term ‘learning’ in machine learning refers to the desire to create models that can learn like human beings, through experiences and evaluations, achieving objectives and creating outputs with minimal human assistance.

Structure of Machine Learning

In ML, unlike traditional computer programming, the decision rules are not manually coded: once the framework is built for an ML model, it can learn patterns and rules independently, similarly to a human.

For example, to create an ML model that derives a differential diagnosis for abdominal pain, the series of decisions is not explicitly written into the computer. Instead, input-output data pairs (e.g., right lower quadrant pain is suggestive of appendicitis) are passed into the ML model, which learns the relationship between inputs and outputs. This feedback produces a model that identifies the important features automatically and generates the desired output: the automated diagnosis of abdominal pain. The ability to adjust its own function is the most notable aspect of ML models. The algorithm repeatedly evaluates and adapts its function, updating its rules autonomously until the required accuracy is met; this is how the automated learning occurs.
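The evaluate-and-adapt loop described above can be sketched in a few lines of code. The following toy example is purely illustrative, not a clinical tool: the two binary "symptom" features and the appendicitis label are hypothetical stand-ins, and the simple perceptron used here is just one of many ways a model can update its own rule after each mistake until a target accuracy is reached.

```python
# Toy sketch of automated learning: no diagnostic rule is hand-coded.
# Each input is a pair of hypothetical binary features
# (right-lower-quadrant pain?, fever?); the output is 1 if the case
# was labelled "likely appendicitis", else 0.
training_data = [
    ((1, 1), 1),
    ((1, 0), 1),
    ((0, 1), 0),
    ((0, 0), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    """Apply the model's current rule to one input."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

def accuracy():
    """Evaluate the current rule against all known input-output pairs."""
    return sum(predict(x) == y for x, y in training_data) / len(training_data)

# The learning loop: evaluate, adapt, repeat until the required
# accuracy is met. The rule updates autonomously after every mistake.
while accuracy() < 1.0:
    for features, label in training_data:
        error = label - predict(features)
        if error:
            weights = [w + error * f for w, f in zip(weights, features)]
            bias += error

print(accuracy())  # 1.0 once the loop exits
```

Real clinical ML systems use far richer features and statistical methods, but the overall structure (iterate until an accuracy criterion is satisfied) is the same.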

Previous Usage of Machine Learning

In the early 1970s, an ML system named MYCIN was developed to identify disease-causing bacteria and recommend antibiotics, with dosages dependent on a patient’s body weight. This was a significant breakthrough in medical ML, achieving higher accuracy and performance than expected; however, MYCIN was never used in practice.[2] Despite the development of successful prototype systems, most clinicians were reluctant to use them. Contributing factors included general distrust, concerns around accountability, and the great effort needed to keep the system’s knowledge current with the relevant science and clinical practice. Furthermore, the AI winter of the 1970s–1980s brought reduced funding and interest, and subsequently fewer significant developments.[3]

Even after several innovations in disciplines outside medicine, including the ‘first electronic person’ and the ‘first chatbot’, medicine was very slow to adopt AI. To establish a foundation for ML development, clinical information and medical records first had to be developed and digitised.[4] Other sophisticated medical technologies, such as imaging machinery and continuous patient monitoring, have also increased the quantity of data gathered from each patient. Even with recording systems in place, healthcare systems struggle to integrate and analyse these datasets due to the growing global population, resource shortages, and the sheer size and complexity of healthcare data. This contributes to medical diagnostic errors, which are a significant source of morbidity, mortality and unnecessary cost.[5]

Future Outlook

ML development has been markedly rapid in recent years, aiming to aid medicine by analysing and classifying clinical data while outsourcing everyday medical tasks to technology. ML technologies are driven by the expanding power of computer processing, the availability of large datasets and financial input from private companies and governmental sources.[6] However, although ML is developing rapidly, numerous challenges remain unresolved: some identified in previous ML developments, as well as possible new challenges such as data privacy and public bias.

It is important to identify the realistic potential of ML in medical diagnostics in modern clinical settings, as the field has progressed drastically since its inception in the 1950s and its challenges have not yet been thoroughly characterised. New studies should therefore assess the value of ML in diagnostics and evaluate its true potential. With AI and ML becoming more available and capable, further research is needed that evaluates all aspects of ML in diagnostics from a broader viewpoint, acknowledging the different stakeholders. Human and systemic effects are closely linked, and identifying the contributing factors will ensure that ML truly benefits patients and clinicians while avoiding unnecessary costs and patient harm. The regulatory and ethical frameworks for ML must also be clarified so that ML can reach clinical settings quickly and safely.

Swetha Babu, Youth Medical Journal 2022


[1] Samuel, A.L., (1959).

[2] Trivedi, M.C. (2014). 

[3] Kaul, V., Enslin, S. and Gross, S.A. (2020). 

[4] Kaul, V., Enslin, S. and Gross, S.A. (2020).

[5] Institute of Medicine (US) Committee on Data Standards for Patient Safety, et al. (2004). 

[6] Barber, J. (2012). 

Biomedical Research Health and Disease

Medicinal Cannabis


Cannabis, commonly known as weed or marijuana, refers to a group of plants, Cannabis sativa, Cannabis indica and Cannabis ruderalis, that exhibit mood-altering and hallucinogenic properties (Kayser, 2017). People use cannabis for recreational purposes, yet several countries have enacted legislation making it illegal since it is deemed a narcotic. In parts of the United States, however, marijuana is legalised for medical and economic reasons, such as treating chronic diseases and increasing work prospects. This article will discuss the significance of cannabis to current healthcare, and whether or not cannabis should be legalised globally.


There are many misconceptions about CBD and THC. Both affect mood, but THC’s effects are stronger than CBD’s, since THC produces a high. They share the same chemical formula, as indicated in the image below; they are therefore isomers, but their atomic configurations differ. THC is prohibited in many places throughout the world because it generates a high, whereas CBD is utilised by healthcare professionals to treat anxiety, depression and other conditions.

Tetrahydrocannabinol (THC) and cannabidiol (CBD) are the two main psychotropic substances found in cannabis products, as illustrated below:

(Figure: chemical structures of THC and CBD; Atakan, 2012)

These drugs act on the nervous system. Because their molecules are structurally very similar to the brain’s own chemicals, some receptors in the nervous system can mistake them for regular neurotransmitters. They bind to cannabinoid receptors on neurons, part of the endocannabinoid system (“How does marijuana work,” 2020), which employs cannabinoid neurotransmitters to send and receive messages. Overall, they affect the hippocampus, impairing the person’s ability to form new memories, control their emotions, and learn and accomplish activities. They also affect the cerebellum and basal ganglia, the parts of the brain that control movement, balance and posture (“How does marijuana,” 2020). As a result, cannabis users can appear to have slower reactions.


Shen Nung, regarded as the father of Chinese medicine, first recorded cannabis in his pharmacopoeia in around 500 BC. Cannabis was first cultivated in Central Asia or western China, and has also been documented in Indian, Assyrian, Greek and Roman literature, which described it as able to treat depression, asthma and pain.

CBD was later introduced to the Western world, offering medical benefits such as mood enhancement and the prevention of convulsions in children. A 1936 film, however, portrayed cannabis as a highly addictive substance that causes mental illness and violence. More recently, marijuana has gained widespread acceptance as a treatment for patients suffering from mental illnesses, and CBD has been approved for medical usage in most parts of the world (“History of cannabis”). While working in India in the 1830s, an Irish doctor named Sir William Brooke O’Shaughnessy found that cannabis could help with stomach pain. People then began to notice the effects of THC, the psychoactive ingredient in cannabis, which led to much debate about CBD.

Medicinal Uses

As previously mentioned, CBD has pain-relieving and antipsychotic properties. It may also alleviate cancer symptoms, protect nerve cells and aid the heart.

In terms of pain relief, CBD is thought to act on endocannabinoid receptors, which are involved in regulating sleep, appetite, pain and the immune system. Some of these pain-relieving effects work best when CBD is combined with THC, the psychoactive ingredient in marijuana. CBD has been studied for the symptoms of fibromyalgia, a disorder that causes widespread discomfort: a study of 2,701 people with fibromyalgia found that using CBD improved how they felt (Kubala, 2021).

In terms of mental health, CBD oil can also aid with anxiety, insomnia and PTSD. CBD targets 5-HT1A, a serotonin receptor, which is thought to enhance serotonin signalling so that a person’s mood is lifted (Leasca, 2019). According to a study in which 57 males consumed CBD 90 minutes before a test, CBD can lessen anxiety during the test (Leasca, 2019).

Some studies suggest that CBD oil can shrink cancer tumours and improve heart failure symptoms, but the trials to date are not well designed and the data are insufficient. A woman in the United Kingdom was diagnosed with a 41 mm tumour but refused chemotherapy and radiotherapy; regular CT scans every 3–6 months revealed that the tumour was shrinking, and she reported that she had been taking CBD oil (“Daily usage of Cannabidiol,” 2021). In another study, nine healthy males were given 600 mg of CBD oil before a stress test that raises blood pressure; these males showed a smaller increase in blood pressure, suggesting that CBD can help to lower it (Kubala, 2021).

Criticism Around the World

Medicinal cannabis is generally accepted in many nations throughout the world, including the United Kingdom, New Zealand, Poland, and others. Canada, the United States (certain states), and South Africa are among the countries that have legalised recreational marijuana, while others, such as China, Japan, and Indonesia, still consider it illegal. 

Cannabis is regarded as less hazardous than large amounts of alcohol, and countries that legalise it strengthen control over crime and the cannabis trade while making cannabis more accessible for medicinal uses. In the United States, companies must hold a licence to sell cannabis, and it is also taxed in states that have legalised it, such as Washington, which levies a 37 per cent excise tax on those sales.

Some nations, such as the United Kingdom, have at times downgraded cannabis’s drug classification so that maximum punishments for supply can be reduced and policing can focus on more serious offences. Medicinal cannabis, meanwhile, has been demonstrated to provide medical benefits, such as pain relief, as previously indicated. As a result, private doctors in the United Kingdom who are registered with the General Medical Council are permitted to prescribe medical cannabis when other treatments have failed.

Some countries have legalised cannabis to make it easier for authorities to assess and control the substance; nonetheless, this could lead to issues such as widespread usage of the drug and harmful repercussions such as violence and mental illness. In general, medicinal cannabis is beneficial in the treatment of patients and the alleviation of pain. If it is carefully handled, it will be helpful to society.

Mary Ho Yan Mak, Youth Medical Journal 2022


Healthline. 2020. What Is Cannabis? Facts About Its Components, Effects, and Hazards. [online] Available at: [Accessed 16 April 2022].

2022. Medical cannabis (cannabis oil). [online] Available at: [Accessed 16 April 2022].

National Institute on Drug Abuse. 2020. How does marijuana produce its effects? [online] Available at: [Accessed 16 April 2022].

The University of Sydney. 2022. History of cannabis. [online] Available at: [Accessed 16 April 2022].

Leasca. 2019. Wait, Can CBD Legit Help With Anxiety? [online] Women’s Health. Available at: [Accessed 16 April 2022].

BMJ. 2021. Daily use of cannabidiol (‘CBD’) oil may be linked to lung cancer regression. [online] Available at: [Accessed 16 April 2022].

Kayser. 2017. Cannabaceae. [online] Science Direct. Available at: [Accessed 16 April 2022].