It has been estimated that over 34 million Americans have been diagnosed with diabetes. An ambitious research team at ETH Zurich, Switzerland, noted the rising popularity of fitness trackers and developed a new approach to treating diabetes that uses the green light from an everyday smartwatch.
Fitness trackers like the Apple Watch shown below emit non-invasive green light from light-emitting diodes (LEDs) that passes through the skin to measure our heart rate, sleep cycle, and blood pressure.
Dr. Fussenegger, the leader of the research team, developed a molecular system named “Glow Control” within human cells that can be activated by the green light of a smartwatch. The team experimented with HEK293 (Human Embryonic Kidney) cells, which are commonly used in laboratories around the world. To test their hypothesis, the researchers implanted HEK293 cells into pork rinds and mice, as shown in Figures A and C below. They then used the green LED light from Apple Watches to activate these cells and produce human Glucagon-Like Peptide 1 (hGLP-1), which stimulates insulin production. Many medications today mimic the effects of this peptide to control glucose levels in patients with Type-2 diabetes.
The Type-2 diabetic mice displayed increased levels of hGLP-1 and lower levels of glucose compared to the control-group mice, which were not exposed to green LED light. Over a 12-day treatment period, the mice in the experimental group showed reduced body weight gain and reduced insulin resistance.
The Glow Control system could easily be integrated into everyday life. Its reliance on the green LED lights of commercially available smartwatches such as the Fitbit or Apple Watch makes it accessible and eliminates the need for patients to purchase a special medical device. The artificial pancreas, one existing treatment for diabetes, is invasive and requires constant glucose monitoring. In comparison, the Glow Control system is non-invasive and compatible with various smartwatches that can run medical software for monitoring and treatment. Dr. Fussenegger states, however, that the system is still at least ten years away from clinical practice. Multiple phases of clinical trials will be needed to ensure that it is a safe, effective, and ethical product for patients to use.
ScienceDaily, “Controlling insulin production with a smartwatch”, 7 June 2021
The idea that an entire brain could be transplanted into another person remains a very controversial topic in the medical sciences. Although there has historically been no record of an entire brain transplant (also known as a head transplant) into a living human, unlike what is depicted in some films, an allegedly successful brain transplant between two human corpses was reported in 2017. A team of surgeons has even claimed to have performed an estimated 1,000 head transplants on mice! Human brain transplants, then, may just be possible, although they remain highly unlikely in the near future. In what situations would they be considered, and what is the controversy behind transplanting a human brain? These are some of the questions I will explore through three considerations: the difficulties of the procedure, the possible benefits, and, of course, the ethical implications.
The Difficulties and Risks of a Brain Transplant
First of all, it should be established that a brain transplant would be extremely difficult to perform. Transplanting an organ from a donor to a patient should be done only when the two parties are immunologically compatible. This concept of “immune compatibility” is an essential consideration when transplanting a heart, liver, or kidney: as long as the patient's body does not mount an immune response against the foreign organ, the patient should be fine. Doctors assess immune compatibility based on the similarity of the genes encoding the human leukocyte antigens and the major histocompatibility complex, both of which are involved in triggering an immune response. Afterward, immunosuppressants are administered to ensure that an immune response does not occur during or after the transplant. However, scientists are not very familiar with the antigens located on neurons and glial cells, which poses a huge risk to patients who want a brain transplant.
Another factor to consider is that the brain is highly sensitive to changes in its environment, and neurons can readily die from lack of oxygen during the transplant. For a brain transplant to succeed, the surgical room environment must be highly controlled, specifically accommodating the needs of a living brain exposed to the open air. Furthermore, the surgeons performing the transplant would need to be very careful when reconnecting the blood vessels that supply the implanted brain, and equally wary when reattaching the brain to the patient's spinal cord and peripheral nervous system.
Figure 1: Surgical Operation, sourced from Verywell Mind
Why Carry Out a Brain Transplant?
According to Dr. Sergio Canavero, an Italian neuroscientist, head transplants would be used for untreatable neurological disorders that cause significant harm to the patient (e.g. muscle-wasting). Alternatively, head transplants might be used when a patient has an unhealthy body that is beyond repair or management (essentially uninhabitable) but an otherwise perfectly functional brain, in order to give them another chance at life. There are many problems with this, though, and one is the fact that a healthy, unoccupied body is needed in preparation for the operation (which, as one could imagine, entails several ethical considerations in itself). Furthermore, there is the question of whether a person's soul, that is, their personality and memories, can be preserved once a transplant has been done.
Therefore, given the large uncertainty surrounding the effectiveness of a brain transplant, it seems an option that should not be pursued just yet.
Ethical Implications of Brain Transplants
There are many ethical considerations surrounding brain transplants. Firstly, there is the question of consent, which entails asking such things as “who is going to donate a healthy body for a terminally ill patient with an otherwise healthy brain?” and “how does the patient feel about receiving a brain transplant?”. These questions must be answered before a brain transplant can even be performed, yet they are difficult for both doctors and patients to answer. The patient would have to be aware of all the possible risks of going through with the operation (which could be a myriad of things) and would have to seriously consider what life would be like after a successful brain transplant.
The possibility of life after a brain transplant may be a scary thought, especially given that there is no telling whether the patient's “soul” would still exist in a new body. After all, are an individual's personality, opinions, memories, and essentially every characteristic about themselves localized in the mind, or does one's body also exert a significant influence on one's “soul”? The interconnectivity of human bodily systems, and the fact that scientists may not completely understand the complexity of interactions underlying behaviour (not just in the brain but in the entire body), make this a very mind-bending, even philosophical, question to consider. Given that research has found links between one's mood and emotional behaviour and other bodily systems (take the gut-brain axis as an example), how much of that behaviour would be altered by such a risky operation? If other bodily systems do influence behaviour, then the patient might exhibit different behaviours than before surgery as a result of being in a foreign body of differing chemical composition and stature. Hence, doctors may have to consider more than just immune compatibility when planning a brain transplant. The amount of research needed to clarify these uncertainties is therefore profound, leaving very little chance of a brain transplant actually happening in the near future.
The brain, widely regarded as the most complex object in the observable universe, is also a very fragile organ that must be handled with the utmost care. The prospect of brain transplants opens up very interesting philosophical discussions about soul preservation and human behaviour, while also revealing how much humans do not yet know about the human body. However, it is precisely because society still knows so little about the human body, and about what could happen after such an operation, that a brain transplant seems highly unlikely to be carried out soon.
Melle Hsing, Youth Medical Journal 2021
Appleton J. (2018). The gut-brain axis: Influence of microbiota on mood and mental health. Integrative medicine (Encinitas, Calif.), 17(4), 28–32.
Bioelectronic medicine consists of implantable devices that use electricity to regulate biological processes, treat diseases, and restore lost function. These devices would induce, block, and sense electrical activity, with the peripheral nervous system at the centre of advances mainly concerned with chronic diseases and their control. A device would be attached to individual peripheral nerves, deciphering and modulating neural signaling patterns to achieve a therapeutic effect targeting the function of a specific organ. These miniaturized devices would be built from flexible, biocompatible materials. The biggest problem with drugs is that, although they act on diseased cells, they can also cause adverse reactions elsewhere in the body. Bioelectronic medicine, by comparison, would be designed for greater efficiency and would comprise expandable components for computation and power, thus reducing side effects and costs. It would focus on precision, reaching specific targeted locations.
Research is ongoing to discover a methodology for harnessing the body's peripheral wiring, which might help in the treatment of acute and chronic diseases. Dysfunctional neural circuits give rise to dysfunctional organs. The aim of bioelectronic medicine is to counteract this condition by restoring electrical impulses, adjusting neuron firing and thereby changing the neurotransmitter concentrations traveling through those circuits. An alternative approach is emerging, variously called neuromodulation, biostimulation, or electroceuticals, intended to reduce reliance on costly chemical and biological drugs.
The human body is electric. Peripheral nerves connect all organs to the central nervous system. These nerves are packaged in bundles and carry about 100,000 nerve fibers. The vagus nerve, the longest of these, links the brain to the organs and helps control breathing and heart rate; it goes nearly everywhere, giving access to many different targets. Some researchers believe this is the greatest promise of bioelectronic medicine: by manipulating the vagus nerve, it may be possible to control the inflammation and immune responses that drive many chronic diseases. Moreover, acetylcholine, the principal neurotransmitter released via the vagus nerve, inhibits the production of cytokines such as tumor necrosis factor, an inflammatory molecule involved in rheumatoid arthritis.
These implants are very small. Rather than blindly zapping a large bundle of nerves, researchers and engineers are collaborating on the basic physiology and high-tech tools needed to zero in on the specific subset of fibers known to innervate the organ of interest. The devices could be further personalized by changing the pulse width, amplitude, or frequency of stimulation. Unlike many pharmacologic agents, the therapy is planned to be selective, delivering its therapeutic effects with reduced side effects.
Such a device would develop connections not only with the nerves but also with the rest of the body, deciphering signals and delivering the best response in real time by stimulating or blocking nerve impulses. In diabetes, for example, a device might prompt the pancreas to make more insulin when it is needed. In the gastrointestinal system, an electrode might sense the motility rate of the gut and then determine the optimal frequency of pulses to speed it up or slow it down. The ultimate goal is to restore a healthy pattern of electrical pulses.
In the future, this vision of revolutionizing medicine may come to fruition. Bioelectronic medicines hold promise for achieving therapeutic intervention by modulating the signalling patterns of nerve impulses; they include devices implanted in the viscera that record neural activity. Upcoming research is expected to focus on three principal areas. First, making a visceral nerve atlas is pivotal: this means mapping the innervation of visceral organs such as the lungs, heart, liver, pancreas, kidneys, bladder, gastrointestinal tract, lymphoid organs, and reproductive organs, with the objective of achieving resolution at the level of individual nerve fibers and action potentials. Second, neural interfacing technology helps in mapping neural signals, including ultrasonic and tomographic techniques for recording and modulation. Third, once a particular signaling pattern is characterized, the focus shifts to confirming causation: establishing which neural circuit influences which disease in a representative animal model. An experimental phase then follows, correlating neural signals with biomarker patterns and investigating the effects of blocking and stimulating neural activity during established disease. Altogether, this revolution in the medical fraternity might change society by introducing a new class of precision medicine to patients.
Ramkissoon, C. M., Datta-Chaudhuri, T., Redolfi Riva, E., Micera, S., Mehta, N., Peng, S., Güemes, A., Georgiou, P., Addorisio, M. E., Bonaz, B., Leitzke, M., & Bettinger, C. J. (2021, May 25). Bioelectronic Medicine. https://bioelecmed.biomedcentral.com/.
Homeopathy is a practice developed in the 1700s that has become highly popularized over the years. Medicine has changed greatly over time: when homeopathy was developed, bloodletting, the deliberate withdrawal of blood, was a common treatment, and people began to turn to other, alternative medical practices such as homeopathy. Since then, homeopathy has grown exponentially. Believers claim that homeopathy can address numerous health issues, such as allergies, arthritis, migraines, and other common complaints. Others go as far as to claim that homeopathy can treat serious illnesses like cancer and heart disease. This article will investigate whether these claims have any merit.
What is Homeopathy?
Homeopathy is a form of alternative medicine, which includes medical treatments that are not traditionally used; other examples are acupuncture, herbal medications, and energy therapy. Usually, alternative medicine is used in addition to conventional medications, as it is rarely a worthy replacement for regular medication prescribed by a doctor. Alternative practices are very different from standard ones, which leads much of the medical community to shun them. Supporters of homeopathy are among those who argue for more acceptance of alternative medicine as a viable treatment option, and some people have had success with alternative treatments such as homeopathy. When considering alternative medicine, you should weigh the benefits against the risks of the treatment, and keep in mind any possible side effects.
Homeopathy operates on the principle that “like cures like” and that the body will eventually heal itself. In other words, anything that causes symptoms in a healthy person may, in very small quantities, be used to treat sickness with identical signs. The goal is to activate the body's immune system. Homeopathic practitioners are known as homeopaths, and they create numerous medications to try to treat their patients. Homeopaths aim to create a personalized treatment plan for each patient, something not often found in traditional medical practice. A homeopath will ask you a series of questions about your cognitive, emotional, and physical health during your consultation; afterward, they will give you the medication that best fits all of your symptoms, so the therapy is customized for you. To create medication, homeopaths use a process called potentization, in which ingredients are weakened with water or alcohol. The notion is that diluting and agitating the chemicals activates and amplifies their curative properties. One part of the solution is combined with nine parts of water, diluting the solution tenfold. This same process of nine parts water to one part solution is repeated until the desired potency level is reached: doing it twice results in a 2X potency, three times in 3X, and so forth. Many homeopathic medicines are created at very high potencies, and most homeopathic treatments are so diluted that not a single atom of the active component remains. Over-the-counter homeopathic treatments are also available at certain drugstores and pharmacies; the manufacturer determines the dose and composition of these items.
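The claim that not a single atom remains follows directly from the arithmetic of serial 1:10 dilutions. The short Python sketch below is a rough illustration, assuming (hypothetically) that the starting tincture contains one full mole of active ingredient; it estimates how many molecules are expected to survive at various X potencies:

```python
# Serial 1:10 ("X") homeopathic dilutions: expected number of molecules of
# active ingredient remaining, assuming one mole in the starting tincture.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(moles_start: float, x_potency: int) -> float:
    """Each X step mixes 1 part solution with 9 parts water (a tenfold dilution)."""
    return moles_start * AVOGADRO * 10.0 ** (-x_potency)

for potency in (6, 12, 24, 30):
    print(f"{potency}X: about {molecules_remaining(1.0, potency):.3g} molecules expected")
# By 24X, fewer than one molecule is expected to remain on average.
```

Even starting from a generous mole of substance, the expected count drops below one molecule around the 24X mark, so higher potencies are, statistically, pure water.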
Homeopathy and The Placebo Effect
Some people believe that homeopathy does help them, and there have also been reports of homeopathy working in younger children. However, much of the scientific community believes that when homeopathy appears to work, it is simply the placebo effect. The placebo effect is when an improvement is seen but is not due to the treatment the person believes is responsible; in other words, someone may believe something is healing them when their body is healing on its own. Medical placebos are usually identified by their lack of an active ingredient, and scientists argue that a treatment containing no active component should have no direct impact on the body. Symptoms improve because you believe the medication is effective rather than because it is; this belief can cause the brain to produce chemicals that temporarily alleviate pain or other symptoms. Because many homeopathic medications are diluted so heavily that barely any trace of an active ingredient is left, homeopathy is connected to the placebo effect and is commonly believed by medical professionals not to work. Homeopathy also depends heavily on time: given enough of it, your body will naturally cure itself of most illnesses. The patient may believe their health improved because of homeopathy when, in actuality, the body healed itself.
There are certain aspects of homeopathy that are distinct and valuable. One is the level of personalization in the treatment: a session with a homeopath might last several hours and is usually one on one. This creates a level of empathy not present in other forms of treatment, and the sense that someone cares for your wellbeing can affect a patient greatly. In conclusion, homeopathy may not work from a medical standpoint, but the level of personalization in homeopathic treatment is admirable.
Dietary supplements are advertised as nutritional additions to your diet. They are products containing dietary ingredients such as minerals, vitamins, amino acids, enzymes, botanicals, and others, and they are available in pill, gummy, or liquid form. Their containers are usually labeled ‘dietary supplement’ on the front panel; labels also display the active ingredients, instructions for use, and the serving size. Generally, dietary supplements allow an individual to obtain essential nutrients, especially if their usual diet lacks variety, and they can help lower the risk of certain health problems. Supplements should not, however, be used to replace meals. This article explores dietary supplements by examining their various uses and users, and by evaluating their pros and cons.
Who Uses Dietary Supplements?
Half of the United States population takes at least one supplement on a daily basis. According to CDC data from 2017–2018, around 57.6% of adults aged 20 years and over had used a dietary supplement in the past 30 days: 50.8% of men and 63.8% of women. For both sexes, the use of dietary supplements increased with age. Among men, use rose from 35.9% at ages 20–39 to 67.3% at ages 60 and older; among women, it rose from 49.0% at ages 20–39 to 80.2% at ages 60 and older. These data show that the use of dietary supplements is generally higher among women.
Many people take dietary supplements for different reasons. Aside from maintaining overall health and wellness, some take them to get needed nutrients; others take supplements for energy, bone health, or heart health. Pregnant women, or those trying to become pregnant, may take prenatal vitamins such as folate, better known as folic acid or vitamin B9. Taking 400 micrograms of folate daily helps support the formation of genetic material and protects against birth defects.
People on restricted diets, such as vegans, or people with food allergies may also need to take dietary supplements, which provide nutrients their bodies find hard to digest because of allergies or hard to obtain because of their diet. Nutrients such as calcium and vitamin D are great supplements for older adults who may need them for bone strength. Other supplements they may need include vitamin B-12, which helps maintain red blood cells and nerves, and vitamin B-6, which helps form red blood cells.
What Are The Benefits And Side Effects Of Their Usage?
Once again, dietary supplements are useful for obtaining adequate amounts of essential nutrients. Their role can be vital to a healthy lifestyle if the consumer is well informed. They can also be used to maintain one's general health, support the immune system, and support athletic and mental performance.
Not following the instructions printed on the dietary supplement's container, or your doctor's advice, can lead to negative side effects. These include an upset stomach, heartburn, gas, and bloating. More serious consequences include headaches, nausea, internal bleeding, liver damage, and more. A study published in The New England Journal of Medicine found that adverse effects of dietary supplements account for about 23,000 emergency room visits per year, establishing that although supplements are meant to be beneficial, they can still be extremely harmful if not used correctly.
More Information Concerning Supplements
The United States Food and Drug Administration (FDA) regulates food, vaccines, cosmetics, drugs, medical devices meant for human use, and tobacco products. Dietary supplements are also regulated by this federal agency, although under FDA guidelines they are treated more like food than medication. Dietary supplement makers do not have to prove their products' effectiveness or safety before selling them on the market. Manufacturers are, however, supposed to follow good manufacturing practices (GMPs) to confirm their supplements meet specific quality standards. A seal of approval from an organization that tests supplements, such as US Pharmacopeia, ConsumerLab, or NSF International, lets you as a consumer know you are getting a quality product. Look for this seal on the supplement's container.
Dietary supplement makers are not allowed to claim that their product treats, cures, or prevents diseases or reduces symptoms; if such claims are made, a disclaimer must be added to the label. Over-the-top claims, such as a product being ‘Completely Safe!’, ‘Totally Natural!’, or a ‘Miracle Cure’, are warning signs that one should investigate further. Contact your doctor, pharmacist, or the manufacturer to ask which studies have been done to support the extravagant claims made about the supplement.
In conclusion, dietary supplements are nutritional additions to a person's diet. About half of the United States population takes one daily, with a higher percentage of this group being female. Supplements can be great instruments for a healthy lifestyle but can also be dangerous if used incorrectly. Despite their benefits, it is still recommended to aim for a varied diet rather than rely on supplements.
At the end of the day, though, how one chooses to take control of their health is entirely up to them. However, if you are contemplating taking supplements, you should consider the dosage, frequency, and potential health risks. Clearly following the instructions on the container label is paramount, and you should always consult your doctor with any queries or concerns to ensure you receive the best health care possible.
Neonates are newborn infants that are four weeks old or younger. These first four weeks of an infant’s life are when the infant is at highest risk of dying. At this stage in life, neonates do not have fully developed immune systems and are more susceptible to different infections. Of the 5 million infant deaths that occur each year, 1.5 million are due to infections, making it important to understand the developing immune system of neonates (Tregoning).
Part of understanding the immune systems of neonates is first understanding the transition from the sterile womb to an unsterile environment during birth. The fetal immune system is suppressed in the womb to limit interference with the mother's immune system. While this provides stability before birth, the arrangement changes the moment the newborn enters the unsterile environment of the world. In addition to the risks of exposure to bacteria, the previously suppressed fetal immune system is antigenically inexperienced after birth; it does not yet have experience responding to different pathogens, which increases the infant's susceptibility to infections (“Development of the Immune System”). Therefore, after birth, neonates depend on “passive immunity” for protection as their own immune systems develop.
Neonates depend on antibodies from the mother for protection from different antigens. This is called “passive immunity,” as antibodies from the mother are passed down to the baby passively through the placenta, rather than the antibodies being created by the infant themselves. Most of the antibodies produced by the mother’s immune system cross the placenta during the third trimester, which ensures that there are high levels of antibodies after birth. This also explains the low levels of antibodies in premature babies; the timing of the birth does not allow for the same amount of antibodies to be transferred, making premature newborns more vulnerable to infections compared to full-term newborns. Additionally, breastfeeding is another form of passive immunity that allows for the passing of antibodies to infants (“Development of the Immune System”).
Passive immunity provides only short-term protection for neonates. The antibodies transferred through the placenta or breast milk are generally immunoglobulin A or G (IgA or IgG), and some of these maternal antibodies protect against measles, mumps, rubella, and other diseases (“Immunity: Active, Passive, and Delayed”). The antibodies transferred passively from mother to child, whether through the placenta or breast milk, provide protection only for the first few months of the infant's life. This allows the infant's immune system to develop and start working while keeping the infant protected (“Development of the Immune System”).
The Immune System At This Time
Newborns have a limited quantity of phagocytic cells (types of white blood cells such as neutrophils and macrophages), which are important for innate immunity (the nonspecific immune response immediately after the appearance of an antigen). During an infection, the immune system’s response will be limited by the quantity of neutrophils and macrophages. As a result, the pathogen will commonly overtake the immune system, and the infant will require medical care (“Development of the Immune System”).
In addition, there is adaptive immunity: the specific immune response that occurs after the innate immune system fails, protecting the body by remembering and destroying pathogens. Because the newborn's immune system is inexperienced, every pathogen is new, so the immune response takes longer to develop. That every pathogen is new also means there are no memory immune responses, which affects antibody production (“Development of the Immune System”). The process of producing antibodies is less efficient in newborns than in adults. Some B cell (a type of white blood cell) responses require T cells to produce antibodies, and the interactions between T cells, which attack specific antigens, and antigen-presenting cells, which present antigens for recognition, are less effective and less stimulating in newborns. T cells also produce lower levels of cytokines, which regulate the immune response. Furthermore, the proportions of different T cell types differ between newborns and adults; for instance, newborns have lower levels of cytotoxic T cells, which are responsible for killing virus-infected cells. These factors all influence antibody production. For B cell responses that do not involve T cells, B cells recognize repeating proteins on the surface of a pathogen; this response is also reduced in newborns, resulting in increased susceptibility to bacteria (“Development of the Immune System”).
The reduced immune response of newborns affects the efficacy of vaccines, as there is reduced recognition of vaccine antigens as foreign. Therefore, there are also fewer protective memory responses induced by vaccines, making vaccines themselves less effective in newborns compared to adults with developed immune systems (Tregoning). However, this does not mean that early vaccinations are ineffective. They still aid in protecting against diseases, and they become more effective over time as the newborn’s immune system develops (“Development of the Immune System”).
In fact, as the protection from passive immunity fades over a number of months, vaccinations are required to maintain protection against different antigens. The fading of maternal antibodies is also why there are certain required vaccinations after set periods of times; for instance, the MMR vaccine is required after 1 year of life (“Immunity: Active, Passive, and Delayed”).
The immune systems of neonates are, unsurprisingly, different and less developed than those of adults. As a result, newborns depend on passive immunity (antibodies passed down through the placenta or breast feeding) for protection against infections. The processes in the immune system itself are also different in newborns, which affects the immune system’s capabilities. The increased susceptibility to infections in newborns makes it all the more important to understand the neonatal immune system.
“Immunity: Active, Passive, and Delayed.” World of Microbiology and Immunology, edited by Brenda Wilmoth Lerner and K. Lee Lerner, Gale, 2007. Gale in Context: Science, link.gale.com/apps/doc/CV2644650228/SCIC?u=mlin_m_newtnsh&sid=bookmark-SCIC&xid=bd032b6a. Accessed 31 May 2021.
Factors such as peer pressure, depression, and exposure to abuse and trauma can lead to drug use. Using drugs even once, telling yourself “just this one time,” can lead to a brain disorder called addiction: continuing to use substances such as alcohol and drugs despite knowing the life-threatening consequences. This review focuses on the main sex differences in the behavioral effects and neural actions of psychostimulants such as cocaine and opioids such as morphine and heroin. Data given in the review indicate that males (ages 12-25) are more likely to abuse or be dependent upon marijuana or alcohol, while females (ages 12-25) are more likely to abuse or be dependent upon cocaine and psychotherapeutic drugs. As stated in the review, a recent cross-cultural analysis of sex/gender differences in substance use disorders shows major diversity across cultures: men are more likely than women to have access to substances, and this difference in accessibility appears to account for much of the gender difference in the prevalence of substance use. The goal of the review is for the audience to understand the who, what, when, where, and why of the differences and similarities between males and females; after all, we cannot assume that females are simply males with the letters “f” and “e” attached to the front. Only by advancing this research further can addictive disorders be successfully prevented and treated across the entire population.
This paper displays a detailed, informative, and deep understanding of the variation in drug use between males and females. Not only does it allow readers to see these differences between the sexes, it also shows the effects of addiction on chromosomes, autosomes, and hormones in females and males. The paper’s sections focus on sex differences in the four neural systems that are the central players in the addictive process: dopamine, mu-opioid receptors (MORs), dynorphin, and BDNF. Neural systems are structures of cells, tissues, and organs that regulate the body’s responses to internal and external stimuli; they include the brain, spinal cord, nerves, ganglia, and parts of the receptor and effector organs. It was an interesting read, and the details included in the paper enhanced my knowledge of the contrasts in brain activity between the sexes. This research has contributed to medical knowledge by establishing that the process of addiction differs between males and females. The paper identifies gaps in knowledge about how these neural systems influence addictive behavior in each sex, emphasizing throughout that the effect of sex can be subtle, and that male and female drug use data should therefore be recorded regardless of the outcome.
Evaluation of Methods Used
The techniques used in this paper are effective and can be applied to other problems in modern society related to neural systems and addiction. Addiction is very common today, and these brain disorders are a real problem for the entire population, which at the last U.S. census count (2010) was 50.8% female. The information and detail given in this paper, combined with future research, can bring us closer to preventing and treating addiction. The researchers highlighted the gaps in knowledge and pointed toward new research on the mechanisms specifically mediating addiction in each sex. The methods used in this paper led to conclusions about the differences and similarities between the sexes in addiction, the players in the addictive process, and the development of the three-stage addiction cycle: (1) initiation/escalation, (2) withdrawal/negative affect, and (3) preoccupation/craving. The researchers acknowledge that a long journey lies ahead before an effective treatment for each sex is found and that more research needs to be done. Overall, the paper does draw conclusions about the connection between neural systems and addiction, but many questions remain unanswered and will require further research on this brain disorder.
One concern that arises from this study is the relationship between animals and humans: a treatment found through animal testing may work, but translating it to humans will raise more obstacles. The number of people of each sex who are addicted to drugs varies; comparing data from past years to the present shows that the numbers have declined in some situations and increased in others. This raises the challenge of ensuring that treatments work for both sexes; in some cases more females are addicted, and in others more males are. It is important to find a treatment for both sexes, not just one, and it is also important that all addiction cases are recorded: every sex, every person counts toward the journey of preventing and treating this disorder. The paper was published in the right journal: neuropsychopharmacology is the study of the effects of drugs on the nervous system and their consequences for the mind and behavior. The journal focuses on drugs and their connections to the nervous system, which is exactly what this research paper displays by connecting addiction in each sex with neural systems. The journal’s readers will care about this research since it correlates with the journal’s main subject, drugs and the nervous system.
Problems and Admirations
The methods and evaluation techniques in this paper are very in-depth; there is extensive discussion of every aspect of the influence of drugs on each sex’s hormones, chromosomes, and more. This depth grows our knowledge, but it made it hard for me personally to comprehend all of the information at once. The central idea is stated clearly (there are, and always will be, differences and similarities in addiction between the sexes), but the evidence given to back it up is almost too detailed. All the answers are laid out within the paper for readers to find, which leaves little room for curiosity or independent research. The paper is almost too long; I think the authors could have been broader, simpler, and shorter, which would allow the audience to interpret more and ask more questions. I appreciated the connections made between organizational and activational influences on each sex; they allowed readers to dig deep into the writers’ thought process. The future of addiction research after this paper is long and wide; the authors themselves raise biological questions arising from their research, such as, “Are there sex differences in the efficacy of pharmacotherapy treatments (i.e., methadone, buprenorphine, naloxone) for opioid addiction?” I think the publication of this paper will inspire more researchers to join the journey of finding addiction treatments and to answer the biological questions that arose from its framework, using the knowledge we have today to work toward the decline of addiction.
Living in today’s advanced world gives us access to new ideas, such as the use of various drugs with great potential to benefit different diseases such as cancer. This research journal focuses on the great potential of cancer immunotherapy using various immuno-oncology drugs. The use of nucleic acid therapeutics for different diseases, specifically cancer, has advanced day by day, showing how different drugs have unique abilities and how these abilities have developed. Through trial and error, researchers have identified the problems each therapeutic faces, for example, negative charge and hydrophilicity (attraction to water). In this research we learn about the different drug delivery systems that can safely release drugs and target specific cells. Some of the drugs that can be used to target specific tissues and cells include small interfering RNA (siRNA) and messenger RNA (mRNA). A few of the discussed delivery systems that can safely transport these drugs to the targeted area include the nanoparticulate delivery of nucleic acid therapeutics using micelles and the delivery of cGAMP using liposomes (structured cGAMP vs. free cGAMP). The results of the experiments show that these nucleic acid therapeutics hold a remarkable amount of potential for a vast assortment of diseases. Each drug has unique characteristics, such as changing gene expression and regulating protein function for immune responses, but each also faces challenges blocking effective delivery to the targeted cells. Each day, new delivery systems are created and modifications are made to different therapeutics, which will allow us to address problems connected to treatment in the immunotherapy of cancer.
Through the process of reading, analyzing, and critiquing this paper, many reasons emerge as to why it makes a good contribution to society. The specifics in this paper allow readers to dig deep and think. The use of unfamiliar terms and deep topics that take time to understand makes readers ask themselves questions and leaves them wanting more, prompting them to research the topic further. The paper discusses the different nucleic acid therapeutics, their advanced delivery systems, and their positive functionalities as well as their challenges, allowing the audience to see these therapeutics from all angles so they can add more to the research. Another reason this paper is a good read is that it provides sufficient proof of how science and medicine have advanced together: it includes a visual timeline dating back to 1995, when CpG-dependent stimulation was first described. The paper contributes to the medical community by bringing in new knowledge about different nucleic acid therapeutics and their delivery systems, displaying the pros and cons of each drug and the delivery systems that can be used to target specific diseases. It can allow other researchers to conduct experiments on these therapeutics and use this information to see which delivery system will work best for the disease they are targeting. The paper presents factual evidence about which therapeutics have potential for cancer immunotherapy, leading other researchers to build on this work in their own experiments.
Evaluation of Methods Used
The methods utilized in this paper allow a variety of diseases to be treated; the research was open-ended and shows the potential of each therapeutic and how it can be delivered to target specific tissues and cells. Listing many delivery system options and the characteristics of each drug allows other researchers to solve other problems using this work. Because the research addresses the immunotherapy of cancer using nucleic acid therapeutics, it could apply to a wide range of cancer types. The researchers’ methods provide evidence that these therapeutics could work in cancer immunotherapy; they display how these drugs can be used and which drugs should be used for treatment. The treatment of cancer is evolving every day, with medical researchers everywhere looking for a cure, and this research could play a big role in finding an effective treatment. As stated in the paper, the FDA is starting to approve more studies involving nucleic acids and immunotherapy, indicating that they will have a significant impact in treating different types of cancers and diseases.
There are negatives and positives to everything; there are side effects, but the effects can also be life-changing. This paper provides a variety of information indicating the success of using these therapeutics in cancer immunotherapy, but concerns remain: the side effects on the human body after immunotherapy with these drugs, the survival rate, and whether all or most of the cancer can truly be removed. Could these therapeutics lead to a new disease, with the removal of one cancer resulting in another developing? There may be great proof of progress in using these therapeutics for different diseases, but challenges still exist, such as whether the drug could be toxic to the human body, the quantity of drug required versus its cost, and what happens to the therapeutics inside the body after treatment. The paper was published in the right journal for an audience that would take interest in this topic. It shows how the discovery of nucleic acid therapeutics can advance medicine and work toward curing a life-threatening disease.
Problems and Admirations
In this paper, I enjoyed the different points of view on each drug and delivery system. The methods used allow readers to think deeply about each topic and each step on the journey to treating cancer. I admire the researchers’ evaluation techniques: they went in-depth and provided an outstanding amount of information, hooking readers and causing them to keep reading and researching. The researchers brought hope and showed how advances in medicine today can change anyone’s life. After reading this paper I understood the writers’ thought process and the different ideas they were considering, and it made me grateful for the new, innovative ideas we have that can save someone’s life. Once published, this paper will show people a future full of ideas worth pursuing. It will encourage others to take the small ideas in their heads and turn them into something beautiful. This research will contribute to other studies and bring us one step closer on the journey to finding a cure for cancer.
Machine learning in modern cancer treatment is a fast-growing field that promises to produce many scientific breakthroughs in the future. This article discusses both the promises and the perils of applying artificial intelligence to cancer treatment. In cervical cancer treatment, this growing technology can assist doctors in cancer detection as well as predict patient survival rates. In lung cancer treatment, artificial intelligence platforms are again used to make predictions about patient health in addition to analyzing images for a more accurate prognosis. Finally, machine learning is also able to predict the survival rate and metastasis for different forms of brain cancer and provide medical students with realistic surgical simulations of how to operate. However, while there are a multitude of promises for the future of AI in medicine, integrating new technologies into a previously established field does have disadvantages. The constant evolution of software and technology means that operators require constant training to handle these tasks. Furthermore, the lack of doctor-patient feedback can take a negative toll on patients’ mental health and privacy. The automation of various processes comes at the cost of the jobs of people who originally performed these tasks manually. Therefore, when implementing AI in the medical field, it is important to acknowledge the technology’s great promise but also to weigh the negative effects that may result from its application.
One of the most important parts of practicing medicine is decision-making, a skill that relies heavily on judgment. Cancer treatment, or oncology, is a medical specialty where decision-making is incredibly important because of unpredictable responses to treatment and changes in a patient’s condition. This is where artificial intelligence (AI) comes into play. It is a promising tool that can objectively interpret cancer images and predict a cancer patient’s outcome, essentially mimicking the cognitive functions of humans. Research has shown that AI has the potential to exceed human performance in certain areas of medicine. Multiple useful areas within AI will be discussed in this paper, including two main tasks. The first is detection: determining which objects are located within the body by analyzing images. The second is characterization: separating tumours into groups based on physical appearance (Bi et al., 2019). Both of these tasks are a crucial part of making clinical decisions.
Machine learning (ML) is a subset of AI (Fig. 1) that has been widely used in current healthcare applications because it uses data to train computational systems without the need for explicit programming. These programs can learn and improve from experience, unlike traditional programs that require specific instructions at each step, which makes them incredibly useful in science (Ahuja, 2019), allowing machines to make predictions based on patterns they have recognized. With ML, a computer can use previously labeled data, or even patterns found in the data itself, to make predictions. In particular, ML excels at finding indistinct patterns, undetectable to humans, in large sets of data. ML also enables an algorithm to perform a task such as making medical decisions or driving a car while correcting its own mistakes. Deep learning is a subset of ML that uses structures similar to the brain’s neural networks to identify patterns within large datasets. Convolutional neural networks (CNNs) are a further subset of ML that will be discussed here and are generally applied to classification and analysis of patient scans (Hashimoto, Rosman, Rus, & Meireles, 2018).
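The core idea of learning a classification rule from labelled examples rather than explicit instructions can be illustrated with a minimal sketch. The data, labels, and nearest-neighbour rule below are hypothetical illustrations, not taken from any of the cited studies; they only show how a program can generalise from previously labelled data:

```python
import math

def nearest_neighbour(train, query):
    """Classify `query` by copying the label of the closest training point.

    `train` is a list of (feature_vector, label) pairs. The program is never
    told the decision rule explicitly; it generalises from labelled data.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy data: (tumour size, irregularity) -> benign (0) or malignant (1).
train = [((1.0, 0.2), 0), ((1.2, 0.3), 0), ((3.5, 0.9), 1), ((4.0, 0.8), 1)]
print(nearest_neighbour(train, (3.8, 0.7)))  # nearest examples are malignant
```

Real ML systems use far richer models, but the pattern is the same: labelled examples in, a learned prediction rule out.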
In the future, AI analysis has the potential to work its way into all parts of patient care. Before surgery, it can help track a patient’s activity and access electronic health records. During surgery, it could assist the surgeon in making quick decisions based on the patient’s vital signs. After surgery, it can continue to collect and analyze patient data (Hashimoto, Rosman, Rus, & Meireles, 2018). This paper will discuss the application of AI to cervical, lung, and brain cancer treatment, including the use of detection machines, segmentation techniques, and prediction algorithms, and will weigh the challenges and social aspects of introducing AI to the medical field.
The Application of AI in Cervical Cancer Treatment
One of the cancers that will be discussed in this paper is cervical cancer. When dealing with AI and cancer detection, two of the most prominent issues are the invasiveness of diagnosis and the number of cases that are missed. While cervical cancer can be cured if found at an early stage, many women die every year because their cancer was not detected early enough and symptoms did not appear until it was too far advanced to treat. Cells in the cervix can be either squamous cells, which when infected cause squamous cell carcinoma, or glandular cells, which cause adenocarcinoma (P & M, 2018, p. 1). Because cervical cancer is difficult to detect and hard to treat once it has progressed too far, automated machines that can detect it could significantly improve the survival rate of women suffering from the disease.
A study performed in 2018 proposed an artificial-intelligence-assisted system that could automatically detect cervical cancer in patients. The first stage was preprocessing: a cervical image was taken and its contrast enhanced for better visibility using Oriented Local Histogram Equalization. Features such as roundness, sides, and circularity were then extracted from the image and used to train a neural network; these features discriminate between a healthy cervical image and a cancerous one. The neural network classifier would then identify a cervical image as either benign or malignant by comparing it to the features used for training. For classification of the tumour, a feed-forward backpropagation neural network was used to reach the highest possible accuracy. This type of network is built from an input layer, hidden layers, and an output layer. The input layer accepts the elements of the extracted features. Three “hidden” layers in between, each with a different function and a different number of neurons (calculated inputs from the previous layer), follow; an example of such a calculation is the average of all results from the previous hidden layer. The output layer is the network’s response and classifies the image as either normal or cancerous (P & M, 2018, p. 1).
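The forward pass described above (extracted features entering an input layer, flowing through hidden layers of neurons, and emerging as a benign/malignant decision) can be sketched in a few lines. The weights here are hand-picked for illustration; in the real system they would be learned by backpropagation on labelled cervical images:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron is a weighted sum of the
    previous layer's outputs passed through a sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(features, network):
    """Pass extracted image features (e.g. roundness, circularity) through
    the layers; the final value is the estimated probability of malignancy."""
    activations = features
    for weights, biases in network:
        activations = layer(activations, weights, biases)
    return activations[0]

# Illustrative hand-picked weights; a trained network learns these values.
network = [
    ([[2.0, -1.0], [-1.5, 2.5]], [0.0, 0.0]),  # hidden layer, 2 neurons
    ([[3.0, -3.0]], [0.0]),                    # output layer, 1 neuron
]
score = forward([0.9, 0.1], network)           # hypothetical feature vector
print("malignant" if score > 0.5 else "benign")
```

A real classifier would have many more neurons per layer, but the layer-by-layer computation is the same.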
Another important part of screening is finding cancerous lesions in images, and segmentation is a difficult but necessary part of this. The most common test for cervical cancer screening is cervical cytology, or the Pap smear test, which screens for malignant tumour cells in the cervix. A positive cytology test can show different types of abnormal epithelial cells, such as atypical squamous cells or atypical glandular cells. Segmentation is the process of separating masses in an image and is the most important step of cytology, as it identifies cells based on their structure and morphology. In the majority of cervical cancer cases, cell segmentation is followed by abnormality classification, which is frequently performed by feature-based machine learning algorithms as well as deep learning approaches. Feature-based classification relies on feature extraction; common features include the size, shape, colour, and texture associated with malignant tumours. Once feature extraction is complete, multiple algorithms can be used for classification. One group developed a radial basis function support vector machine that could classify image blocks into six categories, including blocks with many white cells, blocks with normal epithelial cells, and blocks with suspicious epithelial cells. The researchers noted that the blocks with suspicious cells had considerably different texture and colour features, which set them apart from the others (Conceição, Braga, Rosado, & Vasconcelos, 2019, p. 21). The support vector machine differs from the layered neural network because instead of passing inputs through a series of layers, a single function separates or classifies the inputs into categories. This method of classification skips the segmentation step entirely and saves a lot of time.
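The single-function decision rule of a radial basis function SVM can be sketched as follows. For simplicity this is a binary version (the real classifier handled six block categories), and the support vectors, feature meanings, and weights are hypothetical:

```python
import math

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (radial basis function) kernel: a similarity score that
    decays with the squared distance between two feature vectors."""
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * sq)

def svm_decision(x, support_vectors, bias=0.0):
    """The single decision function of an RBF support vector machine:
    a weighted sum of kernel similarities to the support vectors.
    The sign of the result picks the class - no layer-by-layer pass."""
    return sum(alpha * y * rbf_kernel(sv, x)
               for sv, y, alpha in support_vectors) + bias

# Hypothetical support vectors: (features, label +1/-1, learned weight),
# where features might be (mean colour, texture contrast) of a block.
svs = [((0.2, 0.1), -1, 1.0), ((0.8, 0.9), +1, 1.0)]
label = +1 if svm_decision((0.75, 0.85), svs) > 0 else -1  # suspicious block
```

The contrast with the feed-forward network is visible in the code: classification is one kernel-weighted sum, not a cascade of layers.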
Artificial neural networks can also serve as unsupervised classifiers, meaning they do not require labelled input data to be trained; these are another type of classifier that can study cell images and determine their level of abnormality.
Deep learning classification, in the form of a convolutional neural network, can be performed without segmentation. On the other hand, this type of network requires far more computational time and large amounts of labelled data, making it impractical in clinical settings (Conceição, Braga, Rosado, & Vasconcelos, 2019, p. 22).
With the help of such techniques, the survival rate can be increased and the chance of complications decreased. These two measurements can also be predicted by artificial intelligence to ensure proper treatment and patient comfort. In an experiment to test survival rate prediction, a data set was collected from 102 patients, all with cervical cancer, who had already undergone initial surgical treatment. The researchers identified 23 demographic variables, including age, BMI, and hormonal status, and 13 tumour-related parameters, including tumour size and number of lymph nodes, to direct the experiment. The computational intelligence methods that were applied had not previously been used to predict patient survival for cervical cancer treated by radical hysterectomy. Six of these were classifiers: the probabilistic neural network (PNN), multilayer perceptron network (MLP), gene expression programming (GEP), support vector machine (SVM), radial basis function neural network (RBFNN), and the k-Means method. The prediction ability of these models was determined by measuring accuracy, sensitivity, and specificity. The best results in predicting 5-year overall survival in cervical cancer patients who had undergone radical hysterectomy came from the PNN model (Obrzut, Kusy, Semczuk, Obrzut, & Kluska, 2017, p. 4). The PNN model, similar to the feed-forward backpropagation neural network mentioned earlier, is made up of an input layer, a pattern layer, a summation layer, and an output layer. The PNN model, along with other AI methods, can be applied to various medical classification jobs (Fig. 2).
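The four PNN layers named above map directly onto a short computation. The patient features, class labels, and smoothing width below are hypothetical toy values, not data from the cited study:

```python
import math

def pnn_classify(x, training, sigma=0.5):
    """Probabilistic neural network, mirroring the four layers in the text:
    the input layer takes the patient's feature vector; the pattern layer
    computes a Gaussian similarity to every training case; the summation
    layer averages those similarities per class; the output layer returns
    the class with the highest average."""
    sums, counts = {}, {}
    for features, label in training:               # pattern layer
        sq = sum((a - b) ** 2 for a, b in zip(features, x))
        k = math.exp(-sq / (2 * sigma ** 2))
        sums[label] = sums.get(label, 0.0) + k     # summation layer
        counts[label] = counts.get(label, 0) + 1
    return max(sums, key=lambda c: sums[c] / counts[c])  # output layer

# Hypothetical patients: (normalised age, tumour size) -> 5-year outcome.
cases = [((0.3, 0.2), "survived"), ((0.4, 0.1), "survived"),
         ((0.8, 0.9), "did not survive")]
print(pnn_classify((0.35, 0.15), cases))
```

Unlike a backpropagation network, a PNN needs no iterative training: the training cases themselves are the pattern layer.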
The prediction of complications occurring during or after surgery is also vital to determining a patient’s chance of survival. One study was performed on 107 individuals with cervical carcinoma who had undergone surgery, with a cervical biopsy taken to determine an AI algorithm’s ability to diagnose cancer. Complications around the time of surgery were evaluated both during and after the operation. The gene expression programming (GEP) algorithm, which creates and evolves computer programs, was used for this study. The GEP was compared with the multilayer perceptron (MLP), the radial basis function neural network (RBFNN), and the probabilistic neural network (PNN), all of which are feed-forward neural networks. Each of the tested models was ranked based on its specificity, accuracy, and sensitivity. The highest accuracy was found in the MLP neural network. Complications near the time of surgery occurred in 47 patients, although most of these were minor and did not severely harm the patients or put their lives in danger. More serious complications were found in 7 of those patients and included pulmonary embolism or a gastric ulcer rupture (Kusy, Obrzut, & Kluska, 2013, p. 4). This study goes to show that it is imperative to identify any risk factors of surgery and choose the appropriate course of treatment as soon as possible, because if procedures to remove cancerous tissue are postponed, the patient’s chance of survival is likely to decrease.
The Application of AI in Lung Cancer Treatment
Like cervical cancer, lung cancer is life-threatening; in fact, it is one of the leading causes of death in the world, so accurate diagnosis and treatment planning are extremely important for a patient’s survival. Recent breakthroughs in artificial intelligence, and specifically in deep learning algorithms that can solve complex problems by analyzing images, are giving scientists hope. Researchers in one study developed a deep learning model to aid lung cancer diagnosis and reduce the workload of pathologists. A convolutional neural network was trained to classify small patches of a histopathological lung image as either malignant or benign and had an accuracy rate close to 90% (Wang et al., 2019, p. 8). This method would enhance the diagnosis of lung cancer by allowing incredibly fast tumour detection when the region being studied is relatively small. Aside from diagnosis, prognosis is one of the key parts of cancer treatment: predicting whether a tumour will recur and how long a patient will survive is crucial to determining the proper course of treatment. Wang’s team developed yet another CNN model that could segment slide images by the boundaries of the nucleus. Different features of the nucleus were extracted and used in a model that predicted the chance of recurrence (Wang et al., 2019, p. 9).
Furthermore, scientists have found relationships between a patient’s genetic profile, pathological phenotypes, and the genetic mutations that cause tumours. Such biomarkers are evolving and can be a useful tool in helping physicians screen for and detect lung carcinoma. An ideal biomarker indicates biological, pathogenic, and pharmacologic processes and responses, and can inform clinical decisions in order to benefit a patient. Additionally, when used for undefined pulmonary nodules, a biomarker should be able to predict and anticipate the diagnosis of cancer so that treatment can be administered as soon as possible and overdiagnosis is avoided. Scientists have established a few promising biomarkers (Fig. 3), such as urine and saliva, that are currently used. Blood is another biomarker for lung cancer screening, as it can help identify and study the tumour and its surrounding space, any metastases, and the patient’s immune response. Sputum, which comes from the airway epithelium, can also be used for lung cancer and is able to supply data about molecular changes close to the tumour cells. Autoantibodies are another form of biomarker; they develop as a result of tumour formation before any signs appear on images (Seijo et al., 2019, p. 5). These autoantibodies have been discovered in all types of lung cancer, meaning that in the future they could be indicators of the disease. Further studies are examining newer biomarkers that can be used alongside AI to decrease lung cancer mortality rates. A nano-array sensor driven by artificial intelligence, with the potential to distinguish benign tumours from malignant ones, was used in a study to diagnose 17 different diseases from exhaled breath samples with an accuracy rate of over 85% (Seijo et al., 2019, p. 8).
Other prediction models that use AI were also able to distinguish malignant tumours from harmless nodules, promising a bright future for AI-based diagnosis.
AI platforms that use deep learning are also being considered as tools for fighting lung cancer. Deep learning models allow researchers to extract features from input data and have many layers and kernels (neurons in the layers between the input and output layers) that let them perform many functions using those extracted features (Wang et al., 2019, p. 5). Deep learning can recognize complex data patterns with no human input, and systems that use it are not subjective the way human physicians are. A more specific class of deep learning models is convolutional neural networks, or CNNs. These models learn features from images and can eventually even predict outcomes. CNNs have been used in classification, segmentation, and detection, learning from histopathological and radiographic imaging and showing great potential in both areas. The automated feature extraction that deep learning models perform is a huge advantage, as manually extracting features from pathology images is very time-consuming when the problem is challenging and complex, or when researchers do not know much about the input data and its relation to the outcomes the model will predict.
Like any disease, it is always helpful if physicians are able to predict a patient’s chance of survival. A recent study examined medical images and tumour information that could be helpful in prognostication efforts: 1,194 individuals with NSCLC who had been treated with either radiation or surgery had CT scans taken, and elements that would determine a prognosis, known as prognostic signatures, were detected using a convolutional neural network (Hosny et al., 2018, p. 1). The CNN was highly successful in separating patients based on their chance of mortality. The network was also trained to predict the likelihood of a patient’s survival two years after the start of treatment. After the experiment was complete, the scientists dove deeper to better understand the different features detected by the CNN and found the specific areas that had the greatest impact on the platform’s predictions. To understand which regions in the CT images are responsible for the network’s predictions, activation maps were created over the final convolutional layer; the intensity of the gradients in this layer determines how important each node was for the prediction. Most of what contributed to the predictions came from large areas of higher CT density within and around the tumour, while lower-density areas, such as uncommon vessels, did not contribute much (Hosny et al., 2018, p. 12). Normal tissue such as bone, which is of higher density, was ignored by the network, as it appeared in most if not all images and had no significance for the tests. All of the actions such a network takes (extraction, selection, prediction) are automated, with no accompanying explanation of why a certain prediction was made, which makes it hard to prepare for failure. Although limitations exist, there are still possibilities for tools that can be created.
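The activation-map idea can be sketched as an importance-weighted sum of the final-layer feature maps. In the real method the importance weights come from gradients; the tiny maps and weights below are toy values for illustration only:

```python
def activation_map(feature_maps, weights):
    """Class-activation-style heat map: scale each final-layer feature map
    by its importance weight and sum them, highlighting which image
    regions drove the network's prediction."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                heat[i][j] += wgt * fmap[i][j]
    return heat

# Two toy 2x2 feature maps; the first responds to the tumour region.
maps = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
heat = activation_map(maps, weights=[0.9, 0.1])
# The strongly weighted top-left cell dominates the heat map, marking
# that region as most responsible for the prediction.
```

This kind of map does not fully explain the model, but it at least shows where in the scan the evidence for a prediction was concentrated.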
An imaging tool with the ability to classify more specific information and identify treatment pathways would be helpful in managing all patients who suffer from NSCLC.
Lung cancer screening in developed countries is generally carried out using LDCT, or low-dose computed tomography. Although LDCT may be the favored pathway for lung cancer screening and detection in the United States and other developed countries, developing countries face other challenges that make it harder to integrate technology such as LDCT into routine clinical practice. It is very hard to develop lung cancer screening programs in underdeveloped countries due to the high prevalence of pulmonary tuberculosis and chest infections. These conditions share symptoms with lung cancer, such as fever, anorexia, weight loss, and cough; however, individuals with histories of smoking and a hoarse voice tend to be diagnosed with lung cancer (Shankar et al., 2019, p. 7). One of the most harmful consequences of using LDCT is that benign intrapulmonary lymph nodes and non-calcified granulomas are often hard to distinguish from pulmonary nodules, leading to many false positives and thus unnecessary radiation exposure, which can itself contribute to the formation of cancer (Shankar et al., 2019, p. 7). Solutions to this issue include computer-aided diagnosis systems with more sensitive detection, along with the previously mentioned biomarkers, which can make screening more efficient.
While LDCT is optimal, risks such as radiation and overdiagnosis, along with cost, make it hard to introduce at scale. AI-based methods have promising applications in imaging and radiology, such as cancer detection and decision support, and the application of AI to pulmonary oncology will open up many pathways for diagnosis and prognosis using clinical, pathological, and morphological features of patient scans.
The Application of AI in Brain Cancer Treatment
Another type of aggressive cancer that is hard to diagnose is brain cancer. When applied to data from MRI scans, AI has great potential in the field of neuro-oncology and is multi-purpose as it can help establish how harmful a tumor is, find invading gliomas, predict the chance of recurrence and survival of patients, assess the physician’s skills, and simulate cranial surgeries to strengthen neurosurgical training.
As in cervical cancer, segmentation is a large part of the diagnosis of brain tumors through radiomics, and it can be performed by deep learning AI platforms (Rudie, Rauschecker, Bryan, Davatzikos, & Mohan, 2019, p. 3). From these images, different features are extracted, including size, shape, texture, and patterns. Machine learning platforms are then used to find relationships between the features and determine the prognosis of the tumor. MR spectroscopy, which compares the chemical composition of normal brain tissue with abnormal tumor tissue, is used to classify gliomas and glioblastomas into different grades depending on severity, and it can identify regions where lymphocytic cells have invaded the tumor. Once a tumor is found, the current treatment method for gliomas and glioblastoma is resection of the tumor along with radiation or chemotherapy using a medication called temozolomide (Rudie, Rauschecker, Bryan, Davatzikos, & Mohan, 2019, p. 8). However, an effect that radiation and chemotherapy sometimes have is pseudoprogression, an apparent increase in the size of the primary tumor or the appearance of a new lesion.
Machine learning devices that take patient images into account can be used in such instances to predict whether pseudoprogression is likely to occur. Researchers recently performed a study in which an AI system was developed to outline characteristics of cancer cells in tissue grafts from patients, taken from both the primary tumor and brain metastases (tumors in the brain that have formed from the original tumor). A live-cell imaging algorithm was combined with AI to study the movement of cells toward areas of damaged tissue and to distinguish cells with and without brain metastatic potential. The study presented a device that could make calculations and predictions with the help of AI. The platform uses a 3D measurement of cancer cell behavior in a BBB, or blood-brain barrier, model outside of the organism to determine which cells have brain metastatic characteristics. The visual differences between cancer cells that can form metastases in the brain and those that cannot are very slight, but the studies that used AI to identify these distinctions showed a large difference in the behavior of cancer cells and normal cells when they encountered the BBB, making the AI device a very helpful tool for recognizing pseudoprogressions and potentially predicting them (Oliver et al., 2019, p. 4). However, there are always limitations that come along with medical breakthroughs such as these. While the ex vivo model in this experiment is able to identify differences between cells that can and cannot cross the barrier, the characteristics of cells with metastatic potential are still inconclusive. There are not yet enough detectable features of a cell to allow an AI algorithm to accurately predict whether a cell will metastasize. Furthermore, a brain cancer patient’s brain will have already changed in some way before diagnosis, making it more prone to the formation of metastases (Oliver et al., 2019, p. 8).
Once properly developed, the use of AI to detect cells with the potential to metastasize in the brain will increase survival rates. Artificial intelligence can also be used to predict the chances of survival for patients suffering from cancer. A recent study used an artificial intelligence tool called DeepSurvNet, which runs on neural networks, to determine brain cancer patients’ survival rates and sort them into four classes based solely on their histopathological images. To train the model, researchers used a dataset created from the medical records of brain cancer patients with four different types of brain cancer. Four classes were used to classify patients by the time between their brain cancer diagnosis and death. Multiple regions of interest in the tumors from the imaging slides were also allocated to each of these classes. The model was then tested on completely new sets of histopathological slides: glioblastoma tissue sections from nine new patients, stained with H&E dyes, were analyzed. The device classified most of each patient’s samples into a single class, which was anticipated, as the regions of interest are all taken from the same tissue sample (Zadeh Shirazi et al., 2020, p. 9). With the DeepSurvNet classifier, physicians can use the differences between tumors associated with different lengths of patient survival to create specialized treatments and significantly decrease patient mortality.
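One step in that pipeline can be sketched simply: each region of interest (ROI) gets a per-ROI class prediction, and the patient-level call aggregates them. The paper reports that ROIs from the same slide mostly fall into one class; a majority vote is one plausible aggregation (my assumption for illustration, with invented per-ROI labels, not the paper's exact procedure).

```python
from collections import Counter

# Hypothetical aggregation step: a classifier assigns each region of
# interest (ROI) from a patient's slide to one of four survival classes
# (0-3); the patient-level call is the majority class across ROIs.
# The per-ROI labels below are made up for illustration.

def patient_class(roi_predictions):
    counts = Counter(roi_predictions)
    return counts.most_common(1)[0][0]

roi_predictions = [2, 2, 2, 1, 2, 2]    # six ROIs from one H&E slide
label = patient_class(roi_predictions)  # majority vote
```

Aggregating over several ROIs makes the patient-level prediction robust to a single atypical patch on the slide, which is consistent with most samples landing in one class.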
In addition to being hard to diagnose, neurological cancer is a rare condition, so doctors do not see many patients with it and therefore lack training. Artificial intelligence that can deliver feedback based on a user’s touch has the potential to create a realistic environment for trainees to practice their surgical skills without having to operate on real patients. In particular, surgical simulations can be used in training for neurosurgery, as the tasks required in the field are highly technical and must be performed under large amounts of pressure, since one mistake could lead to severe consequences. A study on the Virtual Operative Assistant shows the benefits of using AI to conduct training that tests cognitive skills and determines an operator’s level of psychomotor expertise through a surgical simulation. During the experiment, 50 participants with differing levels of expertise were recruited and classified into two groups: skilled and novice. The classification was completed after all participants completed a complex virtual reality simulation in which they had to remove a brain tumor located in the subpial tissue beneath the pia mater, a type of connective tissue, using two devices, one in each hand. For the Virtual Operative Assistant to perform the classification, over 250 performance metrics representative of differing levels of surgical expertise were generated, and only the 4 metrics with the highest level of accuracy were chosen after careful consideration and selection (Mirchi et al., 2020, p. 4).
After the machine learning algorithm computed its classification of “skilled” or “novice,” it also gave users a breakdown of its assessment on both safety and movement metrics. Rather than assessing each metric on its own, the Virtual Operative Assistant included the relationships between metrics, allowing students to recognize that one strong metric may be compensating for poor performance in another. The three forms of feedback that the Virtual Operative Assistant can give users, auditory, text, and video-based, are what make it extremely beneficial. This new technology enables scientists to gauge an individual’s expertise, identify cognitive expertise in tasks much too complex for human teachers to notice, and mimic real-life training, all of which make it a strong tool for simulation-based learning.
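The narrowing of 250 candidate metrics down to the few most discriminative ones can be illustrated with a minimal sketch: score each metric by how cleanly it separates the skilled group from the novice group, then keep the top k. The scoring rule (an absolute standardized mean difference), the metric names, and the values are all invented for illustration; the study's actual selection procedure was more careful.

```python
import statistics

# Minimal metric-selection sketch: rank candidate performance metrics by
# how well each separates "skilled" from "novice" participants, keep the
# top k. Metric names and values are hypothetical.

def separation_score(skilled_vals, novice_vals):
    # Absolute difference of group means, scaled by overall spread.
    diff = abs(statistics.mean(skilled_vals) - statistics.mean(novice_vals))
    spread = statistics.pstdev(skilled_vals + novice_vals) or 1.0
    return diff / spread

def top_k_metrics(metrics, k):
    scored = sorted(metrics.items(),
                    key=lambda kv: separation_score(*kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

metrics = {
    # name: ([skilled values], [novice values])
    "instrument_tip_distance": ([1.0, 1.1, 0.9], [3.0, 3.2, 2.8]),
    "force_applied":           ([2.0, 2.1, 1.9], [2.0, 2.2, 1.8]),
    "tissue_damage_volume":    ([0.1, 0.2, 0.1], [0.9, 1.0, 1.1]),
}

selected = top_k_metrics(metrics, k=2)
```

Here "force_applied" looks nearly identical across groups and is dropped, while the two metrics with large group differences survive, which is the intuition behind keeping only the most accurate 4 of 250.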
Evolving Technologies and Further Uses in Medical Education
Simulation-based training systems, such as the Virtual Operative Assistant, are able to develop checklists that evaluate different skills using machine learning algorithms (Sapci & Sapci, 2020). While there are numerous applications for AI in cancer treatment, including screening and detection, survival rate prediction, and surgical simulations that allow doctors to more efficiently develop surgical skills and treat patients, AI platforms do pose many challenges. Among the challenges that AI platforms in medical training pose, feedback and liability issues are two of the most prominent. A study done at Mount Sinai Hospital built a deep learning system using data from 700,000 patients (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019). The algorithm was highly accurate and was able to diagnose conditions that even experts struggle to diagnose, such as schizophrenia. However, AI systems often lack the ability to provide users with a proper explanation of how a certain answer or prediction was reached (Fig. 4a). These algorithms cannot properly understand the cognitive thinking of learners and therefore cannot properly train them in the areas where they actually need training. This raises the issue of liability: it becomes very hard for patients to trust a system that cannot explain how it reached a diagnosis, and if an incorrect calculation puts a patient in danger, it is not known whether the doctor, the hospital, or the company that developed the AI device is liable (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019).
Because of their inability to comprehend the emotional reasoning of users and provide appropriate feedback, AI-powered teaching platforms enable students to “cheat” the system. Many algorithms in artificial intelligence teaching tools do not actually train surgeons and increase their skill level; rather, they assume a student is skilled in a certain area because the student was able to accomplish one specific task. In the case of the Virtual Operative Assistant, this ability to cheat (Fig. 4b) can be credited to the relatively broad parameters that classify students as either skilled or novice (Mirchi et al., 2020, p. 16). In the experiment completed using this platform, 4 participants who were actually at the novice level were misclassified as skilled. Such errors make it difficult to trust AI-powered teaching tools and to implement them in routine medical practice and surgical training. This is where human expertise comes in and proves essential to the learning process. If AI platforms undergo diligent training alongside human experts who can properly assess the algorithm and recognize specific markers of a good surgeon, then cheating the system would be much less likely. Furthermore, the issue of learners feeling disconnected from their teacher due to a lack of feedback and properly supported explanations, which can actually damage a student’s skill level, can be resolved by human interaction (Chan & Zary, 2019). AI may be most useful for tasks such as computerised testing and cancer screening or diagnosis, but if physicians and AI-based machines can work in harmony, patient outcomes are likely to improve, as AI has the potential to process and analyze large amounts of data, including medical reports, notes from pharmacists, and genetic reports.
Nonetheless, one major thing AI cannot replace is the value of doctor-patient and doctor-student interactions (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019).
Doctor-Patient Feedback and Interpretation
AI in healthcare is expected to grow rapidly in the years to come, but with growth come limitations, which is why it is crucial for AI to be implemented into the healthcare system with ethical and legal considerations in mind. A large limitation is that AI systems do not have feelings and cannot care for or sympathize with patients the way doctors do. The “quadruple aim” of healthcare consists of improving the experience of care, improving the health of populations, reducing per capita costs of healthcare, and improving the work life of healthcare workers (Kelly, Karthikesalingam, Suleyman, Corrado, & King, 2019, pg. 1). But healthcare systems are struggling to meet these goals.
The FDA has already cleared close to 40 AI-based devices for medical use. One of these is the IDx-DR, a system that can output a screening decision without human interpretation of the image or results; the system then recommends that the physician either rescreen the patient or refer them to a specialist (Gerke, Minssen, Cohen, Bohr, & Memarzadeh, 2020, pg. 5). However, while AI can improve imaging, diagnosis, and surgery, it will be difficult to manage AI when informed consent is considered. It is a common question whether it is the physician’s responsibility to inform the patient about AI and the way it works, and whether doctors must even inform the patient at all that AI is being used. Some argue that it does not matter how an AI system reaches its prediction, only whether the decision is correct, but this causes problems in certain cases: many machine learning algorithms are “black boxes,” meaning even the inventor does not know how the program reaches its final decision. The datasets used to train the algorithms also need to be reliable, trustworthy, valid, and effective; the better the training data, the better the accuracy of the AI algorithm. Even after the first model is developed, the program will need further tweaks, including addressing data bias. Many AI algorithms have been shown to exhibit bias with respect to ethnic origin, gender, age, or disability. These biases could lead to false diagnoses and jeopardize the safety of patients by making treatments ineffective (Gerke, Minssen, Cohen, Bohr, & Memarzadeh, 2020, pg. 10). If an AI algorithm outputs a treatment recommendation that a human physician would not have picked, making it wrong, and the physician decides to use it anyway and it harms the patient, then it is likely that the physician is at fault for medical malpractice.
It is also important to consider whether hospitals are at fault for purchasing and applying AI systems in their practice; this is why AI should be used to assist decision making rather than be fully depended on.
For patients suffering from kidney failure, or end-stage renal disease, a renal transplant is the best option for survival, yet dialysis is often the more realistic choice due to the shortage of organ donors. However, current dialysis software is not equipped to respond to unanticipated events that can occur during dialysis treatment. Miniature artificial kidneys that can provide personalized dialysis treatment and are capable of real-time monitoring are currently being developed (Hueso et al., 2018, pg. 5). Data analytics supplied by fields of artificial intelligence such as machine learning and computational intelligence are expected to play a principal role in making sure these new dialysis technologies are both efficient and safe for patients. Due to the complexity of the technologies involved in these dialysis machines, there are challenges to implementing the devices in healthcare and biomedicine. Data analytics and AI provide the baseline for medical decision support systems, but the application of these algorithms in the medical field has its challenges, the biggest being ethical (Hueso et al., 2018, pg. 2). Although such a device will make the jobs of medical professionals easier, it may make interactions with patients uncomfortable and cause them to lose trust in their doctors. For example, these automated devices cannot explain the reasoning behind their decisions or empathize with patients the way doctors can, making it hard for patients to understand their own course of treatment. AI algorithms are not able to express the relationship between the data they have observed and the outcomes formed as a result.
While it is possible for an algorithm to overcome all of these challenges, human-computer interaction is one of the key aspects of gaining a better understanding of the way algorithms interpret data. Multiple algorithms produce great results but lack the ability to explain why they arrived at those particular results. Even if scientists understand the math involved in creating an algorithm, it is often virtually impossible to determine why the model made a specific decision. This is problematic, as it has rendered many algorithms untrustworthy, uninterpretable, and unexplainable. There is therefore a tradeoff between performance and explainability: deep learning models have a very high level of performance but are hard to interpret, while linear regression models and decision trees are relatively easy to interpret but have poorer predictive performance. Interpretability of an AI algorithm is the ability of a human to understand how it made a connection between the extracted features and its predictions. Approaches to solving the interpretability issue can be categorized by whether they need internal information, such as parameters, to operate (also known as the level of transparency), that is, by how much access there is to the internal information of the model. Methods that require access to internal information are considered to be working on “white boxes” (Reyes et al., 2020, pg. 2). An example is a CNN, or convolutional neural network, where a radiologist uses a given layer of the network to create a map that can be laid over an image to show which regions are important for predicting whether the patient has a disease. Black-box methods, on the other hand, do not need access to this internal information and instead work directly with the input and output of the model to analyse how changes to the input change the output.
There are multiple visual techniques that give insight into the way AI algorithms behave and how they arrive at certain decisions. Two basic approaches are partial dependence plots and individual conditional expectation plots. Both methods are used to interpret black-box models and show how a model’s predictions depend on its features, which helps predict which features will change the prediction when their values change (Reyes et al., 2020, pg. 2).
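The partial dependence idea is simple enough to sketch directly: clamp one feature at a value, average the model's predictions over the rest of the dataset, and repeat across a grid of values. The "black-box model," feature names, and data below are invented for illustration, and the model is only ever called, never inspected, exactly as a black-box method requires.

```python
# Toy partial dependence computation for a black-box model: fix one
# feature at each grid value, average predictions over the dataset.
# Model and data are hypothetical.

def model(x):
    # Black box from the interpreter's point of view: we only call it.
    tumour_size, patient_age = x
    return 0.1 * tumour_size + 0.01 * patient_age

def partial_dependence(model, data, feature_index, grid):
    pd_curve = []
    for value in grid:
        preds = []
        for row in data:
            probe = list(row)
            probe[feature_index] = value    # clamp the feature of interest
            preds.append(model(probe))
        pd_curve.append(sum(preds) / len(preds))  # average over dataset
    return pd_curve

data = [(2.0, 40), (3.0, 60), (5.0, 80)]   # (tumour_size, patient_age)
grid = [1.0, 2.0, 3.0]
pd_curve = partial_dependence(model, data, feature_index=0, grid=grid)
```

Plotting `pd_curve` against `grid` would show the prediction rising with tumour size while age is averaged out; an ICE plot keeps one curve per data row instead of averaging, revealing whether individual patients deviate from the average trend.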
The goal of interpretability isn’t to understand exactly how an AI system works, but to have enough information to understand it to the best extent possible, and full understanding is not always necessary. A wrong diagnosis in radiology can lead to extreme consequences for a patient, and reading images is prone to interpretation errors. Interpretability is a fast-evolving field at the center of AI research, with great potential for the future development of safe AI technologies (Kelly, Karthikesalingam, Suleyman, Corrado, & King, 2019, pg. 1). But before AI can be implemented in various tasks within radiology, task-specific interpretability solutions are required, and if “black box” algorithms are used in medicine, they need to be used with a great deal of judgement and responsibility. AI developers should be aware of the many consequences algorithms can unintentionally lead to and make sure they are created with all patients in consideration. Involving doctors and surgeons in this process can increase its efficiency significantly. If the interpretability of algorithms can be improved, then human-algorithm interaction will be smoother, and the future adoption of AI, with due consideration of data protection, fairness and transparency of algorithms, and safety, will be supported by a large number of physicians.
The Impact of AI on Jobs
While many physicians may support the implementation of AI, no machine can work at its full potential without the presence of doctors, yet studies have shown that medical students are not spending enough time getting acquainted with newer technologies that involve artificial intelligence. Currently, medical education is centered around six major areas: medical knowledge, communication skills, patient care, professionalism, practice-based learning, and systems-based practice (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019). Most of this training focuses on taking in large amounts of information and applying it to patient care, a process based mostly on memorization. In order to improve outcomes in clinical settings, students need to learn how AI functions and how it can augment their work. The many promises of AI include automated image segmentation, detection of cancerous lesions, and comparison of images. While it can be fatiguing for human pathologists to detect small traces of cancer on a slide, AI systems are not affected by fatigue and can scan any number of slides without losing accuracy. AI can also help physicians improve the quality of patient care by taking over repetitive and tedious tasks and managing large amounts of data, in addition to serving as a second opinion in decision making.
With AI algorithms showing such promise in radiology, pathology, and cardiology, a question that arises is whether AI algorithms will replace human physicians. Recent data suggests that, in image and predictive analysis, AI might soon prove more efficient than radiologists. However, it is likely that AI will not replace general physicians, but rather augment them. For example, an AI system is able to take over the job of a factory worker who performs a certain task repeatedly, but in the case of medical professionals, AI is unable to engage in the interactions with patients that are crucial to gaining their trust and reassuring them. One study on this topic deals with breast cancer. It suggests that digital mammography is not perfect, with a sensitivity of only around 85%; the other 15% that goes undetected is a result of human error, the limits of what radiologists are able to identify on scans. Furthermore, the question of whether this practice is ethical is an important one. While replacing human workers with AI systems may benefit the economy as a whole, the effect it has on the individuals whose jobs are taken away is detrimental.
In most cases, technology is designed to perform a specific task, which changes the demand workplaces have for certain skills. These changes can influence the skill requirements, social well-being of workers, and career mobility for different occupations. Limitations on data to train AI algorithms will restrict these skill pathways, but scientists can surmount this obstacle by prioritizing data collection that is detailed and responsive to real-time changes in the labor market (Frank et al., 2019). This improved data collection will enable new tools that rely on data, including machine learning systems that more accurately reflect the complexity of labor systems. New data will also lead to new research that strengthens our understanding of the impact of technology on the supply of and demand for labor.
However, AI systems do tend to produce a number of false positives, which can result in extreme measures being taken without certainty that harmful cancer is present in the body. This is where radiologists remain crucial in the medical field, even with the presence of AI. False positives can also lead to anxiety, along with unnecessary biopsies and tissue removal (Ahuja, 2019). AI has the potential to assist and augment physicians rather than replace them entirely, by combining data and helping in the decision-making process by recommending certain treatment options. AI can also remove some of the burden of work from physicians by performing tedious tasks. Speech recognition is another useful tool that can replace keyboards, and decision management can help physicians make more informed decisions that take into account both patient outcome and cost of treatment (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019).
Because the field of AI is relatively new, its implementation in the real world raises a number of questions about the ethics of the technology. Within AI algorithms themselves, one prominent issue is model bias. The data used to train such algorithms can be influenced by multiple outside factors, including the biases of the humans who collected the data. As a result, an algorithm may be biased toward a specific group when predicting whether an individual should receive a certain treatment. It is important for researchers to consider this aspect of AI and work toward mitigating the effect of such biases. Data that is not representative of a large population can result in a model biased toward the subjects most prevalent in the dataset. In addition, for the fairest and most accurate model performance, it is imperative for scientists to split their data so that platforms are tested with images separate from the training data.
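The train/test split mentioned above can be sketched in a few lines: partition the data so the model is evaluated only on examples it never saw during training. Splitting at the patient level (rather than the image level) is a common precaution, since one patient may contribute several images; the patient IDs below are invented.

```python
import random

# Minimal train/test split sketch: shuffle reproducibly, hold out a
# fraction for testing, and keep the two sets disjoint so evaluation
# never touches training data (avoiding leakage).

def train_test_split(items, test_fraction=0.2, seed=42):
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)   # reproducible shuffle
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Split at the patient level: all of a patient's images follow the
# patient into exactly one of the two sets.
patient_ids = [f"patient_{i:03d}" for i in range(100)]
train, test = train_test_split(patient_ids)
```

Testing on held-out patients is what justifies reported accuracy figures; a model evaluated on its own training images would look artificially strong and could hide exactly the biases discussed above.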
The first issue that seems to directly influence patients is that, although AI may not entirely replace doctors, it will significantly alter relationships between patients and their physicians and nurses. Many companies that distribute electronic health records have overlooked this disadvantage and focused only on the positive aspects, including how AI will be able to simplify interactions with complex data and reduce the time it takes to complete tasks. However, to many patients, maintaining relationships with their doctors is incredibly important for their own comfort and satisfaction. If AI algorithms take over scheduling appointments, making payments, and even running follow-up visits, then this doctor-patient interaction time will be compromised. Furthermore, it is important to take into account the immense amount of data that algorithms require access to for training. While the majority of companies keep their data protected in order to abide by HIPAA, a privacy law that creates national standards to protect personal medical data, some organizations do allow their data to move freely in and out of their company. This sacrifices patient privacy and security in a way that had much less effect before AI was implemented in medicine. Finally, the legal responsibilities that come with having a hospital run by AI are vast. At first glance, it often seems that negative consequences are the responsibility of the provider of the AI algorithm. Providers do need to be certain that the algorithms they provide to hospitals use relevant and accurate data and make decisions in the most beneficial way possible, but questions surrounding this topic remain unanswered. One could also argue that negative outcomes are the doctors’ fault because they relied too heavily on an algorithm instead of using their own expertise to make a decision.
In the end, it is the responsibility of all contributors (providers of AI, developers of AI, patients, doctors, and everyone else involved in the process) to make sure artificial intelligence develops in the medical field in a safe and ethical manner.
Through its multitude of uses across the field of medicine, and oncology specifically, AI has the potential to transform the way physicians work and the way patients are treated. Within cervical cancer, lung cancer, and breast cancer, AI algorithms are able to detect lesions in scans and use segmentation techniques to separate masses within an image and identify cells based on their structures. Furthermore, they have the ability to utilize these images and extracted features to classify images and predict the possibility of recurrence and even the chance of patient survival. There are currently a number of platforms being developed that can perform these tasks, and more being developed to teach medical students how to interact with technology and practice their skills in a realistic setting before actually doing so. In the future, AI is set to enable faster and more accurate diagnosis, reduce human errors that result from fatigue, complete repetitive and tedious labor tasks, decrease medical costs, perform minimally invasive surgery, and increase survival rates. Specific examples of prospective applications for AI as the field grows are in analyzing relationships between patient outcomes and the treatments administered, diagnosis, forming protocols for certain treatments, personalized medicine, and patient care. However, despite these fascinating advancements in technology and medicine and the tremendous potential AI has to revolutionize medicine, there are still some things that AI is not able to accomplish. It lacks the ability to have social interactions with patients the way human doctors can, and it will continue to take jobs from employees around the world as its role expands.
In order to surmount these obstacles, scientists will have to consider how much trust patients are willing to place in their doctors’ use of AI, as well as the economic impact of AI and how it could in turn harm the economy by taking away large numbers of jobs. To improve the success of AI in the fields of cancer detection, diagnosis, and treatment, these factors must be taken into account.
Bi, W., Hosny, A., Schabath, M., Giger, M., Birkbak, N., Mehrtash, A., . . . Aerts, H. (2019, March). Artificial intelligence in cancer imaging: Clinical challenges and applications. Retrieved November 18, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6403009/
Frank, M., Autor, D., Bessen, J., Brynjolfsson, E., Cebrian, M., Deming, D., . . . Rahwan, I. (2019, April 2). Toward understanding the impact of artificial intelligence on labor. Retrieved November 03, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6452673/
Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare (A. Bohr & K. Memarzadeh, Eds.). Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7332220/
Hosny, A., Parmar, C., Coroller, T., Grossmann, P., Zeleznik, R., Kumar, A., . . . Aerts, H. (2018, November 30). Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6269088/
Hueso, M., Vellido, A., Montero, N., Barbieri, C., Ramos, R., Angoso, M., . . . Jonsson, A. (2018, February). Artificial Intelligence for the Artificial Kidney: Pointers to the Future of a Personalized Hemodialysis Therapy. Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5848485/
Kusy, M., Obrzut, B., & Kluska, J. (2013, December). Application of gene expression programming and neural networks to predict adverse events of radical hysterectomy in cervical cancer patients. Retrieved October 02, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3825140/
Mirchi, N., Bissonnette, V., Yilmaz, R., Ledwos, N., Winkler-Schwartz, A., & Del Maestro, R. (2020, February 27). The Virtual Operative Assistant: An explainable artificial intelligence tool for simulation-based training in surgery and medicine. Retrieved August 10, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7046231/
Obrzut, B., Kusy, M., Semczuk, A., Obrzut, M., & Kluska, J. (2017, December 12). Prediction of 5-year overall survival in cervical cancer patients treated with radical hysterectomy using computational intelligence methods. Retrieved October 02, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5727988/
Oliver, C., Altemus, M., Westerhof, T., Cheriyan, H., Cheng, X., Dziubinski, M., . . . Merajver, S. (2019, March 27). A platform for artificial intelligence based identification of the extravasation potential of cancer cells into the brain metastatic niche. Retrieved August 24, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6510031/
Reyes, M., Meier, R., Pereira, S., Silva, C., Dahlweid, F., Von Tengg-Kobligk, H., . . . Wiest, R. (2020, May 27). On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities. Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7259808/
Seijo, L., Peled, N., Ajona, D., Boeri, M., Field, J., Sozzi, G., . . . Montuenga, L. (2019, March). Biomarkers in Lung Cancer Screening: Achievements, Promises, and Challenges. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6494979/
Shankar, A., Saini, D., Dubey, A., Roy, S., Bharati, S., Singh, N., . . . Rath, G. (2019, May). Feasibility of lung cancer screening in developing countries: Challenges, opportunities and way forward. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6546626/
Wang, S., Yang, D., Rong, R., Zhan, X., Fujimoto, J., Liu, H., . . . Xiao, G. (2019, October 28). Artificial Intelligence in Lung Cancer Pathology Image Analysis. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6895901/
Zadeh Shirazi, A., Fornaciari, E., Bagherian, N., Ebert, L., Koszyca, B., & Gomez, G. (2020, May). DeepSurvNet: Deep survival convolutional network for brain cancer survival rate classification based on histopathological images. Retrieved August 24, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7188709/
Selman Waksman first used the word antibiotic as a noun in 1941 to describe any small molecule made by a microbe that antagonizes the growth of other microbes. Nearly 80 years later, as of 2019, 123 countries reported the existence of extensively multi-antibiotic-resistant tuberculosis. A year earlier, Isabelle Carnell-Holdaway, a cystic fibrosis sufferer, was put in intensive care after an aggressive infection of Mycobacterium abscessus, a relative of tuberculosis, spread to her liver and put it at risk of failure. With no new classes of antibiotics discovered and made available for routine treatment since the 1980s, she was left with a 1% chance of survival. Yet in under two years, Isabelle went from receiving end-of-life care to preparing to sit her A-levels and learning to drive. It had taken an experimental bacteriophage therapy, rather than antibiotics, to save the life of a girl with a seemingly untreatable bacterial infection. This article explores the factors that hindered the discovery of an antibiotic that could have treated Isabelle: antimicrobial resistance, the misuse of existing antibiotics, and the brain drain in research and development caused by a lack of sufficient financial incentive for pharmaceutical companies.
HOW DO ANTIBIOTICS WORK?
The first antibiotic was discovered by Alexander Fleming in 1928. Nearly 100 years later, we have over 100 different antibiotics available, each fitting into one of two categories: bacteriostatic and bactericidal. The former slow the growth of bacteria by interfering with processes the bacteria need to multiply, such as nucleic acid synthesis, enzyme activity, and protein synthesis. The latter, of which penicillin is an example, directly kill the bacteria, for instance by interfering with the formation of cell walls.
The main problem that made Isabelle’s treatment so difficult was resistance. Bacteria are termed drug-resistant when they are no longer inhibited by an antibiotic to which they were previously sensitive. An estimated 700,000 people die each year from drug-resistant infections, a figure projected to rise to 10 million by 2050. This resistance can present itself in one of four ways. First, a bacterium can reduce the intracellular accumulation of an antibiotic by decreasing membrane permeability and/or increasing the efficiency of efflux pumps that pump the antibiotic back out; tetracycline resistance determinants, for example, improve the efflux pumps located in the surface of bacterial cells, enhancing their ability to remove the drug. Second, resistance can arise by altering the target site of an antibiotic, which reduces its binding capacity and thus its uptake. An example is the OprD proteins: these are porins, meaning they mediate the uptake of molecules, so their loss or modification preferentially blocks the entry of drugs like imipenem. Third, bacteria can acquire the ability to inactivate or modify the antibiotic itself: penicillin’s efficacy can be undermined by beta-lactamase, an enzyme produced by the target bacterium that renders penicillin’s action on cell wall synthesis useless. Finally, bacteria can modify metabolic pathways to circumvent the antibiotic’s effect. Quinolones target bacterial gyrases, enzymes associated with the supercoiling of DNA; under normal circumstances, when the gyrases are inhibited, the DNA is unable to reorganise itself during cell division. A mutation in a gyrase gene allows cell division to proceed as normal while diminishing the effect of quinolones. Thus, one reason the discovery of new antibiotics is so difficult is that bacteria are equipped with several distinct mechanisms that undermine the fundamental ways antibiotics work.
How Is Resistance Acquired?
Resistance arises through mutation or through the sharing of DNA via mobile genetic elements. The latter can occur in one of three ways. The first is transduction, which uses viral mobile genetic elements: bacterial DNA is accidentally packaged into a bacteriophage capsid after infection, and if that capsid binds to a recipient cell and injects the foreign DNA, successful recombination of the donor DNA into the recipient’s genome means the bacteriophage has helped transfer resistance genes. The second is conjugation, which uses plasmids: extrachromosomal loops of DNA that replicate independently of the bacterium’s genophore. When physical contact is made between two cells, a pilus bridge forms that enables the transfer of a plasmid, which may carry a gene for antibiotic resistance. Finally, resistance genes can also be transferred during transformation. Several antibiotic-resistant pathogens are capable of this process, including Escherichia and Klebsiella, which are leading causes of antibiotic-resistant infections acquired in hospitals. Transformation happens when genes released from nearby microbes are taken up directly by another. This means that a single bacterium can lead other bacteria, previously sensitive to antibiotics, to inherit these mutations without needing to be its direct offspring, potentially ensuring that the whole microbial community is protected from the antibiotic and rendering it useless.
As mentioned above, the reproduction of mutant resistant bacteria is also paramount to understanding the difficulty of discovering new antibiotics. Resistance is an adaptation that occurs as a result of directional selection. When antibiotics are introduced into a community of bacteria, a selection pressure is created. Because of the extensive genetic variation initially present, some bacteria will inherently carry alleles conferring resistance, allowing them to survive, reproduce, and pass those alleles on to their offspring; those without the alleles die off. Resistance thus becomes a selective advantage, and the allele frequency increases within the population. In ideal conditions, some bacterial cells can divide by binary fission every 20 minutes. After only 8 hours, a single cell can therefore give rise to in excess of 16 million bacterial cells carrying resistance to a given antibiotic: in the wrong hands, a new antibiotic could be rendered useless overnight. For contrast, millions of years of evolution passed before primates emerged with an enzyme that could efficiently digest alcohol, and even with this useful mutation, alcohol poisoning remains a problem, with alcohol-specific deaths in the UK reaching 11.8 per 100,000 people in 2019. Another reason for the difficulty of antibiotic discovery, then, is the basic biology of bacteria, which allows them to adapt to selection pressures and evolve at an exponential rate.
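The arithmetic behind that 16-million figure is a simple doubling calculation, sketched here in Python. This is an idealised model that assumes one founding cell, a constant 20-minute division time, and no cell death or nutrient limits, none of which hold indefinitely in a real culture:

```python
# Idealised growth of a single resistant bacterium dividing by binary fission.
# Assumptions (not from lab data): one founding cell, constant division time,
# unlimited nutrients, no cell death.

DIVISION_TIME_MINUTES = 20
HOURS = 8

# Number of doublings that fit into the time window.
generations = (HOURS * 60) // DIVISION_TIME_MINUTES  # 24 doublings

# Each doubling multiplies the population by 2.
population = 2 ** generations

print(f"{generations} doublings -> {population:,} cells")
# 24 doublings -> 16,777,216 cells
```

Even this toy calculation makes the article's point: one surviving resistant cell at breakfast can be a population of over 16 million by the end of a working day.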
The scenarios in which antibiotics act as a selection pressure are not limited to treating infections in patients; they extend to the agricultural industry, which is becoming a growing hindrance to the efficacy of existing antibiotics and is partly responsible for the rise of superbugs such as MRSA. According to research published by Public Health England, more than 20% of antibiotics prescribed in primary care in England are inappropriate (i.e., used in cases where they are unnecessary, such as treating viral infections). This statistic demonstrates the need for antimicrobial stewardship in a society that treats this marvel of biology as a limitless commodity. Furthermore, there is a strong link between rising rates of antibiotic prescription and the emergence of resistant bacteria, meaning there is an increasing need for narrower-spectrum drugs to prevent a complete antimicrobial apocalypse.
Linked to this, our dependence on extremely potent narrow-spectrum antibiotics is being threatened by the agricultural industry. According to statistics from the UN’s Food and Agriculture Organisation, around 20 billion animals are being kept as livestock at any one moment. To keep maintenance costs low, they are often kept in unhygienic, extremely cramped spaces: the optimum breeding ground for disease. Antibiotics tend to be used as a catch-all, treating illness in some animals and acting as a prophylaxis in others. This system has produced ever more bacteria that are resistant to antibiotics. Though there are strict rules governing the use of strong antibiotics against already-resistant bacteria to counteract this, they are not enough to keep up with the growing gap between resistant bacteria and the development of antibiotics against them. In late 2015, China reported the existence of bacteria displaying resistance to colistin. This was a surprise, as the drug had rarely been used up to that point (kidney damage is a common side effect), existing only as a last-resort option for complex infections occurring in hospitals. The resistance arose because millions of animals in Chinese pig farms had been given colistin over the course of many years. As before, this acted as a selection pressure, eventually increasing the number of pigs carrying colistin-resistant bacteria, which then crossed over to humans through the food chain. Therein lies a huge threat to the discovery of new antibiotics: finding a balance between mitigating side effects to allow safe use in humans and finding a drug strong enough to deal with strains already resistant to those that are almost too unsafe for human use.
One reason for the decline in antibiotic discovery is a lack of financial incentive for pharmaceutical companies. To refer back to Isabelle’s case, phage therapies are considered to be approximately 50% cheaper than antibiotics. Furthermore, as Gerry Wright noted in a TED Talk, antibiotics have become so unprofitable that only four major pharmaceutical companies still run active antibiotic research programmes. Profit margins for antibiotic discovery are low in a pay-per-pill system, since a good new antibiotic will be prescribed sparingly and rotated with others to limit the emergence of resistance in the long term. As a result, pharmaceutical production is dominated by treatments that manage the symptoms of conditions such as cancer or musculoskeletal disease, because of their repeated, long-term use.
Figure: FDA drug approvals by classification, 2020 (courtesy of Nature Reviews; Asher Mullard).
In an attempt to shift profits away from the volume of medication sold, in June 2020 UK Health Secretary Matt Hancock announced the adoption of a ‘Netflix subscription model’. This scheme attempts to tackle the growing global health threat by de-linking payments to pharmaceutical companies from sales, offering guaranteed income for innovative treatments. Similarly, Germany has implemented a process whereby higher prices are awarded for particularly important antibiotics. However, even if incentives such as these help create new antibiotics, another pivotal question remains: how to ensure that existing and new medicines reach patients in low- and middle-income countries. With almost 2 billion people lacking access to antimicrobial treatments (LEDCs being disproportionately represented), failure to improve access to antibiotics will limit efforts to tackle resistance everywhere.
In summary, the rate at which resistant pathogenic bacteria emerge greatly surpasses the rate of antibiotic development. As stated previously, the leading factor behind this problem is the versatile set of methods bacteria use to develop and spread resistance, something exacerbated by overprescription and by misuse in the agricultural industry. Furthermore, the current economic model of the pharmaceutical industry does not provide enough financial incentive for companies to invest in innovations aimed at tackling this problem, leading some governments to turn to an alternative in which they “pay more to use less”.
Clardy, Jon, et al. “The Natural History of Antibiotics.” Current Biology, vol. 19, no. 11, 2009, doi:10.1016/j.cub.2009.04.001.
Dedrick, Rebekah M., et al. “Engineered Bacteriophages for Treatment of a Patient with a Disseminated Drug-Resistant Mycobacterium Abscessus.” Nature Medicine, vol. 25, no. 5, 2019, pp. 730–733., doi:10.1038/s41591-019-0437-z.
Lerminiaux, Nicole A., and Andrew D.s. Cameron. “Horizontal Transfer of Antibiotic Resistance Genes in Clinical Environments.” Canadian Journal of Microbiology, vol. 65, no. 1, 2019, pp. 34–44., doi:10.1139/cjm-2018-0275.
Myszka, Kamila, and Katarzyna Czaczyk. “Mechanisms Determining Bacterial Biofilm Resistance to Antimicrobial Factors.” Antimicrobial Agents, 2012, doi:10.5772/33048.
Plackett, Benjamin. “Why Big Pharma Has Abandoned Antibiotics.” Nature, vol. 586, no. 7830, 2020, doi:10.1038/d41586-020-02884-3.
Reygaert, Wanda C. “An Overview of the Antimicrobial Resistance Mechanisms of Bacteria.” AIMS Microbiology, vol. 4, no. 3, 2018, pp. 482–501., doi:10.3934/microbiol.2018.3.482.
Microbiology Society. “Antibiotics: Microbes and the Human Body.” Microbes and the Human Body, Microbiology Society, microbiologysociety.org/why-microbiology-matters/what-is-microbiology/microbes-and-the-human-body/antibiotics.html.
Verbeken, Gilbert, et al. “Taking Bacteriophage Therapy Seriously: A Moral Argument.” BioMed Research International, vol. 2014, 2014, pp. 1–8., doi:10.1155/2014/621316.