Homeopathy is a practice developed in the 1700s that has grown highly popular over the years. Medicine has changed greatly over time. When homeopathy was developed, bloodletting, the deliberate draining of a patient's blood, was a common treatment. People began to turn to alternative medical practices such as homeopathy, and the practice has grown enormously since then. Believers claim that homeopathy can address numerous health issues, such as allergies, arthritis, migraines, and other common complaints. Some even go as far as to claim that homeopathy can treat serious illnesses like cancer and heart disease. This article will investigate whether these claims have any merit.
What is Homeopathy?
Homeopathy is a form of alternative medicine, a category of treatments that fall outside conventional medical practice. Other examples of alternative medicine include acupuncture, herbal medications, and energy therapy. Alternative medicine is usually used in addition to conventional treatment, since it is rarely a worthy replacement for medication prescribed by a doctor. Because alternative practices differ so sharply from standard care, much of the medical community remains skeptical of them. Supporters of homeopathy are among those who argue for wider acceptance of alternative medicine as a viable treatment option, and some people have had success with alternative treatments such as homeopathy. When considering alternative medicine, keep in mind the benefits and risks associated with the treatment, as well as any possible side effects.
Homeopathy operates on the principle that "like cures like" and that the body will eventually heal itself. In other words, a substance that causes symptoms in a healthy person may, in very small quantities, be used to treat an illness with identical symptoms. The goal is to activate the body's immune system. Homeopathic practitioners are known as homeopaths, and they create numerous remedies to try to treat their patients. Homeopaths aim to build a personalized treatment plan for each patient, a level of individual attention not always found in conventional practice. During a consultation, a homeopath will ask you a series of questions about your cognitive, emotional, and physical health. They will then give you the remedy that best fits all of your symptoms, so the therapy is customized for you. To create remedies, homeopaths use a process called potentization, in which ingredients are repeatedly diluted in water or alcohol. The notion is that diluting and agitating the ingredients activates and amplifies their curative properties. One part of the solution is combined with nine parts of water, diluting the solution tenfold. This same step of nine parts water to one part solution is repeated over and over until the desired potency is reached: doing it twice yields a 2X potency, three times a 3X potency, and so forth. Many homeopathic remedies are made to very high potencies, and most are so diluted that not a single molecule of the active component is likely to remain. Over-the-counter homeopathic treatments are also available at certain drugstores and pharmacies; the manufacturer determines the dose and composition of these products.
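The dilution arithmetic above can be sketched in a few lines of code. This is an illustrative calculation, not something from the article: the starting amount (a full mole of active ingredient) and the potency levels shown are assumptions chosen to demonstrate why very high potencies leave essentially none of the original substance behind.

```python
AVOGADRO = 6.022e23  # approximate number of molecules in one mole

def molecules_remaining(start_moles: float, potency: int) -> float:
    """Expected molecules of active ingredient left after `potency`
    successive 1-in-10 ("X" scale) dilutions: 1 part solution + 9 parts water."""
    return start_moles * AVOGADRO * 0.1 ** potency

# Assume a generous starting sample containing one full mole of ingredient.
for x in (6, 12, 24, 30):
    print(f"{x}X potency: ~{molecules_remaining(1.0, x):.2g} molecules left")
```

Even with this generous starting assumption, by 24X the expected count drops below a single molecule, which matches the article's point that highly potentized remedies are unlikely to contain any of the active component at all.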
Homeopathy and The Placebo Effect
Some people believe that homeopathy helps them, and there have also been reports of it working in young children. However, much of the scientific community attributes such results to the placebo effect. The placebo effect is when an improvement occurs, but not because of the treatment the person credits. In other words, someone may believe a remedy is healing them when their body is healing on its own. Medical placebos are usually identified by their lack of an active ingredient, and scientists argue that a treatment containing no active component should have no direct effect on the body. Instead, symptoms improve because you believe the medication is effective rather than because it is; this belief can cause the brain to produce chemicals that temporarily alleviate pain or other symptoms. Because many homeopathic remedies are diluted until barely any trace of an active ingredient remains, homeopathy is closely associated with the placebo effect and is commonly believed by medical professionals not to work. Homeopathy also depends heavily on time: given enough of it, the body will naturally cure itself of most illnesses, so a patient may believe their health improved because of homeopathy when, in actuality, the body healed itself.
Still, certain aspects of homeopathy are distinct and valuable. One of these is the level of personalization in the treatment: a session with a homeopath is usually one on one and may last several hours. This experience creates a level of empathy that is not always present in other forms of care, and the sense that someone cares about your wellbeing can greatly affect a patient. In conclusion, homeopathy may not work from a medical standpoint, but the level of personalization in homeopathic treatment is admirable.
Dietary supplements are advertised as nutritional additions to your diet. They are products containing dietary ingredients such as minerals, vitamins, amino acids, enzymes, botanicals, and others, and they are available in pill, gummy, or liquid form. Their containers are usually labeled "dietary supplement" on the front panel, and the label also displays the active ingredients, instructions for use, and the serving size. Generally, dietary supplements help an individual obtain essential nutrients, especially when their usual diet lacks variety, and they can help lower the risk of some health problems. Supplements should not, however, be used to replace meals. This article explores dietary supplements further: their various uses, who takes them, and their pros and cons.
Who Uses Dietary Supplements?
Roughly half of the United States population takes at least one supplement daily. According to CDC data from 2017–2018, about 57.6% of adults aged 20 and over had used a dietary supplement in the past 30 days: 50.8% of men and 63.8% of women. For both men and women, the use of dietary supplements increased with age. Among men, use rose from 35.9% at ages 20–39 to 67.3% at age 60 and older; among women, it rose from 49.0% at ages 20–39 to 80.2% at 60 and older. Overall, the data show that dietary supplement use is generally higher among women.
People take dietary supplements for many reasons. Aside from maintaining overall health and wellness, some take them to get needed nutrients; others take them for energy, bone health, or heart health. Pregnant women, or those trying to become pregnant, may take prenatal vitamins such as folic acid, the supplemental form of folate (vitamin B9). Taking 400 micrograms of folic acid daily helps support the growth of genetic material and protects against birth defects.
People on restricted diets, such as vegans, or people with food allergies may also need to take dietary supplements, which provide nutrients their bodies may otherwise find hard to get, whether because of allergies or because of the diet itself. Nutrients such as calcium and vitamin D are great supplements for older adults who need them for bone strength. Older adults may also benefit from vitamin B-12, which helps maintain red blood cells and nerves, and vitamin B-6, which helps form red blood cells.
What Are The Benefits And Side Effects Of Their Usage?
Once again, dietary supplements are useful for getting adequate amounts of essential nutrients into the body. Their role can be vital to a healthy lifestyle if the consumer is well informed. They can also be used to maintain general health, support the immune system, and support athletic and mental performance.
Not following the instructions printed on a dietary supplement's container, or your doctor's advice, can lead to negative side effects. These include an upset stomach, heartburn, gas, and bloating. More serious consequences include headaches, nausea, internal bleeding, liver damage, and more. A study published in The New England Journal of Medicine found that adverse effects of dietary supplements account for about 23,000 emergency room visits per year, establishing that although supplements are meant to benefit a person, they can still be extremely harmful if not used correctly.
More Information Concerning Supplements
The United States Food and Drug Administration (FDA) regulates food, vaccines, cosmetics, drugs, medical devices meant for human use, and tobacco products. Dietary supplements are also regulated by this federal agency, although under FDA guidelines they are treated more like food than medication. Dietary supplement makers do not have to prove their products' effectiveness or safety before selling them on the market. Manufacturers are, however, supposed to follow good manufacturing practices (GMPs) to confirm that their supplements meet specific quality standards. A seal of approval from an organization that tests supplements, such as US Pharmacopeia, ConsumerLab, or NSF International, lets you as a consumer know you are getting a quality product. Look for this seal on the supplement's container.
Dietary supplement makers are not allowed to claim that their product treats, cures, or reduces symptoms or prevents disease, and they must add a disclaimer to the label if such claims are made. Over-the-top claims, such as a product being "Completely Safe!", "Totally Natural!", or a "Miracle Cure", are warning signs that call for further investigation. Contact your doctor, pharmacist, or the manufacturer to ask which studies support the extravagant claims made about the supplement.
In conclusion, dietary supplements are nutritional additions to a person's diet. About half of the United States population takes one daily, with women making up the larger share of users. Supplements can be great instruments for leading a healthy lifestyle but can also be dangerous if used incorrectly. Despite all their benefits, it is still recommended to aim for a varied diet rather than relying on supplements.
At the end of the day, though, how one chooses to take control of their health is entirely up to them. If you are contemplating taking supplements, however, you should consider the dosage, frequency, and potential health risks. Clearly following the instructions on the container label is paramount, and you should always consult your doctor with any questions or concerns to ensure you receive the best health care possible.
Neonates are newborn infants that are four weeks old or younger. These first four weeks of an infant’s life are when the infant is at highest risk of dying. At this stage in life, neonates do not have fully developed immune systems and are more susceptible to different infections. Of the 5 million infant deaths that occur each year, 1.5 million are due to infections, making it important to understand the developing immune system of neonates (Tregoning).
Part of understanding the immune systems of neonates is first understanding the transition from the sterile womb to an unsterile environment during birth. The fetal immune system is suppressed in the womb in order to limit interference with the mother's immune system. While this provides stability before birth, the arrangement changes the moment the newborn enters the unsterile environment of the outside world. In addition to being newly exposed to bacteria, the previously suppressed fetal immune system is antigenically inexperienced: it does not yet have experience responding to different pathogens, which increases the infant's susceptibility to infections ("Development of the Immune System"). Therefore, after birth, neonates depend on "passive immunity" for protection while their own immune systems develop.
Neonates depend on antibodies from the mother for protection from different antigens. This is called “passive immunity,” as antibodies from the mother are passed down to the baby passively through the placenta, rather than the antibodies being created by the infant themselves. Most of the antibodies produced by the mother’s immune system cross the placenta during the third trimester, which ensures that there are high levels of antibodies after birth. This also explains the low levels of antibodies in premature babies; the timing of the birth does not allow for the same amount of antibodies to be transferred, making premature newborns more vulnerable to infections compared to full-term newborns. Additionally, breastfeeding is another form of passive immunity that allows for the passing of antibodies to infants (“Development of the Immune System”).
Passive immunity provides only short-term protection for neonates. The antibodies transferred through the placenta or breast milk are generally immunoglobulin A or G (IgA or IgG), and some of these maternal antibodies protect against measles, mumps, rubella, and other diseases ("Immunity: Active, Passive, and Delayed"). Antibodies passed from mother to child, whether through the placenta or breast milk, protect only for the first few months of the infant's life. This window allows the infant's own immune system to develop and start working while the infant remains protected ("Development of the Immune System").
The Immune System At This Time
Newborns have a limited quantity of phagocytic cells (types of white blood cells such as neutrophils and macrophages), which are important for innate immunity (the nonspecific immune response immediately after the appearance of an antigen). During an infection, the immune system’s response will be limited by the quantity of neutrophils and macrophages. As a result, the pathogen will commonly overtake the immune system, and the infant will require medical care (“Development of the Immune System”).
In addition, there is adaptive immunity, the specific immune response that occurs after innate immunity fails; it is the system that protects the body by remembering and destroying pathogens. Because the newborn's immune system is inexperienced, every pathogen is new, so the immune response takes longer to develop. The fact that every pathogen is new also means that there are no memory immune responses, which affects antibody production ("Development of the Immune System"). The process of producing antibodies is less efficient in newborns than in adults. Some B cell (a type of white blood cell) responses require T cells to produce antibodies, and the interactions between T cells, which attack specific antigens, and antigen-presenting cells, which present antigens for recognition, are less effective and less stimulating in newborns. T cells also produce lower levels of cytokines (which regulate the immune response). Furthermore, the proportions of different T cell types differ in newborns: for instance, there are lower levels of cytotoxic T cells, which are responsible for killing virus-infected cells. All of these factors reduce antibody production. For B cell responses that do not involve T cells, B cells recognize the repeating proteins on the surface of a pathogen; this response is also reduced in newborns, resulting in increased susceptibility to bacteria ("Development of the Immune System").
The reduced immune response of newborns affects the efficacy of vaccines, as there is reduced recognition of vaccine antigens as foreign. Therefore, there are also fewer protective memory responses induced by vaccines, making vaccines themselves less effective in newborns compared to adults with developed immune systems (Tregoning). However, this does not mean that early vaccinations are ineffective. They still aid in protecting against diseases, and they become more effective over time as the newborn’s immune system develops (“Development of the Immune System”).
In fact, as the protection from passive immunity fades over a number of months, vaccinations are required to maintain protection against different antigens. The fading of maternal antibodies is also why there are certain required vaccinations after set periods of times; for instance, the MMR vaccine is required after 1 year of life (“Immunity: Active, Passive, and Delayed”).
The immune systems of neonates are, unsurprisingly, different and less developed than those of adults. As a result, newborns depend on passive immunity (antibodies passed down through the placenta or breast feeding) for protection against infections. The processes in the immune system itself are also different in newborns, which affects the immune system’s capabilities. The increased susceptibility to infections in newborns makes it all the more important to understand the neonatal immune system.
“Immunity: Active, Passive, and Delayed.” World of Microbiology and Immunology, edited by Brenda Wilmoth Lerner and K. Lee Lerner, Gale, 2007. Gale in Context: Science, link.gale.com/apps/doc/CV2644650228/SCIC?u=mlin_m_newtnsh&sid=bookmark-SCIC&xid=bd032b6a. Accessed 31 May 2021.
Factors such as peer pressure, depression, and exposure to abuse and trauma can lead to drug use. Using drugs even once, telling yourself "just this one time," can lead to addiction, a brain disorder in which a person keeps using substances such as alcohol and drugs despite knowing the life-threatening consequences. This review focuses on the main sex differences in the behavioral effects and underlying neural mechanisms of psychostimulants such as cocaine and opioids such as morphine and heroin. Data presented in the review support the conclusion that males (ages 12–25) are more likely to abuse or be dependent upon marijuana or alcohol, while females (ages 12–25) are more likely to abuse or be dependent upon cocaine and psychotherapeutic drugs. As the review notes, a recent cross-cultural analysis of sex/gender differences in substance use disorders shows major diversity across cultures: men are more likely to have access to substances than women, and this difference in accessibility appears to account for much of the gender difference in the prevalence of substance use. The goal of the review is for the audience to understand the who, what, when, where, and why of the differences and similarities between males and females; after all, we cannot assume that females are simply males with the letters "f" and "e" attached to the front. Only by advancing this research further can addictive disorders be successfully prevented and treated in the entire population.
This paper displays a detailed, informative, and deep understanding of the variations in drug use between males and females. It not only lets readers see these differences between the sexes but also shows the effects of addiction on chromosomes, autosomes, and hormones in females and males. The paper's sections focus on sex differences in the four neural systems that play the main roles in the addictive process: dopamine, MORs, dynorphin, and BDNF. Neural systems are structures of cells, tissues, and organs that regulate the body's responses to internal and external stimuli; they include the brain, spinal cord, nerves, ganglia, and parts of the receptor and effector organs. It was an interesting read, and the detail in the paper deepened my knowledge of the contrasts in brain activity between the sexes. This research contributes medical knowledge to our society by establishing that the process of addiction differs between males and females. It identifies these dissimilarities by pinpointing the gaps in knowledge about how neural systems interact with each sex to influence addictive behavior, emphasizing throughout that even subtle sex effects matter and that male and female drug use data should therefore be recorded regardless of the outcome.
Evaluation of Methods Used
The techniques used in this paper are effective and can be applied to other modern problems related to neural systems and addiction. Addiction is very common in today's society; these addictive brain disorders are a real problem across the entire population, which at the last U.S. census count (2010) was 50.8% female. The information and detail in this paper, combined with future research, can bring us closer to preventing and treating addiction. The researchers and writers highlighted the gaps in knowledge and pointed toward new research into the mechanisms specifically mediating addiction in each sex. The paper's methods led to conclusions about the differences and similarities between the sexes in addiction, the key players in the addictive process, and the development of the three-stage addiction cycle: 1. initiation/escalation, 2. withdrawal/negative affect, and 3. preoccupation/craving. The researchers know a long journey lies ahead to find addiction treatments that are effective for each sex, and they know more research needs to be done. Overall, the paper does draw a conclusion about the connection between neural systems and addiction, but many questions remain unanswered and call for further research into this brain disorder.
One concern arising from this study is the relationship between animal and human findings: a treatment identified through animal testing may work in animals, but translating it to humans brings additional obstacles. The number of people of each sex who are addicted to drugs varies, and comparing data from past years to now shows that the numbers have declined in some situations and increased in others. This raises the worry of ensuring that treatments work for both sexes, since in some cases more females are addicted and in others more males. It is important to find treatments for both sexes, not just one, and it is also important that all addiction reports are recorded; every sex, and every person, counts toward the journey of preventing and treating this disorder. The paper was published in the right journal: neuropsychopharmacology is the study of the effects of drugs on the nervous system and their consequences for the mind and behavior. The journal focuses on drugs and their connections to the nervous system, which is exactly what this research paper displays by connecting addiction in each sex with neural systems. The journal's readers and audience will care about this research because it correlates with the journal's main subject, drugs and the nervous system.
Problems and Admirations
The methods and evaluation techniques in this paper are very in-depth; there is extensive information and discussion about every aspect of drugs' influence on each sex's hormones, chromosomes, and more. This thoroughness grows our knowledge, but it personally made it hard for me to comprehend all of the information at once. The central idea is stated clearly, that there are and always will be differences and similarities in addiction between the sexes, but the evidence given to back it up is almost too dense. All the answers are laid out in the paper for readers to find, which leaves little room for curiosity or research of our own. The paper is almost too long; I think the authors could have been broader and kept it shorter and simpler, which would let the audience interpret more and ask more questions. I appreciated the connections made about organizational versus activational influences on each sex, which let readers dig deep into the writers' thought process. The future of addiction research after this paper is long, and the researchers themselves raise biological questions from their work, such as "Are there sex differences in the efficacy of pharmacotherapy treatments (i.e., methadone, buprenorphine, naloxone) for opioid addiction?" I think the publication of this paper will inspire more researchers to join the journey toward finding addiction treatments and to answer the biological questions that arose from its framework. Future researchers will build on today's knowledge to answer the questions many researchers still have about addiction and to approach its decline.
Living in today's advanced world gives us access to new ideas, such as drugs with great potential to benefit different diseases, including cancer. This research article focuses on the great potential of cancer immunotherapy using various immuno-oncology drugs. The use of nucleic acid therapeutics against different diseases, specifically cancer, advances day by day, showing how different drugs have unique abilities and how those abilities have grown in the world we live in today. Through trial and error, researchers have identified the problems each therapeutic faces, for example, negative charge and hydrophilicity (attraction to water), which hinder delivery into cells. The paper describes drug delivery systems that can safely release drugs and target specific cells. Drugs that can be used to target specific tissues and cells include small interfering RNA (siRNA) and messenger RNA (mRNA). Delivery systems discussed include nanoparticulate delivery of nucleic acid therapeutics using micelles and the delivery of cGAMP using liposomes (liposomal cGAMP versus free cGAMP). The experiments and research show that these nucleic acid therapeutics hold a remarkable amount of potential for a vast assortment of diseases. Each drug has unique characteristics, such as changing gene expression and regulating protein function for immune responses, but each also faces challenges that block its delivery system from reaching the targeted cells effectively. Every day, new delivery systems are created and existing therapeutics are modified, which should allow us to address the problems connected to treatment in the immunotherapy of cancer.
Through reading, analyzing, and critiquing this paper, many reasons emerge for why it makes a good contribution to society. The paper's specificity pushes readers to dig deep and think. Unfamiliar words and deep topics that take time to understand make readers ask themselves questions and leave them wanting more, prompting them to research the topic further. The paper discusses the different nucleic acid therapeutics, their advanced delivery systems, and their positive functions as well as their challenges, letting the audience see these therapeutics from all angles so they can add more to the research. Another reason the paper is a good read is that it provides strong evidence of how science and medicine have advanced together: it includes a visual timeline dating back to 1995, when CpG-dependent stimulation was first described. The paper contributes to the medical community by bringing in new knowledge about different nucleic acid therapeutics and their delivery systems, displaying the pros and cons of each drug and the delivery systems that can be used to target specific diseases. Other researchers can use this information to conduct their own experiments on therapeutics and to judge which delivery system will work best for the disease they are targeting. The paper presents factual evidence about which therapeutics have potential for cancer immunotherapy, leading other researchers to build on it in further experiments.
Evaluation of Methods Used
The methods utilized in this paper allow a variety of diseases to be treated; the research was open-ended and shows the potential of each therapeutic and how it can be delivered to target specific tissues and cells. By listing many delivery system options and many characteristics for each drug, the paper enables other researchers to solve other problems using this research. Because the work addresses the immunotherapy of cancer with nucleic acid therapeutics in general, it could apply to a wide range of cancer types. The researchers' methods provide evidence that these therapeutics could work in cancer immunotherapy, showing how the drugs can be used and which ones should be chosen for treatment. The treatment of cancer evolves every day as medical researchers look for a cure, and this research could play a big role in finding an effective treatment. As the paper notes, the FDA is starting to approve more studies involving nucleic acids and immunotherapy, indicating that this approach will have a significant impact on treating different types of cancers and diseases.
There are negatives and positives to everything; there are side effects, but the effects can also be life-changing. This paper provides ample evidence of the success of these therapeutics in cancer immunotherapy, but concerns remain: the side effects on the human body after immunotherapy with these drugs, the survival rate, and the likelihood of eliminating all or most of the cancer. Could these therapeutics lead to a new disease in the human body, removing one cancer only for another to develop? There may be strong evidence of progress in using these therapeutics against different diseases, but challenges still exist, such as whether the drug could be toxic to the human body, how its quantity compares with its cost, and what happens to the therapeutics inside the body after treatment. The paper was published in the right journal for an audience interested in this topic, and it shows how the discovery of nucleic acid therapeutics can work toward advancing medicine and curing a life-threatening disease.
Problems and Admirations
In this paper, I enjoyed the different points of view on each drug and delivery system. The method used allows readers to think deeply about each topic and each step on the journey toward treating cancer. I admire the researchers' evaluation techniques: they went in-depth and provided an outstanding amount of information, hooking readers and prompting them to keep reading and researching this topic. The researchers brought hope and showed how today's advances in medicine can change anyone's life. After reading the paper, I understood the writers' thought process and the different ideas they were considering, and it made me grateful for the new, innovative ideas that can save someone's life. Once published, this paper will show people a future full of worthy ideas, and it will encourage others to take the small ideas and thoughts in their heads and turn them into something beautiful. This research will contribute to other studies and bring us one step closer on the journey to finding a cure for cancer.
Machine learning in modern cancer treatment is a fast-growing field that promises to produce many scientific breakthroughs in the future. This article discusses both the promises and the perils that come with applying artificial intelligence to cancer treatment. In cervical cancer treatment, this growing technology can be used to assist doctors in cancer detection as well as to predict patient survival rates. In lung cancer treatment, artificial intelligence platforms are again used to make predictions about patient health in addition to analyzing images for a more accurate prognosis. Finally, machine learning is also able to predict the survival rate and metastasis for different forms of brain cancer and to provide medical students with realistic surgical simulations of how to operate. However, while there are a multitude of promises for the future of AI in medicine, integrating new technologies into a previously established field does have disadvantages. The constant evolution of software and technology means that operators require continual training to be able to handle the tasks. Furthermore, the lack of doctor-patient feedback can take a negative toll on patients' mental health and privacy. The automation of various processes also comes at the cost of the jobs of people who originally performed these tasks manually. Therefore, when implementing AI in the medical field it is important to acknowledge the great promise the technology holds, but also to weigh the negative effects that may result from its application.
One of the most important parts of practicing medicine is decision-making, a skill that relies heavily on judgment. Oncology, the medical specialty concerned with cancer treatment, is a field where decision-making is incredibly important because of unpredictable responses to treatment and changes in a patient's condition. This is where artificial intelligence (AI) comes into play: it is a promising tool that can objectively interpret cancer images and predict a cancer patient's outcome, essentially mimicking the cognitive functions of humans. Research has shown that AI has the potential to exceed human performance in certain areas of medicine. Multiple useful applications of AI will be discussed in this paper, including two main ones. The first is detection, or determining which objects are located within the body by analyzing images. The second is characterization, separating tumours into groups based on physical appearance (Bi et al., 2019). Both of these tasks are a crucial part of making clinical decisions.
Machine learning (ML) is a subset of AI (Fig. 1) that has been widely used in current healthcare applications because it uses data to train computational systems without the need for explicit programming. These computer programs can learn and improve from experience, unlike traditional programs that require specific instructions at each step, which makes them incredibly useful in the field of science (Ahuja, 2019). This allows machines to make predictions based on patterns they have recognized: with ML, a computer can use previously labeled data, or even patterns found in the data itself, to make predictions. In particular, ML excels at finding indistinct patterns in large datasets that are undetectable to humans. ML also enables an algorithm to perform a task such as making medical decisions or driving a car while correcting its own mistakes. Deep learning is a subset of ML that uses structures similar to the brain's neural networks to identify patterns within large datasets. Convolutional neural networks (CNNs) are a further subset that will be discussed here and are generally applied to the classification and analysis of patient scans (Hashimoto, Rosman, Rus, & Meireles, 2018).
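To make the idea of learning from labelled data concrete, here is a minimal sketch (not taken from any of the cited studies) of one of the simplest possible learners, a nearest-centroid classifier; the feature values and labels below are invented purely for illustration:

```python
# Minimal illustration of supervised learning: a nearest-centroid
# classifier "learns" one average example per class from labelled
# data, then predicts the label of new, unseen measurements.

def train(samples):
    """samples: list of (features, label). Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Hypothetical training data: (cell size, cell roundness) -> label
labelled = [([2.0, 0.9], "benign"), ([2.2, 0.8], "benign"),
            ([5.1, 0.3], "malignant"), ([4.8, 0.4], "malignant")]
model = train(labelled)
print(predict(model, [5.0, 0.35]))   # a new, unlabelled sample
```

No rule for separating the classes is ever written by hand; the decision boundary emerges entirely from the labelled examples, which is the essential contrast with traditional step-by-step programming.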
In the future, AI analysis has the potential to work its way into all parts of patient care. Before surgery, it can help track a patient's activity and access electronic health records. During surgery, it could assist the surgeon in making quick decisions based on the patient's vital signs. After surgery, it can continue to collect and analyze patient data (Hashimoto, Rosman, Rus, & Meireles, 2018). This paper will discuss the application of AI to cervical, lung, and brain cancer treatment, including the use of detection machines, segmentation techniques, and prediction algorithms, and will weigh the challenges and social aspects of introducing AI to the medical field.
The Application of AI in Cervical Cancer Treatment
When dealing with AI and cancer detection, one of the most prominent issues is the invasiveness of diagnosis as well as how many cases are missed. The first of the cancers discussed in this paper is cervical cancer. While it can be cured if found at an early stage, many women die every year because the cancer was not detected early enough and symptoms did not appear until it was too far advanced to treat. Cells in the cervix are either squamous cells, which when infected cause squamous cell carcinoma, or glandular cells, which cause adenocarcinoma (P & M, 2018, p. 1). Because cervical cancer is difficult to detect and hard to treat once it has progressed too far, automated machines that can detect it could significantly improve the survival rate of women suffering from the disease.
A study performed in 2018 proposed an automatic, AI-assisted system for detecting cervical cancer in patients. The first step was preprocessing, which involved enhancing the contrast of a cervical cancer image for better visibility using Oriented Local Histogram Equalization. Features such as roundness, number of sides, and circularity were then extracted from the image and used to train the neural network; these features were extracted to discriminate between a healthy cervical image and a cancerous one. The neural network classifier would then identify a cervical image as either benign or malignant by comparing it to the features used for training. For classification of the tumour, a feed-forward backpropagation neural network was used to push the classification to the highest possible accuracy. This type of network is built from layers. The input layer accepts the elements of the extracted features. Three "hidden" layers in between, all with different functions, are also used, each of which has a different number of neurons, or calculated inputs from the previous layer; an example would be taking the average of all results from the previous hidden layer. The output layer is the response of the neural network and classifies the image as either normal or cancerous (P & M, 2018, p. 1).
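The layered structure described above can be sketched in a few lines. This is a toy illustration, not the network from the study: the weights are hand-picked rather than learned by backpropagation, and the feature values are invented:

```python
import math

# A toy forward pass through a feed-forward network: the input layer
# receives the extracted features, a hidden layer combines them through
# weighted sums and a sigmoid activation, and the output layer reports a
# score interpreted as "benign" vs "malignant".

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """Each neuron: sigmoid of a weighted sum of the previous layer."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def classify(features, hidden_w, hidden_b, out_w, out_b):
    hidden = layer(features, hidden_w, hidden_b)   # hidden layer
    score = layer(hidden, out_w, out_b)[0]         # output layer
    return "malignant" if score >= 0.5 else "benign"

# Hypothetical scaled features: roundness, circularity, number of sides
features = [0.3, 0.2, 0.9]
hidden_w = [[2.0, -1.0, 1.5], [-1.5, 2.5, 0.5]]   # hand-picked weights
hidden_b = [0.0, -0.5]
out_w = [[1.8, -2.2]]
out_b = [0.1]
print(classify(features, hidden_w, hidden_b, out_w, out_b))
```

In the actual study, backpropagation would adjust these weights repeatedly until the network's outputs match the training labels as closely as possible.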
Another important part of screening is finding cancerous lesions in images, and segmentation is a difficult but necessary part of this. The most common test for cervical cancer screening is cervical cytology, or the pap smear test, which screens for malignant tumour cells in the cervix. A positive cytology test can show different types of abnormal epithelial cells, such as atypical squamous cells or atypical glandular cells. Segmentation is the process of separating masses in an image and is the most important step of cytology, as it identifies cells based on their structures and morphology. In the majority of cervical cancer cases, cell segmentation is followed by abnormality classification, which is frequently performed by feature-based machine learning algorithms as well as deep-learning approaches. Feature-based classification is based on feature extraction; common features include the size, shape, colour, and texture associated with malignant tumours. Once feature extraction is complete, multiple different algorithms can be used for classification. A radial basis function support vector machine was developed that could classify image blocks into six different categories, including blocks with many white cells, blocks with normal epithelial cells, and blocks with suspicious epithelial cells. The researchers noted that the blocks with suspicious cells had considerably different texture and colour features, which set them apart from the others (Conceição, Braga, Rosado, & Vasconcelos, 2019, p. 21). The support vector machine differs from the layered neural network because instead of passing inputs through a series of layers, a single function separates or classifies them into multiple categories. This method of classification skips the segmentation step entirely and saves a lot of time.
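The contrast drawn above, a single separating function instead of a stack of layers, can be sketched as the decision function of an RBF-kernel support vector machine. This is a hypothetical illustration: the support samples, labels, and weights below are invented, whereas a real SVM learns them from training data:

```python
import math

# Sketch of an RBF-kernel SVM decision function: a new sample is
# compared against stored "support" samples using the RBF kernel,
# and their weighted votes are summed into one decision value.

def rbf_kernel(a, b, gamma=1.0):
    """Similarity that decays with squared distance between samples."""
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * sq)

def decision(x, support, labels, alphas, bias=0.0):
    """Positive -> suspicious block, negative -> normal block."""
    return sum(a * y * rbf_kernel(s, x)
               for s, y, a in zip(support, labels, alphas)) + bias

support = [[0.2, 0.1], [0.8, 0.9]]   # one normal, one suspicious block
labels = [-1, +1]                    # -1 = normal, +1 = suspicious
alphas = [1.0, 1.0]                  # learned weights in a real SVM

new_block = [0.75, 0.85]
print("suspicious" if decision(new_block, support, labels, alphas) > 0
      else "normal")
```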
Artificial neural networks can also act as unsupervised classifiers, meaning they do not require labelled input data for training, and can study cell images to determine their level of abnormality.
Deep learning classification in the form of a convolutional neural network can be performed without segmentation. However, this type of network requires far more computational time and large amounts of labelled data, making it impractical in clinical settings (Conceição, Braga, Rosado, & Vasconcelos, 2019, p. 22).
With the help of such techniques, the survival rate can be increased and the chance of complications decreased. These two measures can also be predicted by artificial intelligence to ensure proper treatment and patient comfort. In an experiment testing survival rate prediction, a data set was collected from 102 patients, all with cervical cancer, who had already undergone initial surgical treatment. The researchers identified 23 demographic variables, including age, BMI, and hormonal status, and 13 tumour-related parameters, including tumour size and the number of lymph nodes, to direct the experiment. The computational intelligence methods that were applied had not previously been used to predict patient survival for cervical cancer treated by radical hysterectomy. Six of these were classifiers: the probabilistic neural network (PNN), multilayer perceptron network (MLP), gene expression programming (GEP), support vector machines (SVM), radial basis function neural network (RBFNN), and the k-means method. The prediction ability of these models was determined by measuring accuracy, sensitivity, and specificity. The best results in predicting 5-year overall survival in cervical cancer patients who had undergone radical hysterectomy came from the PNN model (Obrzut, Kusy, Semczuk, Obrzut, & Kluska, 2017, p. 4). The PNN model, similar to the feed-forward backpropagation neural network mentioned earlier, is made up of an input layer, a pattern layer, a summation layer, and an output layer. The PNN model, along with other AI methods, can be applied to various medical classification jobs (Fig. 2).
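The three measures used to compare the models can all be computed from counts of true and false positives and negatives. A minimal sketch, with fabricated labels purely to exercise the formulas:

```python
# Accuracy, sensitivity, and specificity from predicted vs. actual
# binary outcomes (1 = survived 5 years, 0 = did not).

def evaluate(actual, predicted, positive=1):
    """Return (accuracy, sensitivity, specificity) for binary labels."""
    tp = sum(1 for a, p in zip(actual, predicted)
             if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted)
             if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted)
             if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted)
             if a == positive and p != positive)
    accuracy = (tp + tn) / len(actual)
    sensitivity = tp / (tp + fn)      # true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    return accuracy, sensitivity, specificity

# Made-up outcomes for ten hypothetical patients
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
predicted = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
print(evaluate(actual, predicted))
```

Ranking the six classifiers then amounts to computing this triple for each model's predictions on the same patients and comparing the results.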
The prediction of complications occurring during or after surgery is also vital to determining a patient's chance of survival. One study was performed on 107 individuals with cervical carcinoma who had undergone surgery, with a cervical biopsy taken to determine an AI algorithm's ability to diagnose cancer. Complications around the time of surgery were evaluated both during and after the operation. The gene expression programming (GEP) algorithm, which creates and evolves computer programs, was used for this study. The GEP was compared with the multilayer perceptron (MLP), the radial basis function neural network (RBFNN), and the probabilistic neural network (PNN), all of which are feed-forward neural networks. Each of the tested models was ranked based on its specificity, accuracy, and sensitivity; the highest accuracy was found in the MLP neural network. Complications around the time of surgery occurred in 47 patients, although most were minor complications that did not severely harm the patients or put their lives in danger. More serious complications were found in 7 of those patients and included pulmonary embolism and gastric ulcer rupture (Kusy, Obrzut, & Kluska, 2013, p. 4). This study shows that it is imperative to identify any risk factors of surgery and choose the appropriate course of treatment as soon as possible, because if procedures to remove cancerous tissue are postponed, the patient's chance of survival is likely to decrease.
The Application of AI in Lung Cancer Treatment
Like cervical cancer, lung cancer is life-threatening; in fact, it is one of the leading causes of death in the world, so accurate diagnosis and treatment planning are extremely important for a patient's survival. Recent breakthroughs in artificial intelligence, specifically deep learning algorithms that can solve complex problems by analyzing images, are giving scientists hope. Researchers in one study developed a deep learning model to aid lung cancer diagnosis and reduce the workload of pathologists. A convolutional neural network was trained to classify small patches of a histopathological image of a lung as either malignant or benign and had an accuracy rate of close to 90% (Wang et al., 2019, p. 8). This method would enhance the diagnosis of lung cancer by allowing for incredibly fast tumour detection when the region being studied is relatively small. Aside from diagnosis, prognosis is one of the key parts of cancer treatment: predicting whether a tumour will recur and how long a patient will survive is crucial to determining the proper course of treatment. Wang's team developed another CNN model that could segment slide images along the boundaries of the nucleus. Different features of the nucleus were extracted and used in a model that predicted the chance of recurrence (Wang et al., 2019, p. 9).
Furthermore, scientists have found relationships between a patient's genetic profile, pathological phenotypes, and the genetic mutations that cause tumours. Such biomarkers are evolving and can be a useful tool in helping physicians with the screening and detection of lung carcinoma. An ideal biomarker is one that indicates biological, pathogenic, and pharmacologic processes and responses, and can inform clinical decisions in order to benefit a patient. Additionally, when used for undefined pulmonary nodules, a biomarker should have the ability to anticipate the diagnosis of cancer so that treatment can be administered as soon as possible and overdiagnosis is avoided. Scientists have established a few promising biomarker sources (Fig. 3), such as urine and saliva, that are currently used. Blood is another source that can be used for lung cancer screening, as it can help to identify and study the tumour and the space surrounding it, any metastases, and the patient's immune response. Sputum, which comes from the airway epithelium, can also be used for lung cancer and is able to supply data about molecular changes close to the tumour cells. Autoantibodies are another form of biomarker, developing as a result of tumour formation before any signs appear on images (Seijo et al., 2019, p. 5). These autoantibodies have been discovered in all types of lung cancer, meaning that in the future they could be indicators of the disease. Further studies are examining the rise of newer biomarkers that can be used alongside AI to decrease lung cancer mortality rates. A specific nano-array sensor that runs on artificial intelligence, and has the potential to distinguish benign tumours from malignant ones, was used in a study to diagnose 17 different diseases from exhaled breath samples with an accuracy rate of over 85% (Seijo et al., 2019, p. 8).
Other AI-based prediction models were also able to distinguish malignant tumours from harmless nodules, promising a bright future for AI-based diagnosis.
Deep learning platforms are among the AI tools being considered for fighting lung cancer. Deep learning models allow researchers to extract characteristics from the data that is input, and have many layers and kernels (neurons in the layers between the input and output layers) that let them perform many functions using the extracted characteristics (Wang et al., 2019, p. 5). Deep learning has the ability to recognize complex data patterns with no human input, and systems that use it are not subjective the way human physicians are. A more specific class of deep learning model is the convolutional neural network, or CNN. These models learn features from images and can eventually even predict outcomes. CNNs have been used in classification, segmentation, and detection, learning from histopathological and radiographic imaging and showing great potential in both areas. The automated feature extraction that deep learning models perform is a huge advantage, as manually extracting features from pathology images is very time-consuming when the problem is challenging and complex, or when researchers do not know much about the input data and its relation to the outcomes the model will predict.
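The feature-learning step a CNN performs can be illustrated with a single convolution: a small kernel slides over the image and records how strongly each patch matches it. This sketch applies one hand-written edge-detecting kernel to a tiny fake "scan"; a real CNN learns many such kernels from labelled data:

```python
# Valid 2-D convolution (no padding, stride 1) over a nested-list image.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Tiny fake "scan": a bright region (1s) on the right, dark on the left
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
vertical_edge = [[-1, 1],
                 [-1, 1]]   # responds to left-to-right brightness jumps
print(convolve2d(image, vertical_edge))
```

The output map peaks exactly where the dark-to-bright boundary sits, which is the sense in which a learned kernel "extracts" a feature from the image.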
As with any disease, it is also helpful if physicians are able to predict a patient's chance of survival. A recent study examined medical images and tumour information that could be helpful in prognostication efforts. 1194 individuals with NSCLC, who had been treated with either radiation or surgery, had CT scans taken, and elements that would determine a prognosis, known as prognostic signatures, were detected using a convolutional neural network (Hosny et al., 2018, p. 1). The CNN was highly successful in separating patients based on their chance of mortality. The network was also trained to predict the likelihood of a patient's survival two years after the start of treatment. After the experiment was complete, the scientists dug deeper to better understand the different features detected by the CNN and found the specific areas that had the greatest impact on the platform's predictions. To understand which regions in the CT images were responsible for the predictions made by the neural network, activation maps were created over the final convolutional layer. The intensity of the gradients in this layer determines how important each node actually was for the prediction. Most of what contributed to the prediction came from large areas both within and around the tumour with higher CT density, while areas with lower density, such as vessels, did not contribute very much (Hosny et al., 2018, p. 12). Normal tissues such as bone, which is of higher density, were ignored by the network, as they appeared in most if not all images and had no significance for the tests. All of the actions that such a network takes (extraction, selection, prediction) are automated, with no data to explain why a certain prediction was made, which makes it hard to prepare for failure. Although limitations exist, there are still possibilities for the tools that can be created.
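The activation-map idea described above can be sketched as follows: weight each feature map from the final convolutional layer by the average intensity of its gradients, sum the weighted maps, and keep only positive values. The maps and gradients here are invented numbers, not outputs of the study's network:

```python
# Gradient-weighted activation map, sketched: feature maps whose
# gradients are strong (important for the prediction) dominate the
# resulting heat map; maps with weak or negative gradients fade out.

def activation_map(feature_maps, gradients):
    """Weight each map by its mean gradient, sum, and clip negatives."""
    heat = [[0.0] * len(feature_maps[0][0]) for _ in feature_maps[0]]
    for fmap, grad in zip(feature_maps, gradients):
        cells = [g for row in grad for g in row]
        weight = sum(cells) / len(cells)        # importance of this map
        for i, row in enumerate(fmap):
            for j, v in enumerate(row):
                heat[i][j] += weight * v
    return [[max(0.0, v) for v in row] for row in heat]  # keep positives

# Two invented 2x2 feature maps with their (invented) gradients
maps = [[[1, 0], [0, 1]], [[0, 2], [0, 0]]]
grads = [[[1, 1], [1, 1]], [[-1, -1], [-1, -1]]]
print(activation_map(maps, grads))
```

In the study, overlaying such a heat map on the CT scan is what revealed that high-density regions in and around the tumour drove the prediction.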
An imaging tool with the ability to classify more specific information and identify treatment pathways would be helpful in managing all patients who suffer from NSCLC.
Lung cancer screening in developed countries is generally carried out using LDCT, or low-dose computed tomography. Although LDCT may be the favored pathway for lung cancer screening and detection in the United States and other developed countries, developing countries face challenges that make it harder to integrate technology such as LDCT into routine clinical practice. It is very hard to develop lung cancer screening programs in underdeveloped countries due to the vast number of pulmonary tuberculosis and chest infection cases. These conditions have symptoms similar to lung cancer, such as fever, anorexia, weight loss, and cough; however, individuals with histories of smoking and a hoarse voice tend to be diagnosed with lung cancer (Shankar et al., 2019, p. 7). One of the most harmful consequences of using LDCT is that benign intrapulmonary lymph nodes and non-calcified granulomas are often hard to distinguish from pulmonary nodules, leading to many false positives, and thus to unnecessary radiation exposure that can itself raise the risk of developing cancer (Shankar et al., 2019, p. 7). Solutions to this issue include computer-aided diagnosis systems with more sensitive detection, along with the previously mentioned biomarkers, which can make screening more efficient.
While LDCT is optimal, risks such as radiation and overdiagnosis, along with cost, make it hard for scientists to introduce it at larger scales. AI-based methods have promising applications in imaging and radiology, such as cancer detection and decision support, and the application of AI to pulmonary oncology will open up many pathways for diagnosis and prognosis using clinical, pathological, and morphological features of patient scans.
The Application of AI in Brain Cancer Treatment
Another type of aggressive cancer that is hard to diagnose is brain cancer. When applied to data from MRI scans, AI has great potential in the field of neuro-oncology and is multi-purpose as it can help establish how harmful a tumor is, find invading gliomas, predict the chance of recurrence and survival of patients, assess the physician’s skills, and simulate cranial surgeries to strengthen neurosurgical training.
As in cervical cancer, segmentation is a large part of diagnosing brain tumors through radiomics, and it can be performed by deep learning AI platforms (Rudie, Rauschecker, Bryan, Davatzikos, & Mohan, 2019, p. 3). From these images, different features are extracted, including size, shape, textures, and patterns. Machine learning platforms are then used to find relationships between the features and determine the prognosis of the tumor. MR spectroscopy, which compares the chemical composition of normal brain tissue with abnormal tumor tissue, is used to classify gliomas and glioblastomas into different grades depending on severity, and has the ability to identify regions where lymphocytic cells have invaded the tumor. Once a tumor is found, the current treatment for glioblastoma and gliomas is resection along with radiation or chemotherapy using a medication called temozolomide (Rudie, Rauschecker, Bryan, Davatzikos, & Mohan, 2019, p. 8). However, an effect that radiation and chemotherapy sometimes have is pseudoprogression, an increase in the size of the primary tumor or the appearance of a new lesion.
Machine learning devices that take patient images into account can be used in such instances to predict whether pseudoprogression is likely to occur. Researchers recently performed a study in which an AI system was developed to outline characteristics of cancer cells in tissue grafts from patients, taken from both the primary tumor and brain metastases (tumors in the brain that have formed from the original tumor). A live cell imaging algorithm was combined with AI to study the movement of cells toward areas of damaged tissue and pick out differences between cells with and without brain metastatic potential. The study presented a device that could make calculations and predictions with the help of AI. The platform would be able to use a 3D measurement of cancer cell behavior in a BBB, or blood-brain barrier, model outside the organism and determine which cells have brain metastatic characteristics. The visual differences between cancer cells that can form metastases in the brain and those that cannot are very slight, but the studies that used AI to identify these distinctions showed a large difference in the behavior of cancer cells and normal cells when they came across the BBB, making the AI device a very helpful tool for recognizing pseudoprogressions and potentially predicting them (Oliver et al., 2019, p. 4). However, there are always limitations that come along with medical breakthroughs such as these. While the ex vivo model in this experiment is able to identify differences between cells that can and cannot cross the barrier, the characteristics of cells with metastatic potential are still inconclusive. There are not yet enough detectable cell features to allow an AI algorithm to accurately predict whether a cell will metastasize. Furthermore, a brain cancer patient's brain will have already changed in some way before diagnosis, making it more prone to the formation of metastases (Oliver et al., 2019, p. 8).
Once properly developed, the use of AI to detect cells with the potential to metastasize in the brain will increase survival rates. Artificial intelligence can also be used to predict the chances of survival for patients suffering from cancer. A recent study used an artificial intelligence tool called DeepSurvNet, which is based on neural networks, to determine brain cancer patients' survival rates and sort them into four different classes based on their histopathological images alone. To train the model, researchers used a dataset created from the medical records of brain cancer patients with 4 different types of brain cancer. 4 classes were used to classify patients by the time between their brain cancer diagnosis and death, and multiple regions of interest in the tumors from the imaging slides were allocated to each of these classes. The model was then tested on completely new sets of data from histopathological slides: glioblastoma tissue sections stained with H&E dyes from 9 new patients were analyzed. The device classified most of each patient's samples into a single class, which was anticipated, as the regions of interest are all taken from the same tissue sample (Zadeh Shirazi et al., 2020, p. 9). With the DeepSurvNet classifier, physicians can use the differences between tumors associated with different lengths of survival to create specialized treatments and significantly decrease patient mortality.
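The observation that most of a patient's regions of interest fall into one class suggests a simple majority-vote aggregation, sketched below. The four survival-time bins and the per-region predictions are assumptions made for illustration; the paper's actual class boundaries may differ:

```python
from collections import Counter

# DeepSurvNet-style aggregation, sketched: each region of interest from
# a patient's slide receives a predicted survival class, and the patient
# is assigned the class most of their regions agree on.

SURVIVAL_CLASSES = {0: "0-6 months", 1: "6-12 months",
                    2: "12-24 months", 3: "24+ months"}  # assumed bins

def classify_patient(roi_predictions):
    """Majority vote across a patient's regions of interest."""
    most_common_class, _ = Counter(roi_predictions).most_common(1)[0]
    return SURVIVAL_CLASSES[most_common_class]

# Nine hypothetical region predictions from one glioblastoma slide
print(classify_patient([2, 2, 2, 1, 2, 2, 3, 2, 2]))
```

The few dissenting regions are outvoted, mirroring how the classifier placed most samples from one tissue section into a single class.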
In addition to being hard to diagnose, neurological cancer is such a rare condition that doctors do not get to see many patients with it and therefore lack training. Artificial intelligence that can deliver feedback based on a user's touch has the potential to create a realistic environment for trainees to practice their surgical skills without having to operate on real patients. In particular, surgical simulations can be used in training for neurosurgery, as the tasks required in the field are very technical and must be performed under large amounts of pressure, since one mistake could lead to severe consequences. A study on the Virtual Operative Assistant shows the benefits of using AI to conduct training that tests cognitive skills and determines the level of psychomotor expertise an operator possesses through a surgical simulation. During the experiment, 50 participants with differing levels of expertise were recruited and classified into two groups: skilled and novice. The classification was completed after all of the participants were asked to complete a complex virtual reality simulation in which they had to remove a brain tumor located beneath the pia mater, in subpial tissue (a type of connective tissue), using two instruments, one in each hand. For the Virtual Operative Assistant to perform the classification, over 250 performance metrics representative of differing levels of surgical expertise were generated, and only the 4 metrics with the highest level of accuracy were chosen after careful consideration and selection (Mirchi et al., 2020, p. 4).
After the machine learning algorithm computed its classification of "skilled" or "novice", it also gave users a breakdown of its assessment on both safety and movement metrics. Rather than assessing each metric on its own, the Virtual Operative Assistant included the relationships between metrics, allowing students to recognize that one strong metric may be compensating for poor performance in another. The three forms of feedback the Virtual Operative Assistant can give users (auditory, text, and video-based) are what make it extremely beneficial in the world of science. This new technology enables scientists to gauge the expertise of an individual, identify cognitive expertise in tasks that are much too complex for human teachers to notice, and mimic real-life training, making it a fitting tool for simulation-based learning.
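One plausible way to reduce 250 candidate metrics to the few with the highest individual accuracy is sketched below, under the assumption that each metric is scored by how well a simple threshold on it alone separates the two groups. The real study's selection procedure was more involved, and the metric names and values here are fabricated:

```python
# Score each performance metric by the best accuracy a single
# threshold on that metric achieves, then keep the top-scoring ones.

def threshold_accuracy(values, labels):
    """Best accuracy achievable by thresholding one metric."""
    best = 0.0
    for t in values:
        for direction in (1, -1):      # try "high = skilled" and reverse
            correct = sum(
                1 for v, y in zip(values, labels)
                if (direction * v >= direction * t) == (y == "skilled"))
            best = max(best, correct / len(values))
    return best

def top_metrics(metrics, labels, k=2):
    """metrics: {name: per-participant values}. Keep the k most accurate."""
    scored = {name: threshold_accuracy(vals, labels)
              for name, vals in metrics.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

labels = ["skilled", "skilled", "novice", "novice"]
metrics = {"instrument speed": [0.9, 0.8, 0.3, 0.2],   # separates cleanly
           "tissue force":     [0.5, 0.1, 0.6, 0.2]}   # mostly noise
print(top_metrics(metrics, labels, k=1))
```

A metric that separates the groups cleanly scores near perfect accuracy on its own and survives the cut; a noisy metric does not.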
Evolving Technologies and Further Use for Medical Education
Simulation-based training systems, such as the Virtual Operative Assistant, are able to develop checklists that evaluate different skills using machine learning algorithms (Sapci & Sapci, 2020). While there are numerous applications for AI in cancer treatment, including screening and detection, survival rate prediction, and surgical simulations that allow doctors to more efficiently develop surgical skills and treat patients, AI platforms do come with many challenges. Of the challenges that AI platforms in medical training pose, feedback and liability issues are two of the most prominent. A study done at Mount Sinai Hospital created a deep learning system using data from 700,000 patients (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019). Their algorithm was highly accurate and was able to diagnose conditions that even experts struggle with, such as schizophrenia. However, AI systems often lack the ability to provide users with a proper explanation of how a certain answer or prediction was reached (Fig. 4a). These algorithms cannot properly understand the cognitive thinking of learners and therefore cannot properly train them in the areas where they actually need training. This brings about the issue of liability, because it becomes very hard for patients to trust a system that cannot explain how it reached a diagnosis; and if an incorrect calculation were to put a patient in danger, it is not clear whether the doctor, the hospital, or the company that developed the AI device is liable (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019).
Because of their inability to comprehend the emotional reasoning of users and provide appropriate feedback, AI-powered teaching platforms enable students to "cheat" the system. Many algorithms in artificial intelligence teaching tools do not actually train surgeons and increase their skill level; rather, they assume that a student is skilled in a certain area because they were able to accomplish one particular task. In the case of the Virtual Operative Assistant, this ability to cheat (Fig. 4b) can be credited to the relatively broad parameters that classify students as either skilled or novice (Mirchi et al., 2020, p. 16). In the experiment completed using this platform, 4 participants who were actually at the novice level were misclassified as skilled. Such errors make it difficult to trust AI-powered teaching tools and to implement them into routine medical practice and surgical training. This is where human expertise comes in and proves essential to the learning process. If AI platforms undergo diligent training alongside human experts who can properly assess the algorithm and recognize the specific markers of a good surgeon, then cheating the system would be much less likely. Furthermore, the disconnect learners feel from their teacher due to a lack of feedback and properly supported explanations, which can actually damage a student's skill level, can be resolved by human interaction (Chan & Zary, 2019). AI could be substantially more useful for tasks such as computerised testing and cancer screening or diagnosis; and if physicians and AI-based machines are able to work in harmony, patient outcomes are likely to improve, as AI has the potential to process and analyze large amounts of data, including medical reports, notes from pharmacists, and genetic reports.
Nonetheless, one thing AI cannot replace is the value of doctor-patient and doctor-student interactions (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019).
Doctor-Patient Feedback and Interpretation
AI in the healthcare field is expected to grow rapidly in the years to come, but with growth come limitations, which is why it is crucial for AI to be implemented in the healthcare system with ethical and legal aspects in consideration. A major limitation is that AI systems do not have feelings and cannot care for or sympathize with patients the way doctors do. The “quadruple aim” of healthcare consists of improving the experience of care, improving the health of populations, reducing per capita costs of healthcare, and improving the work life of healthcare workers (Kelly, Karthikesalingam, Suleyman, Corrado, & King, 2019, pg. 1), but healthcare systems are struggling to meet these goals.
The FDA has already cleared close to 40 devices that run on AI and can be used for medical purposes. One of these is the IDx-DR, a system that can output a screening decision without human interpretation of the image or results. The system then recommends that the physician either rescreen the patient or refer them to a specialist (Gerke, Minssen, Cohen, Bohr, & Memarzadeh, 2020, pg. 5). However, while AI can improve imaging, diagnosis, and surgery, it will be difficult to manage AI when informed consent is considered. It is a common question whether it is the physician's responsibility to inform the patient about AI and the way it works, and whether doctors even have to inform the patient at all that AI is being used. Some argue that it does not matter how an AI system reaches its prediction, only whether the decision is correct. This stance causes problems in certain cases, because many machine learning algorithms are known as “black boxes”: even the inventor does not know how the program reaches its final decision. The datasets being used to train the algorithms also need to be reliable, trustworthy, valid, and effective; the better the training data, the better the accuracy of the AI algorithm. Even after the first model is developed, the program will need further tweaks. One concern is data bias. Many AI algorithms have been shown to carry bias with respect to ethnic origin, gender, age, or disability. These biases could lead to false diagnoses and jeopardize the safety of patients by making treatments ineffective (Gerke, Minssen, Cohen, Bohr, & Memarzadeh, 2020, pg. 10). If an AI algorithm outputs a recommendation for treatment that a human physician would not have picked and that turns out to be wrong, and the physician decides to use it anyway and it harms the patient, then it is likely that the physician is at fault for medical malpractice.
It is also important to consider whether hospitals become liable because they bought and applied the AI systems in their practice. This is why AI should be used to assist with decision making rather than be fully depended on.
For patients suffering from kidney failure, or end-stage renal disease, renal transplants are the best option for survival, yet dialysis is often the more practical choice due to the shortage of organ donors. However, current dialysis software is not equipped to respond to unanticipated events that can occur during dialysis treatment. Miniature artificial kidneys that can provide personalized dialysis treatment and are capable of real-time monitoring are currently being developed (Hueso et al., 2018, pg. 5). Data analytics supplied by fields of artificial intelligence such as machine learning and computational intelligence are expected to play a principal role in making sure these new dialysis technologies are both efficient and safe for patients. Due to the complexity of the technologies involved in these dialysis machines, there are challenges to implementing the devices in healthcare and biomedicine. Data analytics along with AI provide the baseline for medical decision support systems, but applying these AI algorithms in the medical field has its challenges, the biggest being ethical (Hueso et al., 2018, pg. 2). Although such a device will make the jobs of medical professionals easier, it may make interactions with patients uncomfortable and cause patients to lose trust in their doctors. For example, these automated devices cannot explain the reasoning behind their decisions or empathize with patients the way doctors can, making it hard for patients to understand their own course of treatment. AI algorithms are not able to express the relationship between the data they have observed and the outcomes formed as a result.
While it is possible for an algorithm to overcome all of these challenges, human-computer interaction is one of the key aspects of gaining a better understanding of the way algorithms interpret data. Multiple algorithms produce great results but lack the ability to explain why they landed at those particular results. Even if scientists are able to understand the math involved in creating the algorithm, it is virtually impossible to determine why the model made a specific decision. This is problematic, as it has rendered many algorithms untrustworthy, uninterpretable, and unexplainable. There is therefore a tradeoff between performance and explainability: deep learning models have a very high level of performance but are hard to interpret, while linear regression models and decision trees are relatively easy to interpret but have poorer decision-making skills. Interpretability of an AI algorithm is the ability of a human to understand how it made a connection between the features extracted and its predictions. Approaches to solving this interpretability issue can be categorized by whether they need internal information such as parameters to operate (also known as the level of transparency), that is, by the amount of access they require to the internal information of the model. Methods that require access to the internal information are said to work on “white boxes” (Reyes et al., 2020, pg. 2). An example is a CNN, or convolutional neural network, where a radiologist uses a given layer of the network to create a map which can be laid over an image to show which regions are important for predicting whether the patient has a disease. Black-box methods, on the other hand, do not need access to this internal information and instead work directly with the input and output of the model to analyse how changes to the input change the output.
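The black-box approach described above can be sketched with a toy example: a minimal perturbation probe that nudges each input feature and watches the output, with no access to model internals. The linear "risk model" here is a hypothetical stand-in for a real diagnostic network, not any system named in the text.

```python
import numpy as np

def perturbation_sensitivity(model, x, delta=0.1):
    """Black-box probe: nudge each input feature by delta and measure how
    much the model's output changes, using only inputs and outputs."""
    base = model(x)
    sensitivities = np.zeros(len(x))
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] += delta
        sensitivities[i] = abs(model(perturbed) - base) / delta
    return sensitivities

# Hypothetical risk model: depends strongly on feature 0, weakly on feature 2.
model = lambda x: 3.0 * x[0] + 0.2 * x[2]
s = perturbation_sensitivity(model, np.array([1.0, 1.0, 1.0]))
# Feature 0 shows the largest sensitivity; feature 1 shows none at all.
```

For a linear model the probe recovers the coefficients exactly; for a real network it gives only a local picture around the chosen input, which is precisely the limitation of black-box analysis.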
There are multiple visual techniques that give insights into the way AI algorithms behave and how they arrive at certain decisions. Two basic approaches are partial dependence plots and individual conditional expectation plots. Both methods are used to interpret black-box models and show how a model's predictions depend on its features, which helps predict which features will change the prediction when their values change (Reyes et al., 2020, pg. 2).
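A minimal sketch of how a partial dependence curve is computed: for each grid value of the chosen feature, fix that feature for every row of the dataset and average the model's predictions. The two-feature model and toy data are hypothetical stand-ins for a real clinical predictor.

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """For each grid value v, set the chosen feature to v in every row of X
    and average the predictions -- the classic PD curve for a black box."""
    curve = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        curve.append(np.mean([model(row) for row in X_mod]))
    return np.array(curve)

# Hypothetical model and toy data: prediction rises linearly with feature 0.
model = lambda row: 2.0 * row[0] + row[1]
X = np.array([[0.0, 1.0], [0.0, 3.0]])
pd_curve = partial_dependence(model, X, feature=0, grid=[0.0, 1.0, 2.0])
# The curve climbs by 2.0 per unit of feature 0, averaged over both rows.
```

An individual conditional expectation plot is the same computation without the averaging step: one curve per row, which reveals when different patients respond differently to the same feature change.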
The goal of interpretability is not to understand exactly how an AI system works, but to have enough information to understand it to the best extent possible, and full transparency is not always necessary. A wrong diagnosis in radiology can lead to extreme consequences for a patient, yet reading images is prone to interpretation errors. Interpretability is a fast-evolving field that has been at the center of AI research, with great potential for the future development of safe AI technologies (Kelly, Karthikesalingam, Suleyman, Corrado, & King, 2019, pg. 1). But before AI can be implemented in various tasks within radiology, task-specific interpretability solutions are required, and if “black box” algorithms are used in medicine, they need to be used with a great deal of judgement and responsibility. AI developers should be aware of the many consequences that algorithms can unintentionally lead to and make sure they are created with all patients in consideration. Involving doctors and surgeons in this process can increase its efficiency significantly. If the interpretability of algorithms can be improved, then human-algorithm interaction would be smoother, and the future adoption of AI, with consideration of data protection, fairness and transparency of algorithms, and safety, would be supported by a large number of physicians.
The Impact of AI on Jobs
While many physicians may support the implementation of AI, no machine can work at its full potential without the presence of doctors, yet studies have shown that medical students are not spending enough time getting acquainted with newer technologies that involve artificial intelligence. Currently, medical education is centered around 6 major areas: medical knowledge, communication skills, patient care, professionalism, practice-based learning, and systems-based practice (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019). Most of this training focuses on taking in large amounts of information and applying it to patient care, a process based mostly on memorization. In order to improve outcomes in clinical settings, students need to learn how AI functions and how it can augment their work. The many promises of AI include automated image segmentation, detection of cancerous lesions, and comparison of images. While it can be fatiguing for human pathologists to detect small traces of cancer on a slide, AI systems are not affected by fatigue and can scan any number of slides without losing accuracy. AI can also help physicians improve the quality of patient care by taking care of repetitive and tedious tasks and managing large amounts of data, in addition to serving as a second opinion in decision making.
With AI algorithms showing such promise in radiology, pathology, and cardiology, a question that arises is whether AI algorithms will replace human physicians. Recent data suggests that in image and predictive analysis, AI might soon prove more efficient than radiologists. However, it is likely that AI will not replace general physicians but rather augment them. For example, an AI system is able to take over the job of a factory worker who performs a certain task repeatedly, but in the case of medical professionals, AI is unable to engage in the interactions with patients that are crucial to gaining their trust and reassuring them. One study on this topic deals with breast cancer. It suggests that digital mammography is not perfect and has a sensitivity of only around 85%. The remaining 15% of cancers go undetected, a consequence of the limits of what radiologists are able to identify on scans. Furthermore, the question of whether this practice is ethical is an important one. Replacing human workers with AI systems may benefit the economy as a whole, but the effect it has on the individuals whose jobs have been taken away is detrimental.
In most cases, technology is designed to perform a specific task, which changes the demand that workplaces have for certain skills. These changes can influence the skill requirements, social well-being of workers, and career mobility for different occupations. Limitations on data to train AI algorithms will restrict these skill pathways, but scientists can surmount this obstacle by prioritizing data collection that is detailed and responsive to real-time changes in the labor market (Frank et al., 2019). This improved data collection will enable the use of new tools that rely on data, including machine learning systems that more accurately reflect the complexity of labor systems. New data will also lead to new research that strengthens our understanding of the impact of technology on the supply of and demand for labor.
However, AI systems do tend to produce a number of false positives, resulting in extreme measures being taken without certainty that harmful cancer is present in the body. This is where radiologists remain crucial in the medical field, even with the presence of AI. False positives can lead to anxiety, along with unnecessary biopsies and tissue removal (Ahuja, 2019). AI has the potential to assist and augment physicians rather than replace them entirely, by combining data and supporting the decision-making process through recommendations of treatment options. AI can also remove some of the burden of work from physicians by performing tedious tasks. Speech recognition is another useful tool that can replace keyboards, and decision management can help physicians make more informed decisions that take into account both patient outcomes and the cost of treatment (Paranjape, Schinkel, Nannan Panday, Car, & Nanayakkara, 2019).
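The screening metrics discussed in this section (the roughly 85% sensitivity of mammography and the cost of false positives) follow from simple counts. The numbers below are illustrative placeholders, not figures from any study:

```python
# Hypothetical screening results for 1000 scans: counts are illustrative only.
true_positive = 85    # cancers correctly flagged
false_negative = 15   # cancers missed (the ~15% discussed above)
false_positive = 90   # healthy patients incorrectly flagged
true_negative = 810   # healthy patients correctly cleared

# Sensitivity: the share of real cancers the screen catches.
sensitivity = true_positive / (true_positive + false_negative)
# Specificity: the share of healthy patients correctly cleared.
specificity = true_negative / (true_negative + false_positive)
# Positive predictive value: how trustworthy a positive flag actually is.
ppv = true_positive / (true_positive + false_positive)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, ppv={ppv:.2f}")
```

Note how even a screen with 85% sensitivity and 90% specificity can yield a positive predictive value below 50% when the disease is uncommon, which is exactly why false positives drive unnecessary biopsies.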
Because the field of AI is a relatively new form of technology, its implementation in the real world raises a number of questions about the ethics of the technology. Within the AI algorithms themselves, one prominent issue is model bias. The data being used to train such algorithms can be influenced by multiple outside factors, including biases of the humans who collected the data. As a result, an algorithm may be biased toward a specific group when predicting whether an individual should receive a certain treatment. It is important for researchers to consider this aspect of AI and work toward mitigating the effect of such biases. Data not representative of a large population can result in a model that is biased toward subjects highly prevalent in the data set. In addition, for the highest level of fair and accurate model performance, it is imperative for scientists to split data so that platforms can be tested with images separate from the training data.
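The train/test separation the paragraph calls for can be sketched as follows. The image IDs are hypothetical placeholders for a real imaging dataset:

```python
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Shuffle and split so the model is evaluated on images it never saw
    during training -- the separation required for honest performance numbers."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = data[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical patient-image IDs standing in for a real imaging dataset.
images = [f"scan_{i:03d}" for i in range(100)]
train, test = train_test_split(images)
# 80 training images and 20 held-out test images, with no overlap.
```

In practice, splits for medical imaging should also be made at the patient level (all scans from one patient on the same side of the split) so that near-duplicate images cannot leak across the boundary.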
The first issue that directly influences patients is that, although AI may not entirely replace doctors, it will significantly alter relationships between patients and their physicians and nurses. Many companies that distribute electronic health records have overlooked this disadvantage and focused only on the positive aspects, including how AI will be able to simplify interactions with complex data and reduce the time it takes to complete tasks. However, to many patients it is incredibly important for their own comfort and satisfaction to maintain relationships with their doctors. If AI algorithms are set to take over scheduling appointments, making payments, and even running follow-up visits, then this doctor-patient interaction time will be compromised. Furthermore, it is important to take into account the immense amount of data that algorithms require access to for training. While the majority of companies keep their data protected in order to abide by HIPAA, a privacy law that creates national standards to protect personal medical data, some organizations do allow their data to move freely in and out of their company. This sacrifices patient privacy and security in a way that had little effect before AI was implemented in medicine. Finally, the legal responsibilities that come with having a hospital run on AI are vast. For any negative consequence that could have been prevented, at first glance it often seems to be the responsibility of the provider of the AI algorithm. Providers do need to be certain that the algorithms they provide to hospitals use relevant and accurate data and make decisions in the most beneficial way possible, but questions surrounding this topic remain unanswered. One could also argue that negative outcomes are the doctors' fault because they relied too heavily on an algorithm instead of using their own expertise to make a decision.
In the end, it is the responsibility of all contributors (providers of AI, developers of AI, patients, doctors, and everyone else involved in the process) to make sure artificial intelligence develops in the medical field in a safe and ethical manner.
Through its multitude of uses across medicine, and oncology specifically, AI has the potential to transform the way physicians work and the way patients are treated. In cervical, lung, and breast cancer, AI algorithms are able to detect lesions in scans, use segmentation techniques to separate masses within an image, and identify cells based on their structures. Furthermore, they can use these images and extracted features to classify images and predict the possibility of recurrence and even the chance of patient survival. A number of platforms are currently being developed to perform these tasks, and more are being developed to teach medical students how to interact with the technology and practice their skills in a realistic setting before doing so in real life. In the future, AI is set to enable faster and more accurate diagnosis, reduce human errors that result from fatigue, complete repetitive and tedious labor tasks, decrease medical costs, perform minimally invasive surgery, and increase survival rates. Specific examples of prospective applications for AI as the field grows are in analyzing relationships between patient outcomes and the treatments administered, diagnosis, forming protocols for certain treatments, personalized medicine, and patient care. However, despite these fascinating advancements in technology and medicine and the tremendous potential AI has to revolutionize medicine, there are still some things AI cannot accomplish. It lacks the ability to have social interactions with patients the way human doctors can, and it will continue to take jobs from employees around the world as its role expands.
In order to surmount these obstacles, scientists will have to consider how far patients are willing to go in placing trust in their doctors, as well as the economic impact of AI and how it could harm the economy by taking away large numbers of jobs. To improve the success of AI in the fields of cancer detection, diagnosis, and treatment, these factors must be taken into account.
Bi, W., Hosny, A., Schabath, M., Giger, M., Birkbak, N., Mehrtash, A., . . . Aerts, H. (2019, March). Artificial intelligence in cancer imaging: Clinical challenges and applications. Retrieved November 18, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6403009/
Frank, M., Autor, D., Bessen, J., Brynjolfsson, E., Cebrian, M., Deming, D., . . . Rahwan, I. (2019, April 2). Toward understanding the impact of artificial intelligence on labor. Retrieved November 03, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6452673/
Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare (A. Bohr & K. Memarzadeh, Eds.). Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7332220/
Hosny, A., Parmar, C., Coroller, T., Grossmann, P., Zeleznik, R., Kumar, A., . . . Aerts, H. (2018, November 30). Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6269088/
Hueso, M., Vellido, A., Montero, N., Barbieri, C., Ramos, R., Angoso, M., . . . Jonsson, A. (2018, February). Artificial Intelligence for the Artificial Kidney: Pointers to the Future of a Personalized Hemodialysis Therapy. Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5848485/
Kusy, M., Obrzut, B., & Kluska, J. (2013, December). Application of gene expression programming and neural networks to predict adverse events of radical hysterectomy in cervical cancer patients. Retrieved October 02, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3825140/
Mirchi, N., Bissonnette, V., Yilmaz, R., Ledwos, N., Winkler-Schwartz, A., & Del Maestro, R. (2020, February 27). The Virtual Operative Assistant: An explainable artificial intelligence tool for simulation-based training in surgery and medicine. Retrieved August 10, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7046231/
Obrzut, B., Kusy, M., Semczuk, A., Obrzut, M., & Kluska, J. (2017, December 12). Prediction of 5-year overall survival in cervical cancer patients treated with radical hysterectomy using computational intelligence methods. Retrieved October 02, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5727988/
Oliver, C., Altemus, M., Westerhof, T., Cheriyan, H., Cheng, X., Dziubinski, M., . . . Merajver, S. (2019, March 27). A platform for artificial intelligence based identification of the extravasation potential of cancer cells into the brain metastatic niche. Retrieved August 24, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6510031/
Reyes, M., Meier, R., Pereira, S., Silva, C., Dahlweid, F., Von Tengg-Kobligk, H., . . . Wiest, R. (2020, May 27). On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities. Retrieved October 27, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7259808/
Seijo, L., Peled, N., Ajona, D., Boeri, M., Field, J., Sozzi, G., . . . Montuenga, L. (2019, March). Biomarkers in Lung Cancer Screening: Achievements, Promises, and Challenges. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6494979/
Shankar, A., Saini, D., Dubey, A., Roy, S., Bharati, S., Singh, N., . . . Rath, G. (2019, May). Feasibility of lung cancer screening in developing countries: Challenges, opportunities and way forward. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6546626/
Wang, S., Yang, D., Rong, R., Zhan, X., Fujimoto, J., Liu, H., . . . Xiao, G. (2019, October 28). Artificial Intelligence in Lung Cancer Pathology Image Analysis. Retrieved September 15, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6895901/
Zadeh Shirazi, A., Fornaciari, E., Bagherian, N., Ebert, L., Koszyca, B., & Gomez, G. (2020, May). DeepSurvNet: Deep survival convolutional network for brain cancer survival rate classification based on histopathological images. Retrieved August 24, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7188709/
Selman Waksman first used the word antibiotic as a noun in 1941 to describe any small molecule made by a microbe that antagonizes the growth of other microbes. Nearly 80 years later, as of 2019, 123 countries reported the existence of extensively multi-antibiotic-resistant tuberculosis. A year prior, Isabelle Carnell-Holdaway, a cystic fibrosis sufferer, was put in the ICU after an aggressive infection of Mycobacterium abscessus, a relative of tuberculosis, spread to her liver, putting it at risk of failure. With no new classes of antibiotics discovered and made available for routine treatment since the 1980s, she was left with a 1% chance of survival. Yet in under two years, Isabelle went from receiving end-of-life care to preparing to sit her A-levels and learning to drive. It had taken an experimental bacteriophage therapy, rather than antibiotics, to save the life of a girl with a seemingly untreatable bacterial infection. This article will explore the factors responsible for hindering the discovery of an antibiotic that could have treated Isabelle: antimicrobial resistance, the misuse of antibiotics, and the brain drain in research and development caused by insufficient financial incentive for pharmaceutical companies.
How Do Antibiotics Work?
The first antibiotic was discovered by Alexander Fleming in 1928. Nearly 100 years later, we now have over 100 different antibiotics available, which fit into one of two categories: bacteriostatic and bactericidal. The former slow the growth of bacteria by interfering with the processes the bacteria need to multiply, such as nucleic acid synthesis, enzyme activity, and protein synthesis. The latter, penicillin for example, directly kill the bacteria, for instance by interfering with the formation of cell walls.
The main problem that made Isabelle's treatment so difficult was resistance. Bacteria are termed drug-resistant when they are no longer inhibited by an antibiotic to which they were previously sensitive. At the moment, an estimated 700,000 people die each year from drug-resistant infections, a figure projected to rise to 10 million by 2050. This resistance can present itself in one of four ways. First, the bacterium can reduce intracellular accumulation of the antibiotic by decreasing permeability and/or increasing the efficiency of efflux pumps that expel the antibiotic. For example, some resistance determinants improve the efflux pumps located in the surface of bacterial cells, improving their ability to remove tetracycline. Second, resistance can occur by altering the target site of an antibiotic, which reduces its binding capacity and thus its uptake. An example involves the OprD proteins. These are porins, meaning they mediate the uptake of molecules, and alterations to them preferentially block the uptake of drugs like imipenem. Third, bacteria can acquire the ability to inactivate or modify the antibiotic. Penicillin's efficacy can be undermined by the release of beta-lactamase, an enzyme produced by the target bacterium which essentially renders penicillin's action on cell wall synthesis useless. Finally, bacteria can modify metabolic pathways to circumvent the antibiotic's effect. Quinolones target bacterial gyrase genes associated with the supercoiling of DNA. Under normal circumstances, when the gyrases are inhibited, the DNA is unable to reorganise itself during cell division. A mutation in a gyrase gene allows cell division to go on as normal but diminishes the effect of quinolones. Thus, one reason the discovery of new antibiotics is so difficult is that bacteria are equipped with several different mechanisms that undermine the fundamental ways antibiotics work.
How Is Resistance Acquired?
Resistance arises through mutation or the sharing of DNA using mobile genetic elements. The latter can occur in one of three ways. One way is through viral mobile genetic elements during transduction. This happens when bacterial DNA is accidentally packaged in a bacteriophage capsid after infection. If this capsid binds to a recipient cell and injects the foreign DNA, leading to the successful recombination of the donor DNA into the genome of the recipient, the bacteriophage can transfer resistance genes. Another route is through plasmids during conjugation. Plasmids are extrachromosomal loops of DNA that replicate independently of the bacterium's genophore; they can be transferred when physical contact is made between two cells, followed by the formation of a pilus bridge that enables the transfer of a plasmid (which may also carry a gene for antibiotic resistance). Finally, resistance genes can be transferred horizontally during transformation. Several antibiotic-resistant pathogens are capable of this process, including Escherichia and Klebsiella, which are leading causes of antibiotic-resistant infections acquired within hospitals. Transformation happens when genes released from nearby microbes are taken in directly by another. This means a single bacterium can lead other bacteria, previously sensitive to antibiotics, to inherit these mutations without being its direct offspring, potentially ensuring that the whole microbial community is protected from the antibiotic and rendering it useless.
As mentioned, the reproduction of mutant resistant bacteria is also paramount in understanding the difficulty of new antibiotic discovery. Resistance is an adaptation that occurs as a result of directional selection. When antibiotics are introduced into a community of bacteria, a selection pressure is created. Because of extensive initial genetic variation, some bacteria will already carry alleles conferring resistance, which allow them to survive, reproduce, and pass those alleles on to their offspring. Those without the allele for resistance die off. Thus resistance becomes a selective advantage, and the allele frequency increases within the population. In ideal conditions, some bacterial cells can divide by binary fission every 20 minutes. Therefore, after only 8 hours, in excess of 16 million bacterial cells carrying resistance to a given antibiotic can be produced: in the wrong hands, a new antibiotic could be rendered useless overnight. By contrast, millions of years of evolution passed before primates emerged with an enzyme that could efficiently digest alcohol, and even with this useful mutation, alcohol poisoning is still a problem, with alcohol-specific deaths in the UK reaching 11.8 per 100,000 people in 2019. Thus, another reason for the difficulty of antibiotic discovery is the basic biology of bacteria, which allows them to adapt to selection pressures and evolve at an exponential rate.
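The arithmetic behind the 16-million figure is straightforward: a 20-minute division time over 8 hours gives 24 doublings of a single cell.

```python
# One cell dividing every 20 minutes for 8 hours: (8 * 60) / 20 = 24 doublings.
divisions = (8 * 60) // 20
population = 2 ** divisions
print(divisions, population)  # 24 doublings -> 16,777,216 cells
```

Since 2**24 = 16,777,216, a single resistant cell can found a population in excess of 16 million in one working day, matching the figure in the text.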
The contexts in which antibiotics act as a selection pressure are not limited to treating infections in patients; they extend to the agricultural industry, which is becoming a growing hindrance to the efficacy of existing antibiotics and is responsible for the rise of superbugs such as MRSA. According to research published by Public Health England, more than 20% of antibiotics prescribed in primary care in England are inappropriate (i.e., used in cases where they are unnecessary, such as treating viral infections). This statistic demonstrates the need for antimicrobial stewardship in a society that treats this marvel of biology as a limitless commodity. Furthermore, there is a strong link between increasing rates of antibiotic prescription and the emergence of resistant bacteria, meaning there is a growing need for narrower-spectrum drugs to prevent a complete antimicrobial apocalypse.
Linked to this, our dependence on extremely potent narrow-spectrum antibiotics is being threatened by the agricultural industry. According to statistics from the UN's Food and Agriculture Organisation, at any one moment around 20 billion animals are being kept as livestock. To keep maintenance costs low, they are often kept in unhygienic and extremely cramped spaces: the optimum breeding ground for disease. Antibiotics tend to be used as a catch-all, both to treat illness in some animals and as a prophylaxis in others. This system has led to increasingly more bacteria that are resistant to antibiotics. Though there are strict rules stipulating when strong antibiotics may be used against already resistant bacteria, these rules are not enough to keep up with the growing disparity between resistant bacteria and the development of antibiotics against them. In late 2015, China reported the existence of bacteria displaying resistance against colistin. This was a surprise, as the drug had rarely been used up to that point (kidney damage is a common side effect), existing only as a last-resort option for complex infections occurring in hospitals. The resistance to colistin came about because millions of animals in Chinese pig farms were given colistin over the course of many years. As described above, this acted as a selection pressure, eventually increasing the number of pigs carrying colistin-resistant bacteria, which then crossed over to humans through the food chain. Therein lies a huge threat to the discovery of new antibiotics: finding a balance between mitigating side effects to allow safe use in humans and finding drugs strong enough to deal with strains already resistant to those that are almost too unsafe for human use.
One reason for the decline in antibiotic discovery is a lack of financial incentive for pharmaceutical companies. To refer back to Isabelle’s case, phage therapies are estimated to be approximately 50% cheaper than antibiotics. Furthermore, as Gerry Wright notes in a TED Talk, antibiotics have become so unprofitable that only four major pharmaceutical companies still run active antibiotic research programmes. Profit margins for antibiotic discovery are low in a pay-per-pill system, since a good new antibiotic will be used sparingly and in rotation with other drugs to stave off resistance in the long term. As a result, pharmaceutical production is dominated by treatments for conditions such as cancer or musculoskeletal disease, which are taken repeatedly over the long term.
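This incentive gap can be illustrated with some back-of-the-envelope arithmetic. Every figure below is hypothetical, chosen only to show the shape of the problem: a drug taken once per patient earns a fraction of what the same price earns from a drug refilled month after month.

```python
# Back-of-the-envelope illustration of the pay-per-pill incentive gap.
# Every figure here is hypothetical, chosen only to show the asymmetry.

price_per_course = 100            # hypothetical revenue per prescription

# A last-resort antibiotic: one short course per patient, used sparingly.
antibiotic_courses = 10_000
antibiotic_revenue = antibiotic_courses * price_per_course

# A chronic-condition drug: the same number of patients, but each one
# refills a prescription every month for five years.
chronic_patients = 10_000
refills = 12 * 5
chronic_revenue = chronic_patients * price_per_course * refills

print(chronic_revenue // antibiotic_revenue)  # → 60
```

Under these toy assumptions the chronic-use drug earns sixty times the revenue at the same price point, which is why subscription-style payment schemes try to decouple income from volume sold.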
FDA drug approvals by classification, 2020. Courtesy of Nature Reviews, Asher Mullard.
In an attempt to shift profits away from the volume of medication sold, UK Health Secretary Matt Hancock announced the adoption of a ‘Netflix subscription model’ in June 2020. The scheme attempts to tackle this growing global health threat by de-linking payments to pharmaceutical companies from sales, offering guaranteed income for innovative treatments. Similarly, Germany has implemented a process in which higher prices are awarded for particularly important antibiotics. However, even if incentives such as these help to create new antibiotics, another pivotal question remains: how to ensure that existing and new medicines reach patients in low- and middle-income countries. With almost 2 billion people lacking access to antimicrobial treatments (LEDCs being disproportionately represented), failure to improve access to antibiotics will limit efforts to tackle resistance everywhere.
In summary, the rate at which pathogenic bacteria develop resistance greatly surpasses the rate of antibiotic development. As stated previously, the leading factor behind this issue is the versatile set of methods bacteria use to develop and spread resistance, a problem exacerbated by overprescription and by misuse in the agricultural industry. Furthermore, the current economic model of the pharmaceutical industry does not provide enough financial incentive for companies to invest in innovations aimed at tackling this problem, leading some governments to turn to an alternative in which they “pay more to use less”.
Clardy, Jon, et al. “The Natural History of Antibiotics.” Current Biology, vol. 19, no. 11, 2009, doi:10.1016/j.cub.2009.04.001.
Dedrick, Rebekah M., et al. “Engineered Bacteriophages for Treatment of a Patient with a Disseminated Drug-Resistant Mycobacterium Abscessus.” Nature Medicine, vol. 25, no. 5, 2019, pp. 730–733., doi:10.1038/s41591-019-0437-z.
Lerminiaux, Nicole A., and Andrew D. S. Cameron. “Horizontal Transfer of Antibiotic Resistance Genes in Clinical Environments.” Canadian Journal of Microbiology, vol. 65, no. 1, 2019, pp. 34–44., doi:10.1139/cjm-2018-0275.
Myszka, Kamila, and Katarzyna Czaczyk. “Mechanisms Determining Bacterial Biofilm Resistance to Antimicrobial Factors.” Antimicrobial Agents, 2012, doi:10.5772/33048.
Plackett, Benjamin. “Why Big Pharma Has Abandoned Antibiotics.” Nature, vol. 586, no. 7830, 2020, doi:10.1038/d41586-020-02884-3.
Reygaert, Wanda C. “An Overview of the Antimicrobial Resistance Mechanisms of Bacteria.” AIMS Microbiology, vol. 4, no. 3, 2018, pp. 482–501., doi:10.3934/microbiol.2018.3.482.
Society, Microbiology. “Antibiotics: Microbes and the Human Body.” Microbes and the Human Body | Microbiology Society, microbiologysociety.org/why-microbiology-matters/what-is-microbiology/microbes-and-the-human-body/antibiotics.html.
Verbeken, Gilbert, et al. “Taking Bacteriophage Therapy Seriously: A Moral Argument.” BioMed Research International, vol. 2014, 2014, pp. 1–8., doi:10.1155/2014/621316.
HM Government.
Blood is a transport fluid pumped by the heart to all parts of the body and back again in a continuous circuit. It is also a tissue: a collection of specialized cells that serve particular functions. It is composed of red blood cells, which carry oxygen to the tissues and carbon dioxide back to the lungs to be exhaled; white blood cells, which fight infections; platelets, which heal injuries by clotting the blood; and plasma, in which the cells and platelets circulate around the body.
Artificial blood is a blood substitute designed to mimic and fulfil some vital functions of biological blood, such as the transport of oxygen and carbon dioxide. It is useful in life-threatening situations involving serious blood loss, though it cannot carry out secondary functions such as fighting infections. Nowadays, with the growing need for blood and the reduced availability of donors, the creation of artificial blood is a need for millions.
Artificial blood must reproduce several key properties of human blood. First and most importantly, it should ensure compatibility: unlike donated blood, it should not come in different blood types but should be suitable for all recipients. For the patient’s safety it must also be pathogen-free, with disease-causing agents such as bacteria and viruses processed out and removed. Second is the transport of oxygen and carbon dioxide, a feature that recent research is considered to have achieved. Third, it must be shelf-stable, as human blood can only be stored for a relatively short period. According to the Red Cross, red blood cells are stored in a refrigerator at 6°C for up to 42 days, while platelets can be kept at room temperature in agitators for only 5 days. Unlike donated blood, artificial blood should be storable for at least a year or more.
Perfluorocarbons are a group of human-made chemicals composed of carbon and fluorine. They are candidates for artificial blood because they can dissolve roughly 50 times more oxygen than blood plasma, they are inexpensive, and they require no biological materials. However, ongoing research must still overcome certain hurdles before they become useful as artificial blood. First, perfluorocarbons do not dissolve in water, so they must be combined with emulsifiers that can suspend their tiny particles in the blood. Second, because they carry much less oxygen than hemoglobin-based products, a relatively large quantity is needed. Improved emulsions are therefore being developed, and that work has already begun.
Hemoglobin-Based Products
Hemoglobin carries oxygen from the lungs to the other body tissues. Artificial blood made on this principle has the advantage of exploiting hemoglobin’s natural function: unlike in perfluorocarbons, where oxygen merely dissolves, the oxygen binds chemically to hemoglobin. A further advantage is that blood typing is eliminated, because the hemoglobin is not contained in a cell membrane as it is in whole blood. Alongside these pros, hemoglobin-based products also have some cons. First, stability is an issue.
Second, raw hemoglobin cannot be used directly, as it would break down into small toxic products inside the body. Hemoglobin-based artificial blood must therefore be made in a way that resolves these issues. Stability can be achieved by chemical cross-linking, or by using recombinant DNA technology to produce modified proteins.
Cross-linking introduces specific chemical bonds that prevent the hemoglobin from dissociating into dimers or monomers, giving it the required solubility and stability. These modified hemoglobins are expected to carry oxygen even better than our red blood cells. Research is ongoing, and availability is expected within a few years.
Production Process
Perfluorocarbon-based artificial blood is made through polymerization reactions. Hemoglobin-based products, which are generally preferred, can be made either by isolating natural hemoglobin or by synthesizing it from amino acids. Synthetic production uses specific strains of bacteria together with the materials needed to incubate them, such as warm water, glucose, alcohol, urea, acetic acid, and liquid ammonia. The process then involves molecular modification, followed by reconstitution into an artificial blood formula. A strain of E. coli bacteria is used, and over three days the protein is harvested and the bacteria are destroyed.
The process begins with fermentation: a bacterial culture is transferred to a test tube containing all the nutrients required for growth. The bacteria multiply and are transferred first to a seed tank and then to a fermentation tank, where hemoglobin is produced. The tank is then emptied and isolation begins: the hemoglobin is separated out with a centrifugal separator and purified by fractional distillation. Finally, it is sent for processing, where it is mixed with water and electrolytes to produce usable artificial blood, which is then pasteurized and packaged.
Each type of artificial blood comes with its own limitations. Researchers are therefore also pursuing the idea of using hematopoietic stem cells to produce artificial blood, which is said to have a similar morphology and a similar amount of hemoglobin to natural red blood cells. The Food and Drug Administration is examining the safety and cost-effectiveness of this blood, which may also serve as a universal donor product compatible with all common blood types. Alongside all this, many companies are currently working to produce safe and effective artificial blood by overcoming these various limitations. In the future, it is anticipated that new materials to carry oxygen in the body will be found, and that longer-lasting products will be developed, along with products that effectively perform the blood’s other functions. Hopefully, the investigations under way will prove beneficial in the years ahead.
In recent decades, scientists have made huge progress in discovering how our identities and memories are made and stored. One patient who transformed our understanding of how memory functions are organised in the human brain is referred to as ‘the man who couldn’t make memories’: Henry Molaison possessed one of the most famous brains in the world and bestowed unique insights into the inner workings of the human brain.
Henry Gustav Molaison, also known in medical literature as patient H.M. to protect his identity, was born on February 26, 1926 in Manchester, Connecticut.
As a child, Molaison had a relatively normal childhood. But not long after a minor head injury, and against a family history of seizures (though the exact aetiology of his seizures remains uncertain), he began suffering from severe epilepsy. At the age of 10 he started having absence seizures, and six years later he developed generalised tonic-clonic seizures. The seizures greatly impacted his daily life and led him to drop out of high school; later he was also unable to hold a job or function independently. Molaison’s case was so severe that it could not be controlled pharmacologically, even with high doses of anticonvulsant medication.
After nearly 10 years he turned to Dr William Scoville, a renowned daredevil neurosurgeon of his time, in the hope of leading a normal life once again. At the age of 27, Molaison underwent an experimental ‘bilateral medial temporal lobectomy’, in which the medial temporal lobe was surgically removed on both sides of his brain in an attempt to alleviate the impact his seizures had on his quality of life. According to Scoville’s own illustrations of his surgical technique, this included the hippocampal complex, the parahippocampal gyrus, the uncus, the anterior temporal cortex, and the amygdala. However, MRI scans in around 1992–1993 revealed that the surgery was less extensive than Scoville had thought, though still enough to cause the damage it did.
Although Dr Scoville hoped the procedure would cure the epilepsy, he was not completely sure whether it would succeed or whether there might be lasting side effects. Both concerns proved well founded: Molaison’s seizures stopped, but he was also left with profound long-term memory loss, leaving him permanently living in the present tense. Scoville later admitted that the operation was a tragic mistake and spoke strenuously about the dangerous implications of bilateral mesial temporal lobe surgery.
Different types of Amnesia
There are multiple types of amnesia, including retrograde, anterograde, transient global and infantile amnesia. Retrograde amnesia, commonly depicted in films and other media, is an inability to recall events that occurred before the amnesia developed. Anterograde amnesia, by contrast, refers to a decreased ability to retain new information and is typically caused by brain trauma.
Molaison developed a peculiar form of amnesia, suffering from both partial retrograde amnesia and anterograde amnesia. The latter meant he lost the ability to form new memories: he could not remember what he had eaten for lunch that day, tests he had completed only minutes before, or names he had just been introduced to. Scoville wrote: “After operation this young man could no longer recognise the hospital staff nor find his way to the bathroom, and he seemed to recall nothing of the day to day events of his hospital life. There was also a partial retrograde amnesia.” This partial retrograde amnesia meant that while he could recall memories from his childhood, he was unable to remember almost 11 years of events prior to the operation.
However, his personality, intellectual abilities and perception all remained unaffected, and his IQ actually rose from 104 to 117. Molaison also retained the ability to form non-declarative memories, allowing him to acquire and develop motor skills; this led to Brenda Milner’s discovery of the distinction between procedural and declarative memory. And while his mind became like a sieve, further testing by Milner showed that he still possessed short-term memory, leading to the notion that it too resided in a brain structure separate from the one he lacked.
Short Term and Long Term Memory
Molaison’s misfortune became a milestone in our understanding of the brain, because until his case memory was not thought to be localised to one area of the brain. Dr Scoville and Brenda Milner were the first to report his case, in their 1957 paper “Loss of Recent Memory after Bilateral Hippocampal Lesions”. Since he had difficulty remembering having done the tests that day, Molaison never grew tired of the numerous experiments he took part in.
It is thanks to Molaison, that today we know that intricate functions are directly connected to distinct regions of the brain. The hippocampus, which is embedded deep into the temporal lobe, plays an important role in forming, retaining, and recalling declarative memories and spatial relationships. It’s also where short-term memories are turned into long-term memories.
Over the following five decades, Molaison’s case, under the name Patient H.M., grew famous through that publication, which has been cited numerous times. Researchers concluded that short-term memory does not depend on the medial temporal lobe structures. Of the roughly 100 researchers who studied him, Suzanne Corkin spent the most time with Molaison, interviewing and working with him for 46 years. In her book “Permanent Present Tense: The Man with No Memory, and What He Taught the World”, Dr Corkin describes how Molaison’s mind was used to understand how our minds and memory work; the book also covers his early life and key childhood memories, drawn from their personal conversations and from careful reporting and research. She wrote about how she went from viewing him as a “subject” to seeing him as a human being. Molaison’s life was not easy, and he often struggled. After a while he came to understand that while others could store and retrieve memories, he could not. Nevertheless, he remained positive, coping well with his difficult situation, and his extreme resilience makes him a true inspiration. H.M. once poignantly remarked: “Every day is alone in itself. Whatever enjoyment I’ve had, and whatever sorrow I’ve had.”
Sadly, Henry Molaison passed away from respiratory failure in 2008, at the age of 82. Yet his brain continues to excite and invite further investigation into memory, as there is still much to uncover. Mr Molaison was much, much more than a research specimen: he was a person who, despite facing grave misfortune, still managed to show ‘the world you could be saddled with a tremendous handicap and still make an enormous contribution to life.’ Columbia Pictures and Scott Rudin have even acquired the rights to develop a film based on his life.
In what Dr Corkin described as “a beautiful finale to his enduring contributions”, his frozen brain was cut postmortem into 2,401 slices, which have been photographed and digitised into a high-resolution 3D model for further anatomical analysis, in which we can even view individual neurons!
Molaison once commented: “It’s a funny thing – you just live and learn. I’m living and you’re learning.” Henry Molaison leaves behind a legacy (quite literally, through the preservation of his brain!) which shall be remembered by us all and stay within our own memories. His forgetfulness has revolutionized our understanding of the brain, and we can still learn much from him to this day.
To end, as Dr Corkin said “Henry’s disability, a tremendous cost to him and his family, became science’s gain”.
Loring, D. W., and B. Hermann. “Remembering H.M.: Review of ‘PATIENT H.M.: A Story of Memory, Madness, and Family Secrets.’” Archives of Clinical Neuropsychology, 2017, doi:10.1093/arclin/acx041.
Traumatic brain injury (TBI) is often caused by a blow to the head and currently affects around five million people across the US. It is associated with several neuropsychiatric conditions, such as psychosis and mania, as well as Alzheimer’s disease, and can also lead to nerve cell deterioration. At the Harrington Discovery Institute in Cleveland, Ohio, Dr. Pieper and his team have discovered a way to prevent TBI-induced nerve cell deterioration in the brain. They also found a possible explanation for the relationship between TBI and Alzheimer’s disease.
Osmosis, “Traumatic Brain Injury (TBI)”
To explore the connection between Alzheimer’s and TBI, Dr. Pieper drew on previous knowledge of tau and acetylation in patients. Tau is a protein in nerve cells that helps guide nutrients through the neuron. In patients with Alzheimer’s disease, however, tau becomes acetylated and tangles with other tau molecules, resulting in weak synaptic communication between neurons. Experimenting with mice, Dr. Pieper found high levels of acetylated tau (ac-tau) in different forms of TBI, and the elevated ac-tau persisted chronically if left untreated. Furthermore, patients with Alzheimer’s disease had even higher levels of ac-tau if they had a history of TBI.
Labiotech, “Healthy Neuron vs Alzheimer’s Disease Neuron”
Dr. Pieper’s team found that two anti-inflammatory drugs, salsalate and diflunisal, helped protect the mice’s neurons from deteriorating after TBI. These medications inhibit the enzyme responsible for tau acetylation, thereby preventing the transformation into ac-tau. Following this discovery, the researchers analyzed over seven million patient records on the usage of salsalate and diflunisal and found that these medications were associated with a decrease in Alzheimer’s disease and TBI cases. They also compared the two drugs with aspirin, a common anti-inflammatory that does not prevent acetylation, and found no evidence of aspirin showing the same neuroprotective activity as salsalate and diflunisal.
Knowing that tau diffuses from the brain into the bloodstream, the researchers asked whether ac-tau would also appear in the blood of TBI patients. In both mice and humans, there was a significant increase of ac-tau in the blood; these elevated levels returned to normal when treated with salsalate or diflunisal, showing again that the drugs effectively protect nerve cells from deterioration.
Dr. Rosa, the co-author of this study, explains that this newfound knowledge could have a variety of uses in the clinical setting. The research team is continuing to examine ac-tau and its relationship with neurodegenerative diseases, and will also study whether salsalate and diflunisal can be established as neuroprotective medications for humans.
Kyle Phong, Youth Medical Journal 2021
News Medical Life Sciences, “Researchers discover a new way to prevent brain nerve cells from deteriorating after injury”, 13 April 2021