Clinical trial suggests new direction for heavy-smoking head and neck cancer patients

Patients with a greater than 10 pack-year history of smoking tend to develop an especially dangerous form of head and neck squamous cell cancer (HNSCC), for which prognosis remains poor and treatments have changed little over the past two decades. However, recent phase 1 clinical trial results from the Head and Neck Cancer Group at the University of Colorado Cancer Center suggest a possible new direction for these patients. The first-in-human trial of the oral PARP inhibitor olaparib, given with the anti-EGFR drug cetuximab and radiation, led to 72 percent 2-year survival among the 16 patients on the trial, compared with an expected 2-year survival rate of about 55 percent for standard-of-care treatment.

“Colorado promotes innovation, and this trial was certainly innovative when it was designed by our group,” says David Raben, MD, CU Cancer Center investigator and professor in the CU School of Medicine Department of Radiation Oncology. “Much credit goes to Antonio Jimeno, MD, Ph.D. who was very supportive of this idea and helped move this forward along with Dr. Sana Karam and Dr. Daniel Bowles.”

The drug cetuximab targets epidermal growth factor receptor (EGFR) signaling. Although it earned FDA approval in 2006 for use against head and neck cancers that over-express EGFR, Raben says there is significant room for improvement.

“That’s where olaparib and radiation come in,” he says. “Ten years ago, I was on a sabbatical from CU, working for AstraZeneca in England. And I remember taking the train from Manchester to Cambridge to learn about this new drug from a small biotech company called Kudos Pharmaceuticals. It was a PARP-inhibitor, meant to keep cells from repairing damaged DNA. That’s the drug we now call olaparib.”

Early in development, the drug had shown remarkable activity in women with BRCA mutations, “but we wanted to know if it worked in other diseases where BRCA wasn’t the story,” Raben says.

Olaparib inhibits the action of an enzyme known as PARP, which is important for DNA repair. HNSCC in heavy smokers already tends to carry a heavy load of DNA damage, and radiation creates additional DNA damage. When olaparib blocks these cancers’ ability to repair DNA, it can push cancer cells past the tipping point of damage and into cell death. In this way, PARP inhibition and radiation may be synthetically lethal, meaning that together they exploit existing defects in DNA repair to drive enhanced cell death.

In fact, lab work by Raben and CU Cancer Center colleagues including Xiao-Jing Wang, MD, Ph.D., Barb Frederick, Ph.D., and Ariel Hernandez, among others, shows that PARP inhibitors like olaparib may also amplify the effects of anti-EGFR drugs like cetuximab.

“The traditional approach against this kind of cancer uses cisplatin chemotherapy along with radiation. I had seen data suggesting that the combination of cisplatin and olaparib might be too toxic on patients’ blood counts. So our team explored this alternative approach that we hoped would offer a more targeted treatment in this poor prognosis group,” Raben says.

In addition to promising survival results, the trial reinforces earlier work showing that cancer patients who continue to smoke while receiving treatment tend to fare worse than those who quit.

“We didn’t cherry-pick our patients for this trial. All were heavy smokers, many were heavy drinkers, many had advanced T-stages, and some continued to smoke during treatment. People who continued smoking were the ones who did the worst,” Raben says.

However, the trial’s survival benefit came with additional side effects, some of which appeared relatively late in the course of the trial (demonstrating the importance of long-term follow-up for patients in radiation Phase I studies).

“We did see an increase in skin toxicity, which wasn’t unexpected, and we learned that when you combine olaparib with radiation, you need perhaps one tenth the dose that you would when using olaparib alone,” Raben says. The most common side effects included dermatitis (39 percent) and mucositis (69 percent). Several patients experienced increased long-term fibrosis and one showed carotid stenosis, though Raben points out that some side effects could also be due to the continued influence of smoking.

“The question now is whether we should move this combination into a randomized phase II trial or use what we’ve learned to design new combinations,” Raben says. For example, “There is tremendous enthusiasm in the oncology community to combine DNA damage repair inhibitors like olaparib with immune enabling drugs, and this may reduce overall toxicity further when combined with or used after radiation,” he says.

Or, Raben suggests that targeted therapies and immunotherapies could be used earlier in the course of treatment, pointing to a forthcoming clinical trial by collaborator Sana Karam, MD, Ph.D., that will test the ability of radiation and immunotherapy to shrink head and neck cancer tumors before surgery.


Give your child a head start with math

(HealthDay)—Many kids struggle with math—and for a number of reasons.

Knowing when to be concerned will allow you to get your child study help early on, which is important because research shows that young children who have difficulty with math typically will continue to struggle as they get older.

Signs of math difficulties can start as early as 2 years of age. Unlike the terrible 2s, however, this is not something they grow out of. For preschool children, risk factors for math struggles include low cognitive functioning, vocabulary difficulties, and being from a low socioeconomic household.

For elementary- and middle-school children, risk factors for math struggles include reading, math and attention-related behavioral difficulties, as well as being from a lower socioeconomic household.

Attending preschool or Head Start can lower the risk of math struggles. Screening and intervention efforts as soon as a child starts school also help. Kids should be assessed for math, reading and even behavior problems.

Help your kids get comfortable with math as early as possible. Play informal counting games, like counting the forks and spoons in your kitchen, or the cans and boxes as you unpack groceries. Make a game of looking for a certain number as you read the pages of a book. These activities help lay the groundwork for future classroom success.

And make sure you feel comfortable with math yourself. Kids of math-anxious parents often learn less math at school and are more likely to become math-anxious themselves when their mom or dad repeatedly tries to help with homework.

Resist verbalizing any dislike you have of math in front of your child. Instead use tools like computer and board games and even apps to help you interact in a more positive way.


Large collection of brain cancer data now easily, freely accessible to global researchers

A valuable cache of brain cancer biomedical data has been made freely available to researchers worldwide, say researchers at Georgetown Lombardi Comprehensive Cancer Center. The dataset, REMBRANDT (REpository for Molecular BRAin Neoplasia DaTa), hosted and supported by Georgetown, is one of only two such large collections in the country.

Information about the brain cancer data collection, which covers 671 adult patients from 14 contributing institutions, is detailed in Scientific Data, an open-access Nature journal. Already, thousands of researchers in the U.S. and internationally log on to the data site daily, and word about the resource is expected to increase its use, says Subha Madhavan, Ph.D., chief data scientist at Georgetown University Medical Center and director of the Innovation Center for Biomedical Informatics (ICBI) at Georgetown Lombardi.

The Georgetown data resource is unique in several ways. One is that it contains genomic information, collected from volunteer patients who allowed their tumors to be sampled, as well as diagnostic (including brain scans), treatment and outcomes data; most collections contain one or the other.

Additionally, the data collection interface is extraordinarily easy to use, Madhavan says.

“It sits on Amazon Web Services and has a simple web interface providing access to data and analysis tools. All a researcher needs is a computer and an internet connection to log onto this interface to select, filter, analyze and visualize the brain tumor datasets.”

“We want this data to be widely used by the broadest audience—the entire biomedical research community—so that imagination and discovery are maximized,” says Yuriy Gusev, Ph.D., first author on the paper, associate professor and a faculty member of the ICBI. “Our common goal is to tease apart the clues hidden within this biomedical and clinical information in order to find ways that advance diagnostic and clinical outcomes for these patients.”

“We are just beginning to understand the science of how these cancers evolve and how best to treat them, and datasets like this will likely be very helpful,” Madhavan says.

The REMBRANDT dataset was originally created at the National Cancer Institute and funded by the Glioma Molecular Diagnostic Initiative, led by co-authors Howard Fine, MD, of New York Presbyterian Hospital, and Jean-Claude Zenklusen, Ph.D., of the National Cancer Institute. They collected the data from 2004 to 2006.

The NCI transferred the data to Georgetown in 2015, and it is now physically located on the Georgetown Database of Cancer (G-DOC), a cancer data integration and sharing platform, where it is hosted alongside other cancer studies. G-DOC investigators, led by Madhavan, developed novel analytical tools to process the information anew.

The genomic data includes the specific genes within individual tumors that are either over-expressed or under-expressed, as well as the number of times each gene is repeated within a chromosome.

“We inherit two copies of a gene—one from Mom and one from Dad—but in cancer cells, DNA segments containing important tumor suppressor genes or oncogenes can be entirely deleted or amplified. It isn’t unusual to see a chromosome within a tumor that has 11 copies of a gene, each of which may be producing a toxic protein that helps the cancer grow uncontrollably,” she says.

The data collection also includes information on RNA, which is produced by genes (DNA) and can be measured to assess genes that are dysregulated.

Researchers can search for their gene of interest, check its expression and amplification status and link that to clinical outcomes, Madhavan says. They can save their findings to their workspace on the G-DOC site and share them with collaborators. Given the approximately 20,000 protein-coding genes in the human genome, and the variety of brain cancer tumor types, “it will take a big village—really a vast metro area—of investigators to understand the bases of these tumors and to effectively develop treatments that target them.”
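To make that workflow concrete, here is a rough, hypothetical Python sketch of the kind of question a researcher might ask of such a dataset, linking one gene’s copy-number status to survival. G-DOC itself is a point-and-click web interface, and the file name, column names and amplification threshold below are invented for illustration only.

```python
# Hypothetical illustration of the query workflow described above: link one
# gene's copy-number status to survival across tumor samples. G-DOC itself is
# a point-and-click web interface; the file name, column names and threshold
# here are invented for this sketch and are not part of G-DOC.
import pandas as pd

# One row per tumor sample: gene symbol, expression z-score, DNA copy number,
# and overall survival in months (hypothetical export format).
df = pd.read_csv("rembrandt_like_example.csv")

gene = "EGFR"
subset = df[df["gene"] == gene]

# Call a tumor "amplified" if it carries more than the usual two gene copies.
amplified = subset[subset["copy_number"] > 2]
not_amplified = subset[subset["copy_number"] <= 2]

print(f"{gene}: {len(amplified)} amplified vs {len(not_amplified)} non-amplified tumors")
print("Median survival (months), amplified:    ", amplified["survival_months"].median())
print("Median survival (months), non-amplified:", not_amplified["survival_months"].median())
```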


Researchers identify new genes that may contribute to Alzheimer’s disease

Researchers from Boston University School of Medicine, working with scientists across the nation on the Alzheimer’s Disease Sequencing Project (ADSP), have discovered new genes that will further current understanding of the genetic risk factors that predispose people to the development of Alzheimer’s disease (AD). The ADSP was developed by the National Institutes of Health (NIH) in response to the National Alzheimer’s Project Act milestones to fight AD.

The incidence of AD is increasing each year, and AD is the most common cause of dementia as well as the fifth leading cause of death in those 65 years and older, according to the CDC. AD is characterized by the formation of senile plaques (extracellular deposits of β-amyloid protein) and neurofibrillary tangles (aggregates of hyper-phosphorylated tau protein) in the brain, leading to neurodegeneration, decline in memory and, eventually, death. Despite the growing prevalence of AD and its cost to society, the genetic and environmental factors that make some people more susceptible to developing AD are still not well understood.

“This large and deep gene sequencing study is an important part of identifying which variations may play a part in risk of getting Alzheimer’s or protection against it,” said Eliezer Masliah, MD, director of the Division of Neuroscience at the National Institute on Aging, part of NIH. “Big data efforts like the ADSP are really helping research move forward. Identifying rare variants could enhance our ability to find novel therapeutic targets and advance precision medicine approaches for Alzheimer’s disease.”

By comparing the exomes (the gene-coding portions of the genome) of nearly 6,000 individuals with AD and 5,000 cognitively healthy older adults, the researchers were able to find rare variations in genes that they believe may contribute to the development of common AD. These newly discovered genes suggest roles for the inflammatory response and for changes in protein production, and these combined changes are thought to contribute to the overall neurodegeneration seen in AD.
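As a toy illustration of the kind of case/control comparison described here, the short Python sketch below runs a gene-level rare-variant burden test with made-up carrier counts; the ADSP’s actual analysis is far more sophisticated than a single Fisher’s exact test.

```python
# Toy gene-level rare-variant burden comparison in the spirit of the case/control
# exome analysis described above. The carrier counts below are invented, and the
# ADSP's actual statistical pipeline is far more elaborate than a single test.
from scipy.stats import fisher_exact

n_cases, n_controls = 6000, 5000            # approximate cohort sizes from the article
carriers_cases, carriers_controls = 42, 11  # hypothetical rare-variant carriers in one gene

table = [
    [carriers_cases, n_cases - carriers_cases],
    [carriers_controls, n_controls - carriers_controls],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```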

The researchers hope their work will help bridge the knowledge gaps of the genetic architecture related to AD, which is a necessary step toward a better understanding of mechanisms leading to AD and eventual therapeutic treatments. “Many of our findings will provide insight into disease mechanisms and targets for biological experiments to gain further understanding about the role of these genes in AD pathogenesis,” explained corresponding author Lindsay A. Farrer, Ph.D., Chief of Biomedical Genetics and a professor of Medicine, Neurology, Ophthalmology, Epidemiology and Biostatistics at Boston University Schools of Medicine and Public Health.

The research team emphasizes that further research will need to be done to find other genes hidden throughout the genome, as the current paradigm is that many genes contribute to the development of AD.


Rethinking the stroke rule ‘time is brain’

In 1993, neurologist Camilo R. Gomez, MD, coined a phrase that for a quarter century has been a fundamental rule of stroke care: “Time is brain!”

“Unquestionably the longer therapy is delayed, the lesser the chance that it will be successful,” Dr. Gomez wrote in an editorial 25 years ago. “Simply stated: time is brain!”

But the “time is brain” rule is not as simple as it once seemed, Dr. Gomez now argues in his most recent paper, published in the August 2018 issue of the Journal of Stroke & Cerebrovascular Diseases (published online April 25). Dr. Gomez is a Loyola Medicine stroke specialist and a nationally known expert in minimally invasive neuroendovascular surgery.

It is still true that stroke outcomes generally are worse the longer treatment is delayed, so it remains critically important to call 911 immediately after the first signs of stroke. But, Dr. Gomez reports, the effect of time can vary greatly among patients. Depending on the blood circulation pattern in the brain, emergency treatment could greatly help one patient but be too late for another patient treated at the same time.

“It’s clearly evident that the effect of time on the ischemic process is relative,” Dr. Gomez wrote.

About 85 percent of strokes are ischemic, meaning the stroke is caused by a blood clot that blocks blood flow to an area of the brain. Starved of blood and oxygen, brain cells begin dying.

Traditionally, there was little physicians could do to halt this ischemic process, so there was no rush to treat stroke patients. But in his groundbreaking editorial, Dr. Gomez wrote that rapid improvements in imaging technologies and treatments might enable physicians to minimize stroke damage during the critical first hours.

“It is imperative that clinicians begin to look upon stroke as a medical emergency of a magnitude similar to that of myocardial infarction (heart attack) or head trauma,” he wrote.

As new treatments such as the clot-busting drug tPA became available, doctors did indeed begin treating strokes as emergencies. In select patients, intravenous tPA was shown to stop strokes in their tracks by dissolving clots and restoring blood flow. Initially, tPA was recommended in select patients within three hours of the onset of symptoms. This therapeutic window later was lengthened to 4.5 hours.

But Dr. Gomez said there should be no hard-and-fast rule governing when therapy can be given because strokes progress differently in different patients. Time is not the only important factor. Also critical is the blood circulation pattern in the brain.

After an ischemic stroke strikes, a core of brain tissue begins to die. Around this core is a penumbra of cells that continue to receive blood from surrounding arteries in a process called collateral circulation. Collateral circulation can keep cells in the penumbra alive for a time before they too begin to die. Good circulation slows down the rate at which the cells die.

In his latest project, Dr. Gomez used computational modeling to identify four distinct types of ischemic stroke based on the collateral circulation. “It is no longer reasonable to believe that the effect of time on the ischemic process represents an absolute paradigm,” Dr. Gomez wrote. “It is increasingly evident that the volume of injured tissue within a given interval after the time of onset shows considerable variability, in large part due to the beneficial effect of a robust collateral circulation.”

Dr. Gomez added that this computational modeling “represents a first step in our journey to enhance clinical decisions and predictions under conditions of considerable uncertainty.”


Why do we get nose bleeds?

Nose bleeds, or epistaxes, are often a mystery to the 60% of us who have had at least one in our lifetime. Suddenly, and without obvious cause, bright red blood starts streaming from one nostril.

Usually they’re not something to worry about, but why we get them is not always clear.

What causes nose bleeds?

The nose is very prone to bleeding. This is because of the important role it plays in warming and humidifying the air we breathe. Large numbers of small blood vessels lie just under a thin layer of skin, as a heat exchange mechanism for air going to the lungs.

A number of things can cause those vessels to rupture and the nose to bleed.

A blow to the face may lead to a nose bleed, with or without a fractured nose.

Nasal infections and dryness are known to increase the frequency of bleeds. In one study, people who got nose bleeds were seven times more likely to have staphylococcal bacteria in their nose than their peers who didn’t get nose bleeds.

Very hot or cold weather may increase the likelihood of nose bleeds, with low levels of humidity making this worse. One study from the United States showed 40% more attendances to the emergency department for nose bleeds in winter, while rates of nose bleeds in parts of Africa are higher in the hot, dry season.

Depending on the location of the ruptured blood vessels, the bleed is classified as anterior (the front part of the nose) or posterior (the back part of the nose). The blood from an anterior bleed will predominantly flow out of the nostrils, while with a posterior bleed, much of the blood will end up in the throat to be spat up or swallowed.

Occasionally, the cause of vomiting up blood can be traced to a nose bleed the person wasn’t aware of.

Higher risk groups

Nose bleeds are common in young children, in whom they are usually mild and from the front of the nose.

A study of children presenting to a US emergency department showed the majority had stopped bleeding before they were assessed by the doctor. Of the small number who required treatment, 93% settled easily with simple treatments, such as pressure to the front of the nose.

Young children also have a tendency to pick at “scabs” in the nose – consisting of dried mucus and dead cells – exposing shallow blood vessels just under the inflamed skin.

A second peak occurs in the over-65 age group, where nose bleeds may be more severe.

In this group, nose bleeds may be indicative of a more serious health problem such as bleeding disorders and chronic sinus infections. Rarer causes include local cancers and leukaemia.

Bleeding due to blood-thinning medication, or sometimes as a side effect of nasal steroid sprays, is becoming more common.

Bleeding from the back of the nose, while less common overall, is more likely to occur in older people. It’s also likely to be more difficult to control and may continue for many hours.

Extensive blood loss may occasionally lead to anaemia or require a transfusion. The risk of death from nose bleeds is extremely low. Out of 2.4 million deaths in the US in 1999, four were due to nose bleeds. Often the nose bleed has simply complicated other existing medical conditions.

What to do

Not knowing how long your nose will continue to bleed and the amount of blood lost can be disconcerting. But people generally overestimate blood loss.

Most first aid recommendations are fairly consistent. They suggest the patient should be quietly seated, leaning forward (to avoid swallowing blood) and applying pressure to the front, soft part of the nose.

If the bleeding is severe, persists for more than 30 minutes, or is caused by a blow to the head or side effects of medication, see a doctor.

Doctors will use various methods to apply pressure directly to the site of the bleeding. These include packing the nose with a long thread of ribbon gauze material soaked in medication to constrict blood vessels, or the use of balloon catheters (small balloons inserted into the nostrils).

If you have recurrent minor bleeds, try nasal decongestant sprays or nasal lubricants such as petroleum jelly (Vaseline).

The other approach is to seal the bleeding vessels with chemical (such as silver nitrate applicators) or heat cauterisation. But cauterisation is painful and a systematic review of treatments showed it’s no more effective than antibiotic cream or petroleum jelly.

Sesame oil based nasal sprays have shown benefit in small trials in the lubrication of dry noses. This may assist in controlling crusting that’s associated with nose bleeds. But sesame oils are yet to be comprehensively assessed for this purpose.


Team finds missing immune cells that could fight lethal brain tumors

Glioblastoma brain tumors can have an unusual effect on the body’s immune system, often causing a dramatic drop in the number of circulating T-cells that help drive the body’s defenses.

Where the T-cells go has been unclear, even as immunotherapies are increasingly employed to stimulate the body’s natural ability to fight invasive tumors.

Now researchers at Duke Cancer Institute have tracked the missing T-cells in glioblastoma patients. They found them in abundance in the bone marrow, locked away and unable to function because of a process the brain stimulates in response to glioblastoma, to other tumors that metastasize in the brain and even to injury.

The findings, published online Aug. 13 in the journal Nature Medicine, open a new area of exploration for adjunct cancer drugs that could free trapped T-cells from the bone marrow, potentially improving the effectiveness of existing and new immunotherapies.

“Part of the problem with all these immunotherapies—particularly for glioblastoma and other tumors that have spread to the brain—is that the immune system is shot,” said lead author Peter E. Fecci, M.D., Ph.D., director of the Brain Tumor Immunotherapy Program in Duke’s Department of Neurosurgery. “If the goal is to activate the T-cells and the T-cells aren’t there, you’re simply delivering therapy into a black hole.”

Fecci said the research team began its search for the missing T-cells after observing that many newly diagnosed glioblastoma patients have the equivalent immune systems of people with full-blown AIDS, even before they undergo surgery, chemotherapy and radiation.

Where most people have a CD4 “helper” T-cell count upwards of 700-1,000, a substantial proportion of untreated glioblastoma patients have counts of 200 or less, marking poor immune function that makes them susceptible to all manner of infections and potentially to progression of their cancer.

Initially, the researchers hunted for the missing T-cells in the spleen, which is known to pathologically harbor the cells in certain disease states. But the spleens were abnormally small, as were the thymus glands—another potential T-cell haven. They decided to check the bone marrow to see if production was somehow stymied and instead found hordes of T-cells.

“It’s totally bizarre—this is not seen in any disease state,” Fecci said. “This appears to be a mechanism that the brain possesses for keeping T-cells out, but it’s being usurped by tumors to limit the immune system’s ability to attack them.”

When examining the stashed T-cells, Fecci and colleagues found that they lacked a receptor on the cell surface called S1P1, which essentially serves as a key that enables them to leave the bone marrow and lymph system. Lacking that key, they instead get locked in, unable to circulate and fight infections, let alone cancer.

Fecci said the research team is now working to learn exactly how the brain triggers the dysfunction of this S1P1 receptor. He said the current theory is that the receptor somehow is signaled to retract from the cell surface into the cell interior.

“Interestingly, when we restore this receptor to T-cells in mice, the T-cells leave the bone marrow and travel to the tumor, so we know this process is reversible,” Fecci said.

His team is collaborating with Duke scientist Robert Lefkowitz, M.D., whose 2012 Nobel Prize in Chemistry honored discovery of the class of receptors to which S1P1 belongs. They are working to develop molecules that would restore the receptors on the cells’ surface.


Machine-learning system determines the fewest, smallest doses that could still shrink brain tumors

MIT researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma, the most aggressive form of brain cancer.

Glioblastoma is a malignant tumor that appears in the brain or spinal cord, and prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.

In a paper being presented next week at the 2018 Machine Learning for Healthcare conference at Stanford University, MIT Media Lab researchers detail a model that could make dosing regimens less toxic but still effective. Powered by a “self-learning” machine-learning technique, the model looks at treatment regimens currently in use, and iteratively adjusts the doses. Eventually, it finds an optimal treatment plan, with the lowest possible potency and frequency of doses that should still reduce tumor sizes to a degree comparable to that of traditional regimens.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced the potency to a quarter or half of nearly all the doses while maintaining the same tumor-shrinking potential. Many times, it skipped doses altogether, scheduling administrations only twice a year instead of monthly.

“We kept the goal, where we have to help patients by reducing tumor sizes but, at the same time, we want to make sure the quality of life—the dosing toxicity—doesn’t lead to overwhelming sickness and harmful side effects,” says Pratik Shah, a principal investigator at the Media Lab who supervised this research.

The paper’s first author is Media Lab researcher Gregory Yauney.

Rewarding good choices

The researchers’ model uses a technique called reinforcement learning (RL), a method inspired by behavioral psychology, in which a model learns to favor behavior that leads to a desired outcome.

The technique comprises artificially intelligent “agents” that complete “actions” in an unpredictable, complex environment to reach a desired “outcome.” Whenever it completes an action, the agent receives a “reward” or “penalty,” depending on whether the action works toward the outcome. Then, the agent adjusts its actions accordingly to achieve that outcome.

Rewards and penalties are basically positive and negative numbers, say +1 or -1. Their values vary by the action taken, calculated by probability of succeeding or failing at the outcome, among other factors. The agent is essentially trying to numerically optimize all actions, based on reward and penalty values, to get to a maximum outcome score for a given task.
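For readers unfamiliar with that bookkeeping, here is a minimal, generic Python sketch of an agent numerically optimizing its actions from +1/-1 rewards; the actions, probabilities and learning rate are invented, and this is not the MIT group’s model.

```python
# A toy illustration of reward/penalty-driven learning: the agent keeps a
# running value estimate for each action, earns +1 or -1 from a made-up
# environment, and gradually favors the action that pays off more often.
# This is generic RL bookkeeping, not the MIT group's dosing model.
import random

actions = ["A", "B"]
value = {a: 0.0 for a in actions}      # current estimate of each action's worth
learning_rate, epsilon = 0.1, 0.2      # update step size and exploration rate

def reward(action):
    # Hypothetical environment: action "B" succeeds more often, so it returns
    # a +1 reward more frequently and a -1 penalty less frequently.
    return 1 if random.random() < (0.7 if action == "B" else 0.4) else -1

for _ in range(1000):
    # Mostly exploit the best-looking action; occasionally explore at random.
    a = random.choice(actions) if random.random() < epsilon else max(value, key=value.get)
    r = reward(a)
    value[a] += learning_rate * (r - value[a])   # nudge the estimate toward the observed reward

print(value)   # the estimate for "B" should end up higher than for "A"
```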

The approach was used to train DeepMind’s AlphaGo program, which in 2016 made headlines for beating one of the world’s best human players at the game Go. It’s also used to train driverless cars in maneuvers, such as merging into traffic or parking, where the vehicle will practice over and over, adjusting its course, until it gets it right.

The researchers adapted an RL model for glioblastoma treatments that use a combination of the drugs temozolomide (TMZ) and procarbazine, lomustine, and vincristine (PVC), administered over weeks or months.

The model’s agent combs through traditionally administered regimens. These regimens are based on protocols that have been used clinically for decades, informed by animal testing and various clinical trials. Oncologists use these established protocols to decide how large a dose to give patients based on their weight.

As the model explores the regimen, at each planned dosing interval—say, once a month—it decides on one of several actions. It can, first, either initiate or withhold a dose. If it does administer, it then decides if the entire dose, or only a portion, is necessary. At each action, it pings another clinical model—often used to predict a tumor’s change in size in response to treatments—to see if the action shrinks the mean tumor diameter. If it does, the model receives a reward.

However, the researchers also had to make sure the model doesn’t just dish out a maximum number and potency of doses. Whenever the model chooses to administer all full doses, therefore, it gets penalized, so it instead chooses fewer, smaller doses. “If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” Shah says. “Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”

This represents an “unorthodox RL model, described in the paper for the first time,” Shah says, that weighs potential negative consequences of actions (doses) against an outcome (tumor reduction). Traditional RL models work toward a single outcome, such as winning a game, and take any and all actions that maximize that outcome. On the other hand, the researchers’ model, at each action, has flexibility to find a dose that doesn’t necessarily solely maximize tumor reduction, but that strikes a perfect balance between maximum tumor reduction and low toxicity. This technique, he adds, has various medical and clinical trial applications, where actions for treating patients must be regulated to prevent harmful side effects.
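One way to picture such a penalized reward is the short Python sketch below: shrinkage in mean tumor diameter earns reward, each administered dose subtracts a toxicity penalty, and with these invented numbers a partial dose scores better than the full dose. The tumor-response function and weights are placeholders, not terms from the paper.

```python
# Sketch of a dose-penalized reward of the kind described above: shrinkage in
# mean tumor diameter earns reward, while every administered dose subtracts a
# toxicity penalty proportional to its size. The response function and weights
# are invented for illustration; the paper's tumor model and penalty terms differ.
import math

DOSE_OPTIONS = [0.0, 0.25, 0.5, 1.0]   # withhold, quarter, half, or full dose

def tumor_response(diameter_cm, dose_fraction):
    # Hypothetical stand-in for the clinical tumor-growth model the agent "pings":
    # untreated tumors grow slightly, and the killing effect saturates with dose.
    growth, kill = 0.02, 0.15
    return max(0.0, diameter_cm * (1.0 + growth - kill * math.sqrt(dose_fraction)))

def reward(diameter_before, diameter_after, dose_fraction, toxicity_weight=0.3):
    shrinkage = diameter_before - diameter_after
    return shrinkage - toxicity_weight * dose_fraction

diameter = 3.0   # cm, hypothetical mean tumor diameter at one dosing interval
for dose in DOSE_OPTIONS:
    new_diameter = tumor_response(diameter, dose)
    print(f"dose={dose:.2f}  new diameter={new_diameter:.3f} cm  "
          f"reward={reward(diameter, new_diameter, dose):+.3f}")
# With these made-up numbers the half dose scores highest, showing how a
# penalized reward can favor a smaller dose over the maximum dose.
```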

Optimal regimens

The researchers trained the model on 50 simulated patients, randomly selected from a large database of glioblastoma patients who had previously undergone traditional treatments. For each patient, the model conducted about 20,000 trial-and-error test runs. Once training was complete, the model learned parameters for optimal regimens. When given new patients, the model used those parameters to formulate new regimens based on various constraints the researchers provided.

The researchers then tested the model on 50 new simulated patients and compared the results to those of a conventional regimen using both TMZ and PVC. When given no dosage penalty, the model designed nearly identical regimens to human experts. Given small and large dosing penalties, however, it substantially cut the doses’ frequency and potency, while reducing tumor sizes.

The researchers also designed the model to treat each patient individually, as well as patients in a single cohort, and achieved similar results (medical data for each patient was available to the researchers). Traditionally, the same dosing regimen is applied to groups of patients, but differences in tumor size, medical histories, genetic profiles, and biomarkers can all change how a patient is treated. These variables are not considered in traditional clinical trial designs and other treatments, often leading to poor responses to therapy in large populations, Shah says.


Primate study offers clues to evolution of speech

New research examining the brains and vocal repertoires of primates offers important insight into the evolution of human speech.

The vocal tract and larynx are similar in form and function amongst virtually all terrestrial mammals, including humans. However, relative to humans, non-human primates produce an extremely limited range of vocalisations.

Published in the journal Frontiers in Neuroscience, the new research investigates whether the reason primates are incapable of producing speech is because they lack the brain mechanisms needed to control and coordinate vocal production.

The study focused on two particular features of the brain: the cortical association areas that govern voluntary control over behaviour; and the brainstem nuclei that are involved in the neural control of muscles responsible for vocal production.

The academics, from Anglia Ruskin University and Stony Brook University, found a positive correlation between the relative size of cortical association areas and the size of the vocal repertoire of primates, which can range from just two call types in pottos to at least 38 different calls made by bonobos.

Lead author Dr. Jacob Dunn, Senior Lecturer in Zoology at Anglia Ruskin University, said: “This study shows, for the first time, a significant positive correlation between the vocal repertoire and the relative size of the parts of the brain responsible for voluntary control over behaviour.


“Cortical association areas are found within the neocortex and are key to the higher cognitive processing capacities considered to be the foundation for the complex forms of behaviour observed in primates. Interestingly, the overall size of the primate’s brain was not linked to the vocal repertoire of that species, only the relative size of these specific areas.

“We also found a positive relationship between the relative volumes of the cortical association areas and the hypoglossal nucleus in apes, both of which are significantly bigger in these species. The hypoglossal nucleus is associated with the cranial nerve that controls the muscles of the tongue, thus suggesting increased voluntary control over the tongue in our closest relatives.”


Ebola virus experts discover powerful, new approach for future therapeutics

A one-two punch of powerful antibodies may be the best way to stop Ebola virus, reports an international team of scientists in the journal Cell. Their findings suggest new therapies should disable Ebola virus’s infection machinery and spark the patient’s immune system to call in reinforcements.

“This study presents results from an unprecedented international collaboration and demonstrates how 43 previously competing labs can together accelerate therapeutics and vaccine design,” says Erica Ollmann Saphire, Ph.D., professor at Scripps Research and director of the Viral Hemorrhagic Fever Immunotherapeutic Consortium (VIC).

From 2013-2016, West Africa faced the deadliest Ebola outbreak the world has ever seen. By the time the outbreak was declared over, 11,325 people had died. The VIC is an international group of the world’s leading virologists, immunologists, systems biologists and structural biologists working to stop an outbreak on that scale from ever striking again.

The VIC researchers aim to understand which Ebola-fighting antibodies are best—and why. The hope is that the most effective antibodies can be combined in a therapeutic “cocktail.” Unlike an Ebola vaccine, these cocktails could be given to those already infected, which is important for stopping a disease that tends to emerge unexpectedly in remote locations.

Ollmann Saphire and her colleagues in the VIC have published more than 40 studies in just the last five years. This landmark study is the first-ever side-by-side comparison of 171 antibodies against Ebola virus and other related viruses, known as filoviruses. All antibodies in the panel were donated by different labs around the world, and many had not been previously characterized in such extensive detail.

“Through the VIC, we could test a larger pool of antibodies in parallel, which increased the potential to detect statistically significant relationships between antibody features and protection,” says Saphire. “We used this global pool of antibodies to evaluate, and streamline, the research pipeline itself.”

In addition to identifying links between antibody target locations and activity, VIC researchers tested this huge pool of antibodies to reveal which antibodies “neutralized” the virus, why neutralization assays so often disagree, and whether or not neutralization in test tubes adequately predicted how well these antibodies would protect live animals from Ebola virus infection. Unexpectedly, neutralization alone was not always associated with the protective ability of an antibody.

Notably, the scientists found nine antibodies that protected mice from infection without neutralizing the virus in test tubes. These antibodies likely fight infection by interacting with an infected person’s immune system, helping orchestrate a better immune response to the virus.

This “immune effector” activity is featured in the team’s companion study published simultaneously in Cell Host & Microbe. “The ability to evoke an immune response will likely represent a new avenue of study for therapeutic antibodies for Ebola virus infection,” says Sharon Schendel, project manager for the VIC and science writer in the Saphire lab.
