Stanford University researchers have discovered a new cellular signal that cancer cells appear to use to evade detection and destruction by cells of the immune system, macrophages in particular. Studies by the team in mouse models have paved the way for the development of new therapeutic strategies: the scientists showed that blocking this signal in mice implanted with human cancers allows immune cells to attack the cancers.
Earlier studies have shown that cancer cells evade destruction by macrophages by overexpressing anti-phagocytic surface proteins, or ‘don’t eat me’ signals, such as CD47, programmed cell death ligand 1 (PD-L1) and the beta-2 microglobulin subunit of the major histocompatibility class I complex (B2M). Antibodies that block CD47 are in clinical trials, and cancer treatments that target PD-L1 are already in clinical use.
The study, led by Amira Barkal, an MD-PhD student and the paper’s lead author, and Irving Weissman, MD, professor of pathology and of developmental biology, director of the Stanford Institute for Stem Cell Biology and Regenerative Medicine and the senior author, showed that CD24 can be a dominant innate immune checkpoint in ovarian cancer and breast cancer, and is a promising target for cancer immunotherapy.

Looking for additional signals

The scientists began by looking for proteins that were produced more highly in cancers than in the tissues from which the cancers arose. “You know that if cancers are growing in the presence of macrophages, they must be making some signal that keeps those cells from attacking the cancer,” Barkal said. “You want to find those signals so you can disrupt them and unleash the full potential of the immune system to fight the cancer.”
The search showed that many cancers produce an abundance of CD24 compared with normal cells and surrounding tissues. In further studies, the scientists showed that the macrophage cells that infiltrate the tumor can sense the CD24 signal through a receptor called SIGLEC-10. They also showed that if they mixed cancer cells from patients with macrophages in a dish, and then blocked the interaction between CD24 and SIGLEC-10, the macrophages would start gorging on cancer cells like they were at an all-you-can-eat buffet. “When we imaged the macrophages after treating the cancers with CD24 blockade, we could see that some of them were just stuffed with cancer cells,” Barkal said.
Finally, they implanted human breast cancer cells in mice. When CD24 signaling was blocked, the animals’ scavenger macrophages attacked the cancer.
Of particular interest was the discovery that ovarian and triple-negative breast cancer, both of which are very hard to treat, were highly affected by blocking the CD24 signaling. “This may be a vulnerability for those very dangerous cancers,” Barkal said.
Complementary to CD47?

The other interesting discovery was that CD24 signaling often seems to operate in a complementary way to CD47 signaling. Some cancers, such as blood cancers, are highly susceptible to CD47-signaling blockade but not to CD24-signaling blockade, whereas in other cancers, such as ovarian cancer, the opposite is true. This raises the hope that most cancers will be susceptible to attack by blocking one of these signals, and that cancers may be even more vulnerable when more than one “don’t eat me” signal is blocked.
“There are probably many major and minor ‘don’t eat me’ signals, and CD24 seems to be one of the major ones,” Barkal said. The researchers now hope that therapies to block CD24 signaling will follow in the footsteps of anti-CD47 therapies, being tested first for safety in preclinical trials, followed by safety and efficacy trials in humans.
For Weissman, the discovery of a second major “don’t eat me” signal validates a scientific approach that combines basic and clinical research. “These features of CD47 and CD24 were discovered by graduate students in MD-PhD programs at Stanford along with other fellows,” Weissman said. “These started as fundamental basic discoveries, but the connection to cancers and their escape from scavenger macrophages led the team to pursue preclinical tests of their potential. This shows that combining investigation and medical training can accelerate potential lifesaving discoveries.”
1. Original article: Stanford University School of Medicine
Note: The article has been edited for style and length
2. Journal article: Amira A. Barkal, Rachel E. Brewer, Maxim Markovic, Mark Kowarsky, Sammy A. Barkal, Balyn W. Zaro, Venkatesh Krishnan, Jason Hatakeyama, Oliver Dorigo, Layla J. Barkal, Irving L. Weissman. CD24 signalling through macrophage Siglec-10 is a target for cancer immunotherapy. Nature, 2019; DOI: 10.1038/s41586-019-1456-0
3. Image source: Stanford University School of Medicine
Love affects us all, and when it comes to food there is no greater love than the love for your favorite cuisine. But much like in our lives, not all love is good for us, and sometimes love costs us more dearly than we expect. No, I am not talking about breakups: one must pay attention to what one puts on one’s plate, and “love is blind” logic does not work here, especially if it increases the chances of falling prey to a deadly disease that might be avoided through healthier lifestyle choices.
A recent study published in the International Journal of Cancer by Jamie J. Lo et al. reveals that red meat consumption may increase the risk of breast cancer, whereas poultry consumption may confer protection against it. The researchers collected information on consumption of different meat categories and meat cooking practices from 42,012 Sister Study participants followed for an average of 7.6 years, and estimated associations between meat type and meat mutagens and invasive breast cancer risk using multivariable Cox proportional hazards regression. During follow-up, 1,536 invasive breast cancers were diagnosed at least one year after enrollment.

The study revealed that women who consumed the highest amount of red meat had a 23% higher risk compared with women who consumed the lowest amount. Conversely, increasing poultry consumption was associated with decreased invasive breast cancer risk: women with the highest consumption had a 15% lower risk than those with the lowest. Risk was reduced even further for women who substituted poultry for red meat. The findings remained consistent even when the analyses controlled for known breast cancer risk factors and potential confounders such as race, socioeconomic status, obesity, physical activity, alcohol consumption, and other dietary factors. No associations were observed for cooking practices or for chemicals formed when cooking meat at high temperature.

Senior author Dale P. Sandler, PhD, of the National Institute of Environmental Health Sciences, commented: “While the mechanism through which poultry consumption decreases breast cancer risk is not clear, our study does provide evidence that substituting poultry for red meat may be a simple change that can help reduce the incidence of breast cancer.”
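The study’s headline numbers, such as a “23% higher risk”, come from Cox regression hazard ratios, but the intuition can be illustrated with a crude incidence-rate comparison. The sketch below is a minimal toy calculation: the case counts and person-years are invented for illustration, and unlike the study’s multivariable Cox models it does not adjust for any confounders.

```python
# Crude incidence-rate ratio between two hypothetical intake groups.
# NOTE: counts and person-years are invented for illustration; the Sister
# Study analysis used multivariable Cox proportional hazards regression,
# which additionally adjusts for confounders such as obesity and alcohol use.

def incidence_rate(cases: int, person_years: float) -> float:
    """Cases per person-year of follow-up."""
    return cases / person_years

high_red_meat_rate = incidence_rate(120, 25_000)  # hypothetical highest-intake group
low_red_meat_rate = incidence_rate(98, 25_100)    # hypothetical lowest-intake group

# A ratio above 1 means higher risk in the high-intake group;
# a ratio of about 1.23 would correspond to "23% higher risk".
rate_ratio = high_red_meat_rate / low_red_meat_rate
print(f"crude rate ratio = {rate_ratio:.2f}")
```

In the published analysis, the analogous quantity is a hazard ratio estimated while controlling for the risk factors listed above, so the two numbers are not interchangeable; the sketch only shows what the ratio itself means.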
Note: Content edited for style and length
1. Jamie J. Lo, Yong‐Moon Mark Park, Rashmi Sinha, Dale P. Sandler. Association between meat consumption and risk of breast cancer: Findings from the Sister Study. International Journal of Cancer, 2019; DOI: 10.1002/ijc.32547
2. Image courtesy: Pixabay
Scientists at Tokyo Institute of Technology have imaged live T cells to reveal the role of CLIP-170 in T-cell activation, a critical process in the immune response.
When bacteria or viruses enter the body, proteins on their surfaces are recognized and processed to activate T cells, white blood cells with critical roles in fighting infections. During T-cell activation, a molecular complex known as the microtubule-organizing center (MTOC) moves to a central location on the surface of the T-cell. Microtubules have several important functions, including determining cell shape and cell division. Thus, MTOC repositioning plays a critical role in the immune response initiated by activated T cells.
In a recent publication in Scientific Reports, the first authors Lim Wei Ming and Yuma Ito, along with their colleagues at Tokyo Institute of Technology (Tokyo Tech), provide compelling evidence that a key protein responsible for the relocation of the MTOC in activated T cells is a molecule known as CLIP-170, a microtubule-binding protein.
The researchers used live-cell imaging to uncover the mechanism of MTOC relocation. “The use of dual-color fluorescence microscopic imaging of live T cells allowed us to visualize and quantify the molecular interactions and dynamics of proteins during MTOC repositioning,” notes Dr. Sakata-Sogawa. This technique allowed them to confirm that phosphorylation of CLIP-170 is involved in movement of the MTOC to the center of the contacted cell surface (Fig. 1); the findings were confirmed using both cells expressing a phosphodeficient CLIP-170 mutant and cells in which AMPK, the kinase that phosphorylates and activates CLIP-170, was impaired. Further imaging showed that CLIP-170 is essential for directing dynein, a motor protein, to the plus ends of microtubules and for anchoring dynein in the center of the cell surface (Fig. 2). Dynein then pulls on the microtubules to reposition the MTOC to its new location in the center.
“These findings shed new light on microtubule-binding proteins and microtubule dynamics,” explains Dr. Tokunaga. Such research is critical because a deeper understanding of T-cell activation in the immune response could lead to the development of safer methods for cancer immunotherapy: presentation of CTLA-4, which was discovered by a 2018 Nobel Prize laureate and is used as a target of such therapy, is also regulated by MTOC repositioning.
Figure 1. CLIP-170 phosphorylation regulates MTOC repositioning and full activation of T cells.
Fluorescence live-cell imaging of the wild-type CLIP-170-TagRFP-T (a,b) or a phosphodeficient S312A mutant CLIP-170-TagRFP-T (c) and dynein light chain (DLC)-mEGFP co-expressed in T cells. Increased dynein relocation to the center, which is responsible for MTOC repositioning, requires both stimulation and CLIP-170 phosphorylation. The boxed regions in the merged images are enlarged (right). Scale bars: 5 μm (left, 2nd left, merged) and 2 μm (right).
Figure 2: Schematic model of the key role of CLIP-170 in MTOC repositioning during T cell activation by regulating dynein relocation.
In resting T cells, the majority of dynein is immobile on the contacted cell surface and is located in the peripheral region. T cell stimulation increases the fraction of dynein undergoing minus-end-directed motility (“mobilise”), which is a “weakly processive” state. Then, the dynein is anchored to the surface (“anchor”). Alongside this, stimulation induces some fraction of dynein to colocalize with CLIP-170 and dynactin and follow plus-end tracking (“recruit”). After tracking for one or two micrometers, the dynein is released from the complex and anchored (“release”). As a result, dynein relocation increases toward the center region of the contact surface, the immunological synapse, where “anchored” dynein molecules are immobile and/or weakly processive at a velocity in good agreement with the velocity of MTOC repositioning. “Anchored” and weakly processive dynein pulls the microtubules and the MTOC (“pull”), which causes MTOC repositioning near the immunological synapse and full activation of T cells. Phosphorylation of CLIP-170 is essential for dynein recruitment to the plus end and for dynein relocation.
1. Original article source: School of Life Science and Technology, Tokyo Institute of Technology
2. Research paper: Wei Ming Lim, Yuma Ito, Kumiko Sakata-Sogawa* & Makio Tokunaga*. CLIP-170 is essential for MTOC repositioning during T cell activation by regulating dynein localisation on the cell surface. Scientific Reports, 2018; DOI: 10.1038/s41598-018-35593-z
3. Image source: School of Life Science and Technology, Tokyo Institute of Technology
New research demonstrates that an antiparasitic drug turns human blood lethal for malaria-carrying mosquitoes, a promising development for malaria control. Preliminary analysis showed that the drug reduced malaria cases in children under 5 by 16%. The trends are encouraging, and the drug might eventually become part of national malaria control programs.
Ivermectin, a well-known antiparasitic drug used to treat a wide range of parasitic infestations, from head lice and scabies to river blindness (onchocerciasis), strongyloidiasis, trichuriasis, and lymphatic filariasis, may have another hidden benefit.
Only female mosquitoes of the genus Anopheles are capable of transmitting the malaria pathogen, a protozoan of the genus Plasmodium, which undergoes a series of developmental steps before arriving at the mosquito’s salivary glands, from where it is ultimately transmitted to the human host during a blood meal.
Malaria is a well-known killer in the tropical regions of the globe. The data on malaria infection and transmission are staggering: each year the disease infects more than 200 million people and causes some 429,000 deaths, and the situation seems to be getting worse, because despite billions spent on malaria eradication programs, progress appears to have reached a plateau. Meanwhile, mosquitoes are becoming increasingly resistant to insecticides, which is forcing researchers to consider all sorts of new solutions, such as a malaria vaccine or genetically engineering mosquitoes so that they wipe themselves out.
Previous studies have found that malaria-carrying mosquitoes die after sucking blood from individuals who have taken ivermectin; indeed, researchers have known for decades that the drug also kills insects that ingest it. Brian Foy, a medical entomologist at Colorado State University in Fort Collins, believes that this makes it a prime candidate in the fight against malaria. If enough people in an area have ivermectin in their blood, says Foy, some of the female mosquitoes that bite them will die, while others will be too weakened to pass on the malaria parasite. Foy has shown in lab studies that the approach holds promise, and he co-founded a research network last year to study the concept further.

To show that ivermectin actually has an impact on malaria in the field, Foy teamed up with Roch Dabiré, a researcher at the Institute of Health Studies in Bobo-Dioulasso, Burkina Faso. The scientists went to eight villages near the town of Diébougou, in the southwest of the country. At the start of the trial, in July, the population of all villages received one dose of ivermectin and another drug, albendazole; this standard combination is given twice yearly around Burkina Faso to control elephantiasis and soil-transmitted worms. In four of the villages, this was followed by ivermectin tablets every 3 weeks for the entire population except pregnant women and children under 90 centimeters tall, who may be at higher risk of side effects. The other four villages served as controls; they received no drugs after the first dose.
The trial is still ongoing and will conclude in November. But an interim analysis presented today by Foy and Dabiré at the annual meeting of the American Society of Tropical Medicine and Hygiene suggests that the drug is already having an impact. Among children under the age of 5—the group at the highest risk of severe disease and death from malaria—there were 16% fewer cases in the villages that received ivermectin at 3-week intervals. That translates to 94 cases averted so far this season in the four villages.
The full results will take some time to analyze, and the study will need to be repeated at a larger scale to see if the findings hold up, Dabiré says. If they do, ivermectin could be another weapon in the antimalaria arsenal, Foy says. He adds that it wouldn’t replace other measures, such as insecticide-treated bed nets.
It’s an interesting approach that should be explored further, says Michel Boussinesq, who studies ivermectin at the Institute of Research for Development in Montpellier, France. But the need to give ivermectin every 3 weeks could be a logistical problem, he says. Boussinesq and his colleagues are working on an ivermectin implant for animals that instead releases the drug slowly and offers long-term protection.
Such implants aren’t likely to be acceptable for use in humans, Foy says—but he points out that ivermectin would only be given during the rainy season, when malaria mosquitoes are active. The season lasts about 6 months in the region where the study took place and even less than that farther north, in the Sahel region. “I think that’s feasible,” Foy says.
Willem Takken, a medical entomologist at Wageningen University and Research Centre in the Netherlands, sees another, fundamental problem: Mosquitoes have developed resistance against almost any chemical that humans have thrown at them. He says that’s bound to happen with ivermectin, too. That’s why, despite the encouraging data, “I find it hard to get enthusiastic about this,” Takken says. He believes that nonchemical approaches, such as mosquito traps or bacteria that render mosquitoes unable to transmit pathogens, hold more long-term promise.
A recent study by a group of scientists from the Intermountain Healthcare Heart Institute in Salt Lake City identified eight new gene mutations that may contribute to idiopathic dilated cardiomyopathy, a form of heart disease not caused by known external influences.
In the new study from the Intermountain Healthcare Heart Institute in Salt Lake City, researchers led by Jeffrey L. Anderson, MD, identified eight new gene mutations that may cause or contribute to idiopathic dilated cardiomyopathy, a form of heart disease not caused by known external influences such as high blood pressure, obesity, smoking, or diseased coronary arteries. The study found that in at least 40 percent of the enrolled patients, the disease had an underlying genetic cause that leads to the muscle in the heart’s major pumping chamber (the left ventricle) being too weak and thin to function properly, causing heart failure.
“Although many mutations contributing to non-ischemic dilated cardiomyopathy have been identified, there remains a large gap in our knowledge of its heritability. The more we can learn about what’s causing the condition, the better we can identify and treat it,” said Jeffrey L. Anderson, MD, principal investigator of the study, and a researcher at the Intermountain Healthcare Heart Institute. “If it’s passed on in families, we’ll be able to identify those who might be at risk for developing heart disease and work to prevent it, diagnose it, and begin treatment earlier.”
The study team is going to present the findings from the study at the American College of Cardiology’s Annual Scientific Session in New Orleans on March 18, 2019.
A quarter to one-third of idiopathic dilated cardiomyopathy patients will need a mechanical support device, a heart transplant, or will die within five years, Dr. Anderson noted, so this is a very serious condition.
In the study, researchers looked at genetic samples of 231 patients with idiopathic dilated cardiomyopathy, evaluated in an Intermountain Medical Center Specialty Clinic who volunteered to enter blood samples into the Intermountain Healthcare INSPIRE Registry and DNA Bank, which is the system’s collection of biological samples, clinical information, and laboratory data from consenting patients who are diagnosed with any of a number of healthcare-related conditions.
In collaboration with Intermountain’s Precision Genomics laboratory, researchers sequenced patients’ DNA, focusing on the titin (TTN) gene, which codes for the body’s largest protein.
“That protein acts as a spring in your heart muscle,” said Dr. Anderson. “It enhances the passive elasticity of the muscle and also limits how much you can stretch it.” Previous studies have already observed variants of TTN in patients with idiopathic dilated cardiomyopathy, but the story has been incomplete.
Now, in this new study, Intermountain researchers identified 24 patients with TTN variants, eight of which hadn’t been seen or documented before. They also confirmed the presence of seven variants that had been discovered and reported previously. The new variants are all of the “truncating” variety; that is, they lead to a shortened protein, which is predicted to malfunction in its role of maintaining the integrity of heart muscle function.
These new variants, Dr. Anderson said, still will require functional testing and clinical validation, but they likely will lead to further expansion of the known spectrum of genes that predispose to idiopathic dilated cardiomyopathy.
The addition of these variants to the current list of known pathological heart muscle protein mutations will help to close the still large gap in our knowledge of the heritability of heart muscle disease and in doing so can lead to earlier diagnosis and more effective prevention and treatment.
The study was funded by the Intermountain Foundation and an in-kind grant from Intermountain Precision Genomics.
The compounds in frying oils that are repeatedly reheated to high temperatures may trigger cell proliferation and metastases in breast tumors, scientists in food science and human nutrition at the University of Illinois found in a new mouse-model study.
A common scene at roadside food stalls is cooking oil being reused for frying, the prime reason being cost effectiveness. Its risks are often discussed in closed circles, but concrete steps are rarely taken. A consciousness is now arising, however, and some regulatory bodies, such as India’s FSSAI, have issued directives against reusing cooking oil.
In the present study, published in the journal Cancer Prevention Research, the scientists observed that thermally abused frying oil (cooking oil that has been repeatedly reheated to high temperatures) may act as a toxicological trigger that promotes tumor cell proliferation, metastasis and changes in lipid metabolism. The study, conducted in mice, also suggests that consuming the chemical compounds found in thermally abused cooking oil may trigger genetic changes that promote the progression of late-stage breast cancer.
In the study, the mice consumed a low-fat diet for one week and were then divided into two groups: one group was fed fresh, unheated soybean oil, while the other consumed thermally abused oil for the next 16 weeks. Soybean oil was used because of its common use by the food service industry in deep frying.
The team of research scientists simulated late-stage breast cancer by injecting 4T1 breast cancer cells into a tibia of each mouse. The 4T1 cells are an aggressive form of the disease that can spontaneously metastasize to multiple distant sites in the body, including the lungs, liver and lymph nodes, according to the study.
Twenty days after inoculation with the tumor cells, the primary tumors in the tibias of the mice that consumed the thermally abused oil had more than four times as much metastatic growth as the mice that consumed the fresh soybean oil. And when the researchers examined the animals’ lungs, they found more metastases among those that consumed the thermally abused oil.
“There were twice as many tumors in the lung, and they were more aggressive and invasive,” said William G. Helferich, a professor of food science and human nutrition, who led the research.
Food chemistry professor Nicki J. Engeseth, the acting head of the department, co-wrote the paper. Graduate student Ashley W. Oyirifi and U. of I. alumnus Anthony Cam were the lead authors.
“I just assumed these nodules in the lungs were little clones – but they weren’t. They’d undergone transformation to become more aggressive. The metastases in the fresh-oil group were there, but they weren’t as invasive or aggressive, and the proliferation wasn’t as extensive,” Helferich said.
In examining both groups of mice, the scientists found that the metastatic lung tumors in those that consumed thermally abused frying oil expressed significantly more of a key protein, Ki-67, a marker of cell proliferation.
Gene expression in these animals’ livers was altered as well. When the researchers conducted RNA sequencing analysis, they found 455 genes in which expression was at least two times greater – or, conversely, two times lower – than in mice that consumed the fresh oil.
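A two-fold expression cutoff of the kind described above is conventionally applied on the log2 scale. As a minimal sketch of that filtering step, with invented expression values for three hypothetical genes rather than the study’s actual RNA-seq data:

```python
import math

# Hypothetical mean liver expression values (arbitrary units) for three genes
# in the fresh-oil and thermally-abused-oil groups; the real analysis ran on
# RNA sequencing data covering thousands of genes.
fresh = {"geneA": 100.0, "geneB": 80.0, "geneC": 50.0}
abused = {"geneA": 250.0, "geneB": 85.0, "geneC": 20.0}

altered = []
for gene in fresh:
    log2_fold_change = math.log2(abused[gene] / fresh[gene])
    # keep genes at least 2x up- or down-regulated, i.e. |log2 fold change| >= 1
    if abs(log2_fold_change) >= 1:
        altered.append(gene)

print(altered)  # geneA (2.5x up) and geneC (2.5x down) pass the cutoff
```

Applied genome-wide with appropriate statistical testing, a filter of this shape is what yields a list like the 455 altered genes reported in the study.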
The altered gene pathways were associated with oxidative stress and the metabolism of foreign substances, Oyirifi said. When oil is repeatedly reused, triglycerides are broken apart, oxidizing free fatty acids and releasing acrolein, a toxic chemical that has carcinogenic properties. Scientists have long known that thermally abused oil contains acrolein, and studies have linked the lipid peroxides in it with a variety of health problems, including atherosclerosis and heart disease. As the oil degrades, polymer molecules also accumulate, raising nutritional and toxicological concerns, Engeseth said.
Countries in Europe and elsewhere regulate the amount of polar materials in frying oil, which are chemically altered triglycerides and fatty acids that are used as chemical markers of oils’ decomposition. Typically, these standards permit restaurants to use oil containing up to 24-27 percent polar material. By contrast, the thermally abused oil used in the current study contained about 15 percent polar material, while fresh oil contains 2-4 percent or less, Helferich said.
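The percentages above amount to a simple decision rule. The sketch below encodes them as a toy classifier; the category labels and exact cutoffs (4 percent for fresh oil, 24 percent as a typical regulatory ceiling) are assumptions taken from the ranges quoted in the article, not an official standard:

```python
def classify_oil(polar_percent: float) -> str:
    """Rough classification of frying oil by polar-material content.

    Cutoffs are illustrative only: fresh oil is quoted at 2-4% polar
    material, and European-style regulations allow up to roughly 24-27%.
    """
    if polar_percent <= 4:
        return "fresh"
    if polar_percent < 24:
        return "degraded (below typical regulatory limit)"
    return "at or above typical regulatory limit"

print(classify_oil(3))   # fresh oil
print(classify_oil(15))  # the thermally abused oil used in the study
print(classify_oil(26))  # within the 24-27% regulatory ceiling range
```

Under these assumed cutoffs, the study’s 15-percent oil would still pass a European-style inspection, which underlines the authors’ point that legal oil can already be substantially degraded.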
“Because there are no regulations in the U.S., it’s really difficult for us to evaluate what’s out there,” Engeseth said. “But the important thing is, the food that’s fried in these oils sucks up quite a bit of oil. Even though we’re not consuming the oil directly, we’re consuming oil that’s brought into the food during the frying process.”
Breast cancer survivors’ biggest fear is recurrence, and the majority of these survivors have dormant tumor cells circulating in their blood, Helferich said.
“What wakes those cells up is anybody’s guess, but I’m convinced that diet activates them and creates an environment in different tissues that’s more fertile for them to grow,” he said.
“Many cancer biologists are trying to understand what’s happening at metastatic sites to prime them for tumor growth,” Oyirifi said. “We’re trying to add to this conversation and help people understand that it might not be just some inherent biological mechanism but a lifestyle factor. If diet provides an opportunity to reduce breast cancer survivors’ risk, it offers them agency over their own health.” Additional co-authors on the study were Urszula T. Iwaniec and Russell T. Turner, both of Oregon State University; and Fureya (Yunxian) Liu, a then-graduate student at the U. of I.
The National Center for Complementary and Integrative Health, the Office of Dietary Supplements, the National Cancer Institute and the National Institute of Environmental Health Sciences funded the research.
Note: The article is edited for style and length
Japanese scientists have observed biological activity in woolly mammoth nuclei. Could the prehistoric giants be brought back to life?
The 28,000-year-old remains of a woolly mammoth, named ‘Yuka’, were found in Siberian permafrost. The study group recovered the less-damaged nucleus-like structures from the remains and visualised their dynamics in living mouse oocytes after nuclear transfer (aka cloning). In the reconstructed oocytes, the mammoth nuclei showed the spindle assembly, histone incorporation and partial nuclear formation; however, the full activation of nuclei for cleavage was not confirmed. The scientists hope their work provides a platform to evaluate the biological activities of nuclei in extinct animal species.
The woolly mammoth, an ice-age giant often found plodding through animated movies like Ice Age, actually died out just over 4,000 years ago; climate change was one of the most probable reasons for its demise, along with alleged viral infections. But could the prehistoric giants soon be brought back to life? Probably not. However, the work by the team led by Dr. Akira Iritani of the Institute of Advanced Technology, Kindai University, Wakayama, Japan, provides a platform to evaluate the biological activities of nuclei in extinct animal species.
Their results, published in the journal Scientific Reports, indicate that “a part of mammoth nuclei possesses the potential for nuclear reconstitution”, while showing possible signs of repair to damaged mammoth DNA. Despite these successes, however, the scientists did not observe the further cell division necessary to create a viable egg, “possibly due to the extensive DNA damage in the transferred nuclei”.
“We want to move our study forward to the stage of cell division,” researcher Kei Miyamoto, one of the study’s authors, told Japan’s Nikkei news outlet, while acknowledging that “we still have a long way to go”.
The samples of the woolly mammoth (named Yuka) used in the present study were found in Siberian permafrost in 2010. The animal is believed to have been about seven years old at the time of death, and it is one of the best-preserved mammoths known to science. Dr. Iritani’s team extracted tissue samples from the animal’s bone marrow and muscle. It is worth mentioning that most mammoth populations died out between 14,000 and 10,000 years ago. The last mainland population existed on the Kyttyk peninsula of Siberia until 9,650 years ago, but the species survived for another 5,000 years on Siberian islands that became cut off from the mainland by retreating ice following the last ice age. The last known population remained on Wrangel Island in the Arctic Ocean until 4,000 years ago, well beyond the dawn of human civilisation, finally becoming extinct around the time of the construction of the pyramids of Giza in Egypt. The real cause of their extinction is still an issue on which no scientific consensus has been reached; climate change leading to habitat destruction and hunting by humans are the most commonly discussed theories, and work like Dr. Iritani’s might shine some light on other aspects of their extinction.
Note: The article has been edited for style and length
If you have a sweet tooth and often crave a cola, you need to pay attention to what lies ahead. A cola or other carbonated drink is usually our resort for cooling off on a hot summer day; however, it is a well-known fact that it packs a lot of empty calories, and people on diets strictly avoid drinking cola.
The main reason cola is high in calories is a sweetening agent common in processed foods: high-fructose corn syrup (HFCS), also known as glucose-fructose, isoglucose or glucose-fructose syrup, a sweetener made from corn starch. Although used extensively in almost all processed foods, HFCS has been found to be closely associated with obesity and diabetes. The present study, led by researchers at Baylor College of Medicine and Weill Cornell Medicine and published in Science, showed that consuming a modest daily amount of high-fructose corn syrup, the equivalent of drinking about 12 ounces of a sugar-sweetened beverage daily, accelerates the growth of intestinal tumors in mouse models of the disease, independently of obesity.
The team also discovered the mechanism by which the consumption of sugary drinks can directly feed cancer growth, suggesting potential novel therapeutic strategies.

“An increasing number of observational studies have raised awareness of the association between consuming sugary drinks, obesity and the risk of colorectal cancer,” said co-corresponding author Dr. Jihye Yun, assistant professor of molecular and human genetics at Baylor. “The current thought is that sugar is harmful to our health mainly because consuming too much can lead to obesity. We know that obesity increases the risk of many types of cancer including colorectal cancer; however, we were uncertain whether a direct and causal link existed between sugar consumption and cancer. Therefore, I decided to address this important question when I was a postdoc in Dr. Lewis Cantley’s lab at Weill Cornell Medicine.”
First, Yun and her colleagues generated a mouse model of early-stage colon cancer in which the APC gene is deleted. “APC is a gatekeeper in colorectal cancer. Deleting this protein is like removing the brakes of a car. Without it, normal intestinal cells neither stop growing nor die, forming early-stage tumors called polyps. More than 90 percent of colorectal cancer patients have this type of APC mutation,” Yun said.
Using this mouse model of the disease, the team tested the effect of consuming sugar-sweetened water on tumor development. The sweetened water was 25 percent high-fructose corn syrup, which is the main sweetener of sugary drinks people consume. High-fructose corn syrup consists of glucose and fructose at a 45:55 ratio.
When the researchers provided the sugary drink in the water bottle for the APC-model mice to drink at will, the mice rapidly gained weight within a month. To prevent the mice from becoming obese and to mimic humans’ daily consumption of one can of soda, the researchers gave the mice a moderate amount of sugary water orally with a special syringe once a day. After two months, the APC-model mice receiving sugary water did not become obese, but developed tumors that were larger and of higher grade than those in model mice given regular water.
“These results suggest that when the animals have early-stage tumors in the intestines – which can occur in many young adult humans by chance and without notice – consuming even modest amounts of high-fructose corn syrup in liquid form can boost tumor growth and progression independently of obesity,” Yun said. “Further research is needed to translate these discoveries to people; however, our findings in animal models suggest that chronic consumption of sugary drinks can shorten the time it takes cancer to develop. In humans, it usually takes 20 to 30 years for colorectal cancer to grow from early-stage benign tumors to aggressive cancers.”
“This observation in animal models might explain why increased consumption of sweet drinks and other foods with high sugar content over the past 30 years correlates with an increase in colorectal cancers in 25- to 50-year-olds in the United States,” said Cantley, co-corresponding author, former mentor of Yun and professor of cancer biology in medicine and director of the Sandra and Edward Meyer Cancer Center at Weill Cornell Medicine.
The team then investigated the mechanism by which this sugar promoted tumor growth. They discovered that the APC-model mice receiving modest amounts of high-fructose corn syrup had high levels of fructose in their colons. “We observed that sugary drinks increased the levels of fructose and glucose in the colon and blood, respectively, and that tumors could efficiently take up both fructose and glucose via different routes.”
Using cutting-edge technologies to trace the fate of glucose and fructose in tumor tissues, the team showed that fructose was first chemically changed and this process then enabled it to efficiently promote the production of fatty acids, which ultimately contribute to tumor growth.
“Most previous studies used either glucose or fructose alone to study the effect of sugar in animals or cell lines. We thought that this approach did not reflect how people actually consume sugary drinks because neither drinks nor foods have only glucose or fructose. They have both glucose and fructose together in similar amounts,” Yun said. “Our findings suggest that the role of fructose in tumors is to enhance glucose’s role of directing fatty acid synthesis. The resulting abundance of fatty acids can potentially be used by cancer cells to form cellular membranes and signaling molecules, to grow or to influence inflammation.”
To determine whether fructose metabolism or increased fatty acid production was responsible for sugar-induced tumor growth, the researchers modified APC-model mice to lack genes coding for enzymes involved in either fructose metabolism or fatty acid synthesis. One group of APC-model mice lacked the enzyme KHK, which is involved in fructose metabolism, and another group lacked the enzyme FASN, which participates in fatty acid synthesis. They found that mice lacking either of these genes did not develop larger tumors, unlike APC-model mice, when fed the same modest amounts of high-fructose corn syrup.
“This study revealed the surprising result that colorectal cancers utilize high-fructose corn syrup, the major ingredient in most sugary sodas and many other processed foods, as a fuel to increase rates of tumor growth,” Cantley said. “While many studies have correlated increased rates of colorectal cancer with diet, this study shows a direct molecular mechanism for the correlation between consumption of sugar and colorectal cancer.”
“Our findings also open new possibilities for treatment,” Yun said. “Unlike glucose, fructose is not essential for the survival and growth of normal cells, which suggests that therapies targeting fructose metabolism are worth exploring. Alternatively, avoiding consuming sugary drinks as much as possible instead of relying on drugs would significantly reduce the availability of sugar in the colon.”
While further studies in humans are necessary, Yun and colleagues hope this research will help to raise public awareness about the potentially harmful consequences consuming sugary drinks has on human health and contribute to reducing the risk and mortality of colorectal cancer worldwide.
Other contributors to this work include Drs. Sukjin Yang, Yumei Wang and Justin Van Riper with Baylor, Marcus Goncalves (lead author), Changyuan Lu, Jordan Trautner, Travis Hartman, Seo-Kyoung Hwang, Charles Murphy, Roxanne Morris, Sam Taylor, Quiying Chen, Steven Gross and Kyu Rhee, all with Weill Cornell Medicine, Chantal Pauli with the University Hospital Zurich, Kaitlyn Bosch with the Icahn School of Medicine at Mount Sinai, H Carl Lekaye with Memorial Sloan Kettering Cancer Center, Jatin Roper with Duke University and Young Kim with Chonnam National University.
This study was supported by the National Institutes of Health, Stand Up 2 Cancer, the Cancer Prevention and Research Institute of Texas and the National Cancer Institute.
Note: Content may be edited for style and length.
1. Original article can be accessed here : https://www.bcm.edu/news/molecular-and-human-genetics/high-fructose-corn-syrup-intestinal-tumors
2. Journal Reference: Marcus D. Goncalves, Changyuan Lu, et al, High-fructose corn syrup enhances intestinal tumor growth in mice. Science, 2019; 363 (6433): 1345-1349 DOI: 10.1126/science.aat8515
3. Image source: For representation only, https://goo.gl/images/s4dkKy
Computer scientists at Caltech have designed DNA molecules that can carry out reprogrammable computations, for the first time creating so-called algorithmic self-assembly in which the same "hardware" can be configured to run different "software".
I remember that during my graduation days in early 2005-06 I could lay my hands on a science magazine called Junior Science Refresher; although the magazine was not top-notch, it was nevertheless the only science news available in my state at that time. In the course of reading the magazine I came across the term "DNA computing," which fascinated me as a life-science student: the idea that one day a living molecule could replace our silicon-based computers. This recent development by Caltech caught my eye, so I thought of sharing the information with my readers and friends. Here is the original news byte from Caltech.
In a paper published in Nature on March 21, a team headed by Caltech's Erik Winfree (PhD '98), professor of computer science, computation and neural systems, and bioengineering, showed how the DNA computations could execute six-bit algorithms that perform simple tasks. The system is analogous to a computer, but instead of using transistors and diodes, it uses molecules to represent a six-bit binary number (for example, 011001) as input, during computation, and as output. One such algorithm determines whether the number of 1-bits in the input is odd or even (the example above would be odd, since it has three 1-bits); another determines whether the input is a palindrome; and yet another generates random numbers.
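To give a concrete sense of what these six-bit algorithms compute, here is a minimal sketch of two of the tasks in ordinary software. The function names are mine; the DNA system of course performs these computations with self-assembling strands rather than code:

```python
def parity(bits: str) -> str:
    """Return 'odd' or 'even' according to the number of 1-bits."""
    return "odd" if bits.count("1") % 2 == 1 else "even"

def is_palindrome(bits: str) -> bool:
    """True if the bit string reads the same forwards and backwards."""
    return bits == bits[::-1]

print(parity("011001"))         # -> odd (three 1-bits, as in the example above)
print(is_palindrome("011001"))  # -> False
print(is_palindrome("011110"))  # -> True
```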
"Think of them as nano apps," says Damien Woods, professor of computer science at Maynooth University near Dublin, Ireland, and one of two lead authors of the study. "The ability to run any type of software program without having to change the hardware is what allowed computers to become so useful. We are implementing that idea in molecules, essentially embedding an algorithm within chemistry to control chemical processes."
The system works by self-assembly: small, specially designed DNA strands stick together to build a logic circuit while simultaneously executing the circuit algorithm. Starting with the original six bits that represent the input, the system adds row after row of molecules—progressively running the algorithm. Modern digital electronic computers use electricity flowing through circuits to manipulate information; here, the rows of DNA strands sticking together perform the computation. The end result is a test tube filled with billions of completed algorithms, each one resembling a knitted scarf of DNA, representing a readout of the computation. The pattern on each "scarf" gives you the solution to the algorithm that you were running. The system can be reprogrammed to run a different algorithm by simply selecting a different subset of strands from the roughly 700 that constitute the system.
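The row-after-row growth described above can be loosely mimicked in software as a one-dimensional, cellular-automaton-style process: each new bit is fixed locally by the bits above it, much as each tile attaches by matching its neighbours. This is only an illustrative analogy; the local rule and padding below are my own hypothetical choices, not the actual molecular rule set of the paper:

```python
def assemble(row, steps, rule):
    """Grow new rows one at a time; each bit is determined locally
    from the two 'parent' bits above it, analogous to a DNA tile
    attaching by matching its neighbours."""
    rows = [row]
    for _ in range(steps):
        prev = rows[-1]
        # pad with a 0 on the left so every position has two parents
        rows.append([rule(l, r) for l, r in zip([0] + prev[:-1], prev)])
    return rows

# Hypothetical local rule: XOR of the two parent bits.
rows = assemble([0, 1, 1, 0, 0, 1], 4, lambda l, r: l ^ r)
for r in rows:
    print("".join(map(str, r)))  # the stack of rows is the "scarf" readout
```

Reprogramming the real system amounts to choosing a different strand subset; in the sketch, that corresponds to passing a different `rule`.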
"We were surprised by the versatility of programs we were able to design, despite being limited to six-bit inputs," says David Doty, fellow lead author and assistant professor of computer science at the University of California, Davis. "When we began experiments, we had only designed three programs. But once we started using the system, we realized just how much potential it has. It was the same excitement we felt the first time we programmed a computer, and we became intensely curious about what else these strands could do. By the end, we had designed and run a total of 21 circuits."
The researchers were able to experimentally demonstrate six-bit molecular algorithms for a diverse set of tasks. In mathematics, their circuits tested inputs to assess if they were multiples of three, performed equality checks, and counted to 63. Other circuits drew "pictures" on the DNA "scarves," such as a zigzag, a double helix, and irregularly spaced diamonds. Probabilistic behaviors were also demonstrated, including random walks, as well as a clever algorithm (originally developed by computer pioneer John von Neumann) for obtaining a fair 50/50 random choice from a biased coin.
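The von Neumann fair-coin trick mentioned above is simple enough to sketch in a few lines: flip the biased coin twice, output a bit when the two flips differ, and discard and retry when they match. Since P(heads, tails) equals P(tails, heads) for any fixed bias, the output is 50/50. The DNA system realizes this chemically; the sketch below is just the conventional-software version:

```python
import random

def fair_bit(biased_coin):
    """von Neumann's trick: extract a fair bit from a biased coin.
    Flip twice: (1, 0) -> 1, (0, 1) -> 0; equal flips are discarded."""
    while True:
        a, b = biased_coin(), biased_coin()
        if a != b:
            return a

# Example with a coin that lands 1 about 80% of the time.
biased = lambda: 1 if random.random() < 0.8 else 0
bits = [fair_bit(biased) for _ in range(10_000)]
print(sum(bits) / len(bits))  # close to 0.5 despite the 80/20 bias
```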
Both Woods and Doty were theoretical computer scientists when beginning this research, so they had to learn a new set of "wet lab" skills that are typically more in the wheelhouse of bioengineers and biophysicists. "When engineering requires crossing disciplines, there is a significant barrier to entry," says Winfree. "Computer engineering overcame this barrier by designing machines that are reprogrammable at a high level—so today's programmers don't need to know transistor physics. Our goal in this work was to show that molecular systems similarly can be programmed at a high level, so that in the future, tomorrow's molecular programmers can unleash their creativity without having to master multiple disciplines."
"Unlike previous experiments on molecules specially designed to execute a single computation, reprogramming our system to solve these different problems was as simple as choosing different test tubes to mix together," Woods says. "We were programming at the lab bench."
Although DNA computers have the potential to perform more complex computations than the ones featured in the Nature paper, Winfree cautions that one should not expect them to start replacing the standard silicon microchip computers. That is not the point of this research. "These are rudimentary computations, but they have the power to teach us more about how simple molecular processes like self-assembly can encode information and carry out algorithms. Biology is proof that chemistry is inherently information-based and can store information that can direct algorithmic behavior at the molecular level," he says.
1. Original story source: https://www.caltech.edu/about/news/computer-scientists-create-reprogrammable-molecular-computing-system
2. Journal reference: Damien Woods, David Doty, Cameron Myhrvold, Joy Hui, Felix Zhou, Peng Yin & Erik Winfree. Diverse and robust molecular algorithms using reprogrammable DNA self-assembly. Nature, 2019 DOI: 10.1038/s41586-019-1014-9
3. Image source: Completed DNA algorithms Credit: Winfree Lab/Caltech
Study uncovers genetic switches that control process of whole-body regeneration
The ability to regenerate lost limbs or organs, just like the salamander, which can easily regrow a lost limb, is one that humans have long aspired to. Even if we exclude the casualties of war and consider only the numerous road accidents that lead to amputation or other injuries, and the medical procedures that require total or partial removal of an organ, the development of a therapeutic ability to regenerate lost limbs would be a great boon.
A team of researchers at Harvard, led by Assistant Professor of Organismic and Evolutionary Biology Mansi Srivastava and Andrew Gehrke, a postdoctoral fellow working in her lab, is shedding new light on how animals pull off the feat, along the way uncovering a number of DNA switches that appear to control genes for whole-body regeneration. The study is described in a March 15 paper in the journal Science.
Using three-banded panther worms to test the process, Srivastava and Gehrke found that a section of noncoding DNA controls the activation of a “master control gene” called early growth response, or EGR. Once active, EGR controls a number of other processes by switching other genes on or off.
“What we found is that this one master gene comes on [and activates] genes that are turning on during regeneration,” Gehrke said. “Basically, what’s going on is the noncoding regions are telling the coding regions to turn on or off, so a good way to think of it is as though they are switches.” For that process to work, Gehrke said, the DNA in the worms’ cells, which normally is tightly folded and compacted, has to change, making new areas available for activation.
“A lot of those very tightly packed portions of the genome actually physically become more open,” he said, “because there are regulatory switches in there that have to turn genes on or off. So one of the big findings in this paper is that the genome is very dynamic and really changes during regeneration as different parts are opening and closing.”

Before Gehrke and Srivastava could understand the dynamic nature of the worm’s genome, they had to assemble its sequence — no simple feat in itself. “That’s a big part of this paper,” Srivastava said. “We’re releasing the genome of this species, which is important because it’s the first from this phylum. Until now there had been no full genome sequence available.”

It’s also noteworthy, she added, because the three-banded panther worm represents a new model system for studying regeneration. “Previous work on other species helped us learn many things about regeneration,” she said. “But there are some reasons to work with these new worms.” For one thing, they’re in an important phylogenetic position. “So the way they’re related to other animals … allows us to make statements about evolution.” The other reason, she said, is, “They’re really great lab rats. I collected them in the field in Bermuda a number of years ago during my postdoc, and since we’ve brought them into the lab they’re amenable to a lot more tools than some other systems.”

While those tools can demonstrate the dynamic nature of the genome during regeneration — Gehrke was able to identify as many as 18,000 regions that change — what’s important, Srivastava said, is how much meaning he was able to derive from studying them. She said the results show that EGR acts like a power switch for regeneration: once it is turned on, other processes can take place, but without it, nothing happens. “We were able to decrease the activity of this gene and we found that if you don’t have EGR, nothing happens,” Srivastava said. “The animals just can’t regenerate. All those downstream genes won’t turn on, so the other switches don’t work, and the whole house goes dark, basically.”
While the study reveals new information about how the process works in worms, it also may help explain why it doesn’t work in humans.
“It turns out that EGR, the master gene, and the other genes that are being turned on and off downstream are present in other species, including humans,” Gehrke said.
“The reason we called this gene in the worms EGR is because when you look at its sequence, it’s similar to a gene that’s already been studied in humans and other animals,” Srivastava said. “If you have human cells in a dish and stress them, whether it’s mechanically or you put toxins on them, they’ll express EGR right away.”
The question is, Srivastava said, “If humans can turn on EGR, and not only turn it on, but do it when our cells are injured, why can’t we regenerate? The answer may be that if EGR is the power switch, we think the wiring is different. What EGR is talking to in human cells may be different than what it is talking to in the three-banded panther worm, and what Andrew has done with this study is come up with a way to get at this wiring. So we want to figure out what those connections are, and then apply that to other animals, including vertebrates that can only do more limited regeneration.”
Going forward, Srivastava and Gehrke said they hope to investigate whether the genetic switches activated during regeneration are the same as those used during development, and to continue working to better understand the dynamic nature of the genome. “Now that we know what the switches are for regeneration, we are looking at the switches involved in development, and whether they are the same,” Srivastava said. “Do you just do development over again, or is a different process involved?”
The team is also working on understanding the precise ways that EGR and other genes activate the regeneration process, both for three-banded panther worms and for other species as well.
In the end, Srivastava and Gehrke said, the study highlights the value of understanding not only the genome, but all of the genome — the noncoding as well as the coding portions.
“Only about 2 percent of the genome makes things like proteins,” Gehrke said. “We wanted to know: What is the other 98 percent of the genome doing during whole-body regeneration? People have known for some time that many DNA changes that cause disease are in noncoding regions … but it has been underappreciated for a process like whole-body regeneration.
“I think we’ve only just scratched the surface,” he continued. “We’ve looked at some of these switches, but there’s a whole other aspect of how the genome is interacting on a larger scale, not just how pieces open and close. And all of that is important for turning genes on and off, so I think there are multiple layers of this regulatory nature.”
“It’s a very natural question to look at the natural world and think, if a gecko can do this, why can’t I?” Srivastava said. “There are many species that can regenerate, and others that can’t, but it turns out if you compare genomes across all animals, most of the genes that we have are also in the three-banded panther worm so we think that some of these answers are probably not going to come from whether or not certain genes are present, but from how they are wired or networked together, and that answer can only come from the noncoding portion of the genome.”
1. Original article:
Note: Content edited for style and length.
2. Regeneration timelapse video link:
3. Image: Dr. Srivastava in her lab along with Dr. Gehrke.
Traditionally we have been taught that evolution is a continuous process that sometimes takes millions of years to manifest its effects at the macromolecular or organism level. This naturally raises a much-asked question: how rapidly do new proteins evolve in organisms? A new study led by scientists from the University of Chicago has challenged one of the classic assumptions about how new proteins evolve. Their findings are published in Nature Ecology and Evolution. For the uninitiated: proteins are the building blocks that carry out the basic functions of life. As the genes that produce them change, the proteins change as well, introducing new functionality or traits that can eventually lead to the evolution of new species.
One of the key outcomes of the research was that they were able to demonstrate that random, noncoding sections of DNA can quickly evolve to produce new proteins. These de novo, or “from scratch,” genes provide a new, unexplored way that proteins evolve and contribute to biodiversity.
“Using a big genome comparison, we show that noncoding sequences can evolve into completely novel proteins. That’s a huge discovery,” said Manyuan Long, PhD, the Edna K. Papazian Distinguished Service Professor of Ecology and Evolution at UChicago and senior author of the new study.
A third way for genes to evolve
For decades, scientists believed that there were only two ways new genes evolve: duplication and divergence, or recombination. During the normal process of replication and repair, a section of DNA gets copied, creating a duplicate version of the gene. One of these copies may then acquire mutations that change its functionality enough that it diverges and becomes a distinct new gene. With recombination, pieces of genetic material are reshuffled to create new combinations and new genes. However, these two methods account for only a relatively small number of proteins, given the total number of possible combinations of amino acids that comprise them.

Scientists have long wondered about a third mechanism, in which de novo genes could evolve from scratch. All organisms have long stretches of genetic material that do not encode proteins, sometimes up to 97 percent of the total genome. Is it possible for these noncoding sections to acquire mutations that suddenly make them functional? This has been difficult to study because it requires high-quality reference genomes from several closely related species that show both the ancestral, noncoding sequences and the subsequent new genes that evolved from them. Without this clear, visible line of evolution, there’s no way to prove a gene is truly de novo. The supposed new genes reported previously could just be “orphaned genes” that diverged or transferred from unrelated organisms at some point, after which all traces of their predecessors disappeared.
To overcome these challenges, Long’s team took advantage of 13 new genomes sequenced and annotated recently from 11 closely-related species of rice plants, including Oryza sativa, the most common food crop. He worked with groups headed by Prof. Rod Wing at the University of Arizona. Prof. Yidan Ouyang from Huazhong Agricultural University, China, also led a team that cultivated their own rice plants in Hainan, a tropical island off the southern coast of China, and harvested them for proteomics sampling.
After analyzing the genomes of these plants, they detected at least 175 de novo genes. Further mass spectrometry analysis of protein activity was conducted by another group led by Prof. Siqi Liu at BGI-Shenzhen, a genome sequencing center located in Shenzhen, Guangdong, China. They found evidence that 57 percent of these genes actually translated into new proteins, including more than 300 new peptides.
With this first, large dataset of authentic de novo genes, Long’s team detected a pattern in their evolution. It began with the early evolution of expression, followed by subsequent mutation into protein coding potentials for almost all de novo genes.
“This makes sense given the widely observed expression of intergenic regions in various organisms,” said Li Zhang, a postdoctoral researcher at UChicago and lead author of the article.
Long says that the Oryza plants are good genomes to search for de novo genes because they are relatively young—you can still see evidence of evolution in their existing genomes.
“The 11 species diverged from each other only about three to four million years ago, so they are all young species,” he said. “For that reason, when we sequence the genomes, all the sequences are highly similar. They haven't accumulated multiple generations of changes, so all the previous non-coding sections are still there.”
The path ahead
Long and his team next want to study the new proteins to further understand their function and evolution and see if there is something unique about their structure. If de novo genes open up an unexplored path for evolution, they could reveal mechanisms for creating new and improved cellular functions. For instance, the researchers detected evidence of natural selection acting to fix insertions and deletions in the genome to generate new protein sequences, and the sequence’s evolution toward improved functions. “The new proteins may make certain functions better, or help regulate the genes better,” he said. “Each step of the way, they can bring some kind of benefit to the organism until it gradually becomes fixed in the genome.”
Original article written by Matt Wood, a senior science writer at UChicago Medicine and the Biological Sciences Division. Note: Content may be edited for style and length.
1.Original article: https://www.uchicagomedicine.org/forefront/biological-sciences-articles/2019/march/genes-that-evolve-from-scratch-expand-protein-diversity
2. Li Zhang, et. al, Rapid evolution of protein diversity by de novo origination in Oryza. Nature Ecology & Evolution, 2019; DOI: 10.1038/s41559-019-0822-5
Snakes are one group of animals that always instill an element of fear in the minds of most creatures, and the fear of snakes may have evolutionary roots in our development, as they were one of the prominent threats in the wilderness during our hunter-gatherer days. However, that fear definitely does not deter herpetologists and snake lovers from seeking an encounter with a beautiful snake, as they are often motivated by the beauty and diversity of these creatures. The discovery we are talking about was made by a team of scientists and researchers led by Dr. Mark-Oliver Roedel of Berlin's Natural History Museum, in the rain forests of southeastern Guinea and northwestern Liberia: a new species of stiletto snake that can stab sideways and jump a distance equal to its own body length. Three specimens were found by the team and were later identified as a species previously unknown to science.
The snake is from a family of vipers which have teeth protruding from the sides of their mouths, allowing them to strike prey with their venomous fangs from an unusual angle and without even opening their mouths.
The group is also known as mole vipers or burrowing asps and, due to their unusual physiology, they cannot be handled as other snakes can by holding them behind the head.
While most of these burrowing snakes are not venomous enough to kill a human, some are able to inflict serious tissue necrosis, which could lead to the loss of a finger or thumb.
The species has been named Branch's stiletto snake, or Atractaspis branchi, in honour of the South African herpetologist Prof. William Branch, a world-leading expert on African reptiles, who died in October 2018.
The first specimen was collected at night from a steep bank of a small rocky riverbed in a lowland in the evergreen forest of Liberia.
The team's findings have been published in the open-access journal Zoosystematics and Evolution.
1. Find the original article here
2. Journal link: https://zse.pensoft.net
3. Image: A Stiletto Snake, Wikimedia
The beginning of a long quest
It was the year 1856 when a few limestone excavators working near Düsseldorf, Germany, unearthed bones that resembled human ones. Initial analysts inferred that they belonged to a deformed human, citing the oval-shaped skull with a low, receding forehead, distinct brow ridges, and bones that were unusually thick. Only subsequent studies revealed that the remains belonged to a previously unknown species of hominid, or early human ancestor, similar to our own species, Homo sapiens. In 1864, the specimen was dubbed Homo neanderthalensis, after the Neander Valley where the remains were found. Neanderthals rose to prominence between 200,000 and 250,000 years ago and ruled the hills and grasslands of Europe until their extinction around 30,000 years ago. The exact date of their extinction had been disputed, but in 2014 a team led by Thomas Higham of the University of Oxford used an improved radiocarbon dating technique on material from 40 archaeological sites to show that Neanderthals died out in Europe between 41,000 and 39,000 years ago, with the last group disappearing from southern Spain 28,000 years ago.
The similarity of Neanderthals to Rhodesian Man (Homo rhodesiensis) led early investigators to infer that the two shared a common ancestor. Comparison of the DNA of Neanderthals and Homo sapiens suggests that they diverged from a common ancestor between 350,000 and 400,000 years ago, which some argue might be Homo rhodesiensis, though this argument assumes that H. rhodesiensis goes back to around 600,000 years ago. However, one cannot rule out convergent evolutionary paths for the two hominids displaying features such as distinct brow ridges. Neanderthals settled in Eurasia, but did not extend beyond modern-day Israel. No Neanderthal sites have been observed on the African continent, and Homo sapiens appears to have been the only human type in the Nile River Valley, owing to the warmer climate of that period.
Are Neanderthals really extinct?
The sudden disappearance of Neanderthals from Europe coincides with the arrival of H. sapiens, and this prompted many scientists to suspect that the two events are closely linked and that humans contributed to the demise of their close cousins, either by outcompeting them for resources or through open conflict. The hypothesis that early humans violently replaced Neanderthals was first proposed by the French palaeontologist Marcellin Boule (the first person to publish an analysis of a Neanderthal) in 1912. However, a 2014 study by Thomas Higham and colleagues, based on organic samples, suggests that the two human populations shared Europe for several thousand years. Outright violent extinction therefore seems less plausible, which leads to two possible scenarios for the Neanderthal extinction.
Possible scenarios for the extinction of the Neanderthals are:
Ancient DNA to the rescue
DNA sequence analysis of fossils can reveal an entirely new world of information to us, but recovering DNA from samples that fossilized thousands of years ago is a daunting task in itself, making ancient DNA research far from routine. The samples are prone to degradation and contamination by DNA from other sources, and retrieving data from the ancient material is costly and painstaking work. At a more fundamental level, it requires determining whether the necessary samples even exist and, if so, how to get access to them.
An international group of anthropologists from the Max Planck Institute for Evolutionary Anthropology, Cold Spring Harbor Laboratory and Cornell University, using several different methods of DNA analysis, estimated that an interbreeding event happened less than 65,000 years ago, around the time that modern human populations spread across Eurasia from Africa. They reported evidence for a modern human contribution to the Neanderthal genome.
Martin Kuhlwilm, co-first author of the new paper, identified the regions of the Altai Neanderthal genome that share mutations with modern humans. The team found evidence of gene flow from the descendants of modern humans into one specific sample of Neanderthal DNA, recovered from a cave in the Altai Mountains in southern Siberia, near the Russia-Mongolia border.
Earlier studies have observed that the DNA of modern humans contains 2.5 to 4 percent Neanderthal DNA. However, studies conducted by Mendez et al. revealed that no Neanderthal Y-chromosomal DNA was observed in any human sample they tested. Reflecting on this observation, they initially considered that the Neanderthal Y chromosome genes could have drifted out of the human gene pool by chance over the millennia, or that the Neanderthal Y chromosomes include genes that are incompatible with other human genes. Mendez and his colleagues found evidence supporting the latter idea, and they think that the two groups may have been reproductively isolated, contrary to earlier thinking. Their study identified protein-coding differences between Neanderthal and modern human Y chromosomes. The changes included potentially damaging mutations in PCDH11Y, TMSB4Y, USP9Y and KDM5D; three of these are missense mutations in genes producing male-specific minor histocompatibility (H-Y) antigens. Antigens derived from KDM5D, for example, are thought to elicit a maternal immune response during gestation.
It is possible that incompatibilities at one or more of these genes played a role in the reproductive isolation of the two groups. Y-chromosomal studies have thus redrawn the timeline of divergence of the two species; previous estimates based on mitochondrial DNA had put the divergence of the human and Neanderthal lineages at between 400,000 and 800,000 years ago.
New data emerging from GWA studies could shed further light on the evolutionary history of the two hominids. In my opinion, the picture would become clearer if we looked into the pathogen-associated and immune-response genes that we may have inherited or acquired during our evolutionary journey.
This post is written specifically to be consistent with the C.B.S.E. curriculum for class XII. Nevertheless, students from other boards can benefit from it too :)
Sexual reproduction in flowering plants (angiosperms) is carried out with the help of the plant's sexual organs, i.e. the flowers.
Angiosperms: Angiosperms (Gr. angios: covered; spermae: seed) are plants whose seeds develop from ovules enclosed within the ovary of their flowers.
There is a huge diversity among flowers of the angiosperms but all flowers have these structures:
The ovary, which may contain one or multiple ovules, may be placed above other flower parts (referred to as superior); or it may be placed below the other flower parts (referred to as inferior).
Structure of Stamen, Anther, Pollen Sac/Microsporangium and Pollen Grain in Plants!
(a) The Stamen:
The stamen consists of two parts: the long, narrow, stalk-like filament and the upper, broader, knob-like bilobed anther (Fig. 2 A). The proximal end of the filament is attached to the thalamus or to the petal of the flower. The number and length of stamens vary in different species.
(b) Structure of the anther:
A typical angiosperm anther is bilobed, with each lobe having two theca (i.e., it is bithecous or dithecous). The anther is made up of two anther lobes connected by a strip of sterile tissue called the connective. The anther is a four-sided (tetragonal) structure consisting of four elongated cavities or pollen sacs (microsporangia), located at the corners, two in each lobe. The microsporangia develop further and become pollen sacs, in which pollen grains are produced.
(c) Structure of the microsporangium:
In a transverse section, a typical microsporangium appears circular in outline, consisting of two parts, microsporangial wall and sporogenous tissue.
(i) Microsporangial wall: This includes the epidermis, endothecium, middle layers and the tapetum. The outer three wall layers protect the anther and help in its dehiscence to release the pollen. The innermost wall layer is the tapetum; its cells have dense cytoplasm, become large and multinucleate, and are specialized in nourishing the developing pollen grains.
Functions of Tapetum
(ii) Sporogenous tissue: It fills the interior of the microsporangium; all its cells are similar and are called sporogenous cells. Sporogenous cells divide regularly to form the diploid microspore mother cells, each of which divides by meiosis to form microspores that mature into pollen grains.
Microsporogenesis: As the anther develops, each cell of the sporogenous tissue is capable of giving rise to a microspore tetrad; each is a potential pollen or microspore mother cell. The process of formation of microspores from a pollen mother cell (PMC) through meiosis is called microsporogenesis. The microspores, as they are formed, are arranged in a cluster of four cells, the microspore tetrad.
Types of microspore tetrads
As the anthers mature and dehydrate, the wall of the microspore mother cell degenerates and the microspores dissociate from each other and develop into pollen grains. Inside each microsporangium several thousands of microspores or pollen grains are formed that are released with the dehiscence of anther.
Pollen grains are the male reproductive propagules, or young male gametophytes, formed in the anther and meant for reaching the female reproductive organ through a pollinating agent. Pollen grains are generally spherical, measuring about 25-50 micrometres in diameter. They are covered by a two-layered wall called the sporoderm, consisting of an inner intine and an outer exine.
1. Intine: The inner wall of the pollen grain; a thin and continuous layer made up of cellulose and pectin. Some enzymatic proteins also occur in the intine.
2. Exine: The hard outer layer made up of sporopollenin, one of the most resistant organic materials known. It can withstand high temperatures and strong acids and alkalis, and no enzyme that degrades sporopollenin is so far known. The exine has prominent apertures called germ pores, where sporopollenin is absent. The exine surface may be smooth, pitted, reticulate, spiny, warty, etc.; the surface sculpturing is specific for each type of pollen grain. Because of sporopollenin, pollen grains are well preserved as fossils and are thus helpful in studying the evolutionary history of plants.
The cytoplasm of a mature pollen grain is surrounded by a plasma membrane and contains two cells, the vegetative cell and the generative cell. The vegetative cell is bigger, has an abundant food reserve and a large, irregularly shaped nucleus. The generative cell is small and floats in the cytoplasm of the vegetative cell; it is spindle-shaped, with dense cytoplasm and a nucleus. In over 60 per cent of angiosperms, pollen grains are shed at this 2-celled stage. In the remaining species, the generative cell divides mitotically to give rise to the two male gametes before the pollen grains are shed (3-celled stage).
Pollen grains of many species cause severe allergies and bronchial afflictions in some people, often leading to chronic respiratory disorders such as asthma and bronchitis. However, they are rich in nutrients and are thus often consumed as a food supplement.
The Pistil, Megasporangium (ovule) and Embryo sac
The female reproductive parts of the flower are known as carpels, collectively called the gynoecium. A gynoecium may consist of a single pistil (monocarpellary) or more than one pistil (multicarpellary). When there is more than one carpel, the pistils may be fused together (syncarpous) or free (apocarpous).
Each pistil has three parts, the stigma, style and ovary. The stigma serves as a landing platform for pollen grains. The style is the elongated slender part beneath the stigma. The basal bulged part of the pistil is the ovary. Inside the ovary is the ovarian cavity (locule). The placenta is located inside the ovarian cavity.
Structure of a Megasporangium (Ovule)
The ovule is a small structure attached to the placenta by means of a stalk called the funicle. The body of the ovule fuses with the funicle in a region called the hilum; thus, the hilum represents the junction between ovule and funicle. Each ovule has one or two protective envelopes called integuments. Integuments encircle the nucellus except at the tip, where a small opening called the micropyle is organised. Opposite the micropylar end is the chalaza, representing the basal part of the ovule. The main body of the ovule is composed of a parenchymatous mass called the nucellus. Cells of the nucellus have abundant reserves of food. Located in the nucellus is the embryo sac, or female gametophyte. An ovule generally has a single embryo sac, formed from a megaspore.
Megasporogenesis: This is the process of formation of haploid megaspores from the diploid megaspore mother cell (MMC). Usually a single MMC differentiates in the micropylar region. It is a large cell containing dense cytoplasm and a prominent nucleus. The MMC undergoes meiotic division, which results in the production of four haploid megaspores, generally arranged in the form of a linear tetrad.
Female gametophyte or Embryo sac: Only one of the megaspores is functional while the other three degenerate. The functional megaspore develops into the female gametophyte (embryo sac).
Pollination is the process of transferring pollen from the stamens to the stigmatic surface in angiosperms or the micropyle region of the ovule in gymnosperms. Depending on the source of pollen, pollination can be divided into three types.
Pollen transfer can be facilitated by abiotic agents (wind, water) or biotic agents (insects, birds, mammals). In some cases, pollen is transferred simply by gravity and the proximity of the anthers to the stigma.
Wind- and water-pollinated flowers are typically not very colourful and do not produce nectar.
3. Zoophily: A mode of pollination in which biotic agents bring about pollination in flowering plants. Zoophily has several subtypes, e.g. entomophily (by insects), malacophily (by snails), chiropterophily (by bats), ornithophily (by birds, e.g. hummingbirds), myrmecophily (by ants) and anthropophily (by humans).
Flower traits associated with different pollination agents
Advantages and Disadvantages of Cross Pollination
1. A number of plants are self-sterile, that is, the pollen grains cannot complete growth on the stigma of the same flower due to mutual inhibition or incompatibility, e.g. many crucifers and solanaceous plants. Several plants are pre-potent, that is, pollen grains of another flower germinate more readily and rapidly on the stigma than the pollen grains of the same flower, e.g. grape and apple. Such plants of economic interest give a higher yield only if their biotic pollinators, like bees, are available along with plants of different varieties or descent.
2. Cross pollination introduces genetic re-combinations and hence variations in the progeny.
3. Cross pollination increases the adaptability of the offspring towards changes in the environment.
4. It makes the organisms better fitted in the struggle for existence.
5. The plants produced through cross pollination are more resistant to diseases.
6. The seeds produced are usually larger and the offspring have characters better than the parents due to the phenomenon of hybrid vigour.
7. New and more useful varieties can be produced through cross pollination.
8. The defective characters of the race are eliminated and replaced by better characters.
9. Yield never falls below an average minimum.
1. It is highly wasteful because plants have to produce a larger number of pollen grains and other accessory structures in order to suit the various pollinating agencies.
2. A factor of chance is always involved in cross pollination.
3. It is less economical.
4. Some undesirable characters may creep into the race.
5. The very good characters of the race are likely to be spoiled.
As continued self-pollination results in inbreeding depression, flowering plants have developed many devices to discourage self-pollination and to encourage cross-pollination.
Artificial hybridisation is one of the major approaches in crop improvement programmes. Here only the desired pollen grains are used for pollination, and the stigma is protected from contamination by unwanted pollen. This is achieved by emasculation and bagging techniques. The removal of anthers from the flower bud with a pair of forceps, before the anther dehisces, is referred to as emasculation.
Ref: NCERT Biology for Class 12
* All Images copyright of respective owners
It was Charles Darwin who, in 1859, first sketched an evolutionary tree in his book The Origin of Species, and trees have remained a central metaphor in evolutionary biology ever since. Today phylogenetics (Greek: phylé, phylon = tribe, clan, race + genetikós = origin, source, birth), the study of the evolutionary history and relationships among individuals or groups of organisms, and hence evolutionary trees, have permeated evolutionary biology and, increasingly, fields outside it. Skills in reading and interpreting trees are therefore a critical component of biological education. Conversely, misconceptions and erroneous understanding of evolutionary trees can be very detrimental to one's understanding of the patterns and processes that have occurred in the history of life.
This article is intended as an aid for students and enthusiasts in reading and interpreting a phylogenetic tree; it does not intend to teach how to create one. We can discuss that in a separate article later.
So what is an Evolutionary Tree anyway?
In the simplest terms, an evolutionary tree (also known as a phylogenetic tree or cladogram) is a 2D graph or diagram depicting biological entities (sequences or species) that are connected through common descent, i.e. their evolutionary relationships. Evolutionary trees thus provide basic information about the historical pattern of ancestry, divergence and descent, by depicting a series of branches that merge at points representing common ancestors, which themselves are connected through more distant ancestors. Consider the tree shown below: you and your siblings share a common ancestor (your parents), and your parents and your aunt share theirs (your grandparents); you and your cousins therefore share ancestry, but through a more distant common ancestor.
Components of a tree
A typical phylogenetic tree as shown above consists of the following components
What's the difference between a dendrogram, a phylogenetic tree, and a cladogram?
For general purposes, not much, and many biologists often use these terms interchangeably. Most generally, though, tree diagrams are known as "dendrograms" (after the Greek for tree). A cladogram represents only a branching pattern; its branch lengths do not represent time or relative amount of character change. In contrast, trees known as phylograms or phylogenetic trees draw branch lengths proportional to some measure of divergence between species, and typically include a scale bar to indicate the degree of divergence represented by a given length of branch.
Homology Vs Similarity
Now, since closely related species share a common ancestor and often resemble each other, it might seem that the best way to uncover evolutionary relationships would be overall similarity. Surprisingly, the answer is no, and to understand why we have to look deeper into the difference between similarity and homology.
Similarity may be misleading, because when unrelated species adopt a similar way of life their body parts may take on similar functions and end up resembling one another through convergent evolution, resulting in analogous features. A classical example is the wings of birds and bats. When, however, two species share a characteristic because both inherited it from a common ancestor, it is called a homologous feature (or homology). For example, the even-toed foot of deer, camels, cattle, pigs and hippopotamuses is a homologous similarity, because all inherited the feature from their common artiodactyl ancestor.
How to read a Phylogenetic tree?
Phylogenetic trees contain a lot of information which can be both qualitative and quantitative, and decoding them is not always straightforward and requires understanding of the above basic facts. Consider the hypothetical tree of different viruses shown below:
Qualitatively, the length of the branches in the horizontal dimension indicates the amount of genetic change: the longer the branch, the greater the amount of change. Quantitative information about the amount of genetic change is given by the bar at the bottom of the figure, which acts as a scale. In this case, the line segment labelled '0.07' shows the branch length that represents a genetic change of 0.07. The units of branch length are nucleotide substitutions per site, that is, the number of changes or 'substitutions' divided by the length of the sequence. The scale may also sometimes represent the percentage change, i.e. the number of changes per 100 nucleotide sites.
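The branch-length units described above can be illustrated with a short sketch. The function below computes the uncorrected proportion of differing sites (the "p-distance") between two aligned sequences; the sequences here are invented for illustration, and real phylogenetic software would additionally apply a substitution model to correct for multiple changes at the same site.

```python
def substitutions_per_site(seq_a, seq_b):
    """Uncorrected p-distance: differing sites divided by alignment length."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    changes = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return changes / len(seq_a)

# Hypothetical aligned 100-nucleotide sequences differing at 7 sites:
virus_1 = "A" * 93 + "G" * 7
virus_2 = "A" * 100
print(substitutions_per_site(virus_1, virus_2))   # 0.07 substitutions per site
```

Seven changes over 100 sites gives 0.07 substitutions per site, i.e. 7% change, which is exactly what a branch spanning one scale-bar length would represent.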
The vertical lines joining the nodes, however, have no meaning; they are used simply to lay out the tree for better visual comprehension.
Different presentation schemes of evolutionary trees
Unless indicated otherwise, a phylogenetic tree only depicts the branching history of common ancestry. The pattern of branching (i.e., the topology) is what matters here. Branch lengths are irrelevant. Thus, the three trees shown in here all contain the same information.
This might seem confusing at first, but remember that the lines of a tree represent evolutionary lineages, and evolutionary lineages do not have any true position or shape. Therefore it doesn't matter whether branches are drawn as straight diagonal lines, kinked to make a rectangular tree, or curved to make a circular tree.
To simplify the concept further, consider the branches as flexible pipes rather than rigid rods, and the nodes as swivel joints rather than fixed welds. The basic rule is that if you can change one tree into another simply by twisting, rotating or bending branches, without having to cut and reattach them, then the two trees have the same topology and therefore depict the same evolutionary history.
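The "twist and rotate" rule can be made concrete in code. The sketch below is a minimal illustration, assuming rooted trees written as simple Newick strings without branch lengths: two trees have the same topology exactly when they contain the same clades (the same sets of leaves under each internal node), since rotating branches around a node never changes which leaves sit below it.

```python
def parse_newick(newick):
    """Parse a simple Newick string (leaf names only, no branch lengths)."""
    s = newick.rstrip(';')
    pos = 0

    def node():
        nonlocal pos
        if s[pos] == '(':
            pos += 1                      # consume '('
            children = [node()]
            while s[pos] == ',':
                pos += 1                  # consume ','
                children.append(node())
            pos += 1                      # consume ')'
            return tuple(children)
        name = ''
        while pos < len(s) and s[pos] not in '(),':
            name += s[pos]
            pos += 1
        return name

    return node()

def clades(tree, collected):
    """Return the leaf set of `tree`, adding every internal clade to `collected`."""
    if isinstance(tree, str):             # a leaf
        return frozenset([tree])
    leaves = frozenset().union(*(clades(child, collected) for child in tree))
    collected.add(leaves)
    return leaves

def same_topology(newick_a, newick_b):
    """Two rooted trees depict the same history iff they contain the same clades."""
    a, b = set(), set()
    clades(parse_newick(newick_a), a)
    clades(parse_newick(newick_b), b)
    return a == b

# Swivelling branches around nodes does not change the topology:
print(same_topology("((A,B),C);", "(C,(B,A));"))   # True
# Regrouping leaves under different nodes does:
print(same_topology("((A,B),C);", "((A,C),B);"))   # False
```

In the first comparison only the drawing order differs, so the clade sets {A,B} and {A,B,C} are identical; in the second, cutting and reattaching a branch would be needed, and the clade sets differ.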
The battle of our immune system with pathogens has been going on for millennia (longer than the battles of the Avengers and all their nemeses combined!). The bugle of battle was blown with the occurrence of the first multicellular organisms and the rise of the first parasite, which, who knows, might even have been our own mitochondria. That is speculation, of course, but the lack of a full explanation for its origin, together with its astounding similarity to prokaryotes, leaves it open to anybody's guess.
However, over these millions of years both our immune system and pathogens evolved, playing a game of hide and seek and developing several weapons in their armouries to outsmart each other. Genomic analysis of plants and animals provides evidence that a sophisticated mechanism of host defence was in existence by the time the ancestors of plants and animals diverged. This system is shared by plants and animals; the Toll pathway of NF-κB activation is an example, demonstrated conclusively in fruit flies such as Drosophila and in vertebrates such as mice and humans, and also believed to occur in plants in the form of leucine-rich repeats (LRRs). Now let us introduce the three superheroes of our immune system.
Macrophages: Granulocyte-monocyte progenitor cells in the bone marrow differentiate into pro-monocytes, which, upon entering the blood, further differentiate into mature monocytes. Monocytes circulate in the bloodstream for about 8 h, during which they migrate into the tissues and differentiate into specific tissue macrophages. During this differentiation the cell enlarges five- to tenfold; its intracellular organelles increase in both number and complexity; and it acquires increased phagocytic ability, produces higher levels of hydrolytic enzymes, and begins to secrete a variety of soluble factors. (Remember Hulk!)
Whenever I think of a macrophage it reminds me of the Hulk, and I have reasons to back up the claim. First, it is one of the biggest cells observable under the microscope, at approximately 21 μm. The Hulk likes to smash, and our macrophage likes to phagocytose its opponents: macrophages are capable of ingesting and digesting exogenous antigens, such as whole microorganisms and insoluble particles, and endogenous matter, such as injured or dead host cells, cellular debris and activated clotting factors. Moreover, years of selection pressure have made the macrophage an even more lethal enemy, as it has equipped itself with many more weapons, such as opsonization and the production of reactive oxygen intermediates (ROIs) and reactive nitrogen intermediates with potent antimicrobial properties (consider them WMDs), along with a group of antimicrobial and cytotoxic peptides commonly known as defensins. Defensin peptides have been shown to form ion-permeable channels (pores!) in bacterial cell membranes, and can kill a variety of bacteria, including Staphylococcus aureus, Streptococcus pneumoniae, Escherichia coli, Pseudomonas aeruginosa and Haemophilus influenzae. Consider the Hulk with a gun!
Dendritic Cells (DCs)
Dendritic cells are derived from hematopoietic bone marrow progenitor cells, which initially transform into immature dendritic cells. DCs acquired their name because they are covered with long membrane extensions that resemble the dendrites of nerve cells. DCs constitutively express high levels of both class II MHC molecules and members of the co-stimulatory B7 family. For this reason, they are more potent antigen-presenting cells than macrophages and B cells, both of which need to be activated before they can function as antigen-presenting cells (APCs). Dendritic cells are constantly in communication with other cells in the body. This communication can take the form of direct cell-cell contact based on the interaction of cell-surface proteins; an example is the interaction of the B7-family membrane proteins on the dendritic cell with CD28 on the lymphocyte. The cell-cell interaction can also take place at a distance via cytokines. Following microbial invasion or during inflammation, mature and immature forms of Langerhans cells and interstitial dendritic cells migrate into draining lymph nodes, where they make the critical presentation of antigen to TH cells that is required for the initiation of responses by those key cells. Sounds like Captain America, doesn't it? Flexible, resourceful, communicating crucial intel to raise the defence, planning, integrating and keeping the team going.
Natural Killer Cell (NK cell)
Natural killer (NK) cells constitute a small population of large, granular lymphocytes that display cytotoxic activity analogous to that of cytotoxic T cells. This cytotoxic activity is directed against a wide range of cells, both virus-infected and transformed. If nature could issue a licence to kill, NK cells would be the best candidates for it, because unlike cytotoxic T cells, NK cells can directly induce the death of tumour cells and virus-infected cells in the absence of specific immunization. Armed with an array of receptors that can either stimulate NK cell reactivity (activating receptors, e.g. NKG2D) or dampen it (inhibitory receptors, e.g. KIRs), NK cells are smart like Tony Stark and lethal like Iron Man! However, do not think that all this firepower is uncontrolled: it has a very smart control system, like JARVIS, that avoids auto-reactivity through an education process in which NK cells acquire self-tolerance. Unlike T cells, however, potentially autoreactive NK cells are generally not clonally deleted but are instead maintained in a state of hyporesponsiveness, or anergy. Several findings suggest that the responsiveness of mature NK cells is not fixed but may adapt to a changing environment in vivo. It has been observed that persistent stimulation without inhibition results in NK cell hyporesponsiveness, whereas persistent stimulation coupled with commensurate inhibition results in NK cell responsiveness. These results suggest that NK cell tuning might occur throughout the lifetime of the NK cell under steady-state conditions. In infected animals, however, hyporesponsive NK cells are converted to a higher state of responsiveness.
Most of us grow up listening, reading and learning that we humans can boast of being the most evolved, "higher" organisms. Before 1960, the image below would have seemed correct and sensible. However, our idea of supremacy in terms of genome size and gene number took a hit once we started examining genome sizes closely: it was soon realized that large genomes are often composed of huge chunks of repetitive DNA, with only a few percent of the genome being unique.
Now lets have a look at the past and try to figure out the origin of this concept.
Classically, biologists recognize that the living world comprises two types of organisms: prokaryotes and eukaryotes. Assuming that you already know what prokaryotes and eukaryotes are, I am not going to dive into the difference between the two.
So what is C-value?
The C-value of an organism is defined as the total amount of DNA contained within its haploid chromosome set. Prokaryotic cells typically have genomes smaller than 10 megabases (Mb), while the genome of a single-celled eukaryote is typically less than 50 Mb. For simplicity's sake, we are therefore not comparing the genomes of the two classes of organisms together.
Eukaryotes alone, however, show immense diversity in genome size, from the smallest eukaryotic genomes of less than 10 Mb to the largest of over 100,000 Mb. These observations seem to coincide, to a certain extent, with the complexity of the organism: the simplest eukaryotes, such as fungi, have the smallest genomes, while higher eukaryotes such as vertebrates and flowering plants have the largest.
It therefore seems fair to think that the complexity of an organism is related to the number of genes in its genome: higher eukaryotes need larger genomes to accommodate the extra genes. In fact, this correlation is far from precise: if it held, then the nuclear genome of the yeast S. cerevisiae, which at 12 Mb is 0.004 times the size of the human nuclear genome, would be expected to contain 0.004 × 35,000 genes, which is just 140. In fact, the S. cerevisiae genome contains about 5,800 genes! For many years this lack of correlation between the complexity of an organism and the size of its genome was looked on as a bit of a puzzle, known as the C-value paradox or C-value enigma.
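The mismatch is simple arithmetic, sketched below using the figures quoted above (note that 35,000 is the older human gene estimate used in this comparison):

```python
# If gene number scaled linearly with genome size, yeast should have very few genes.
size_ratio = 0.004            # S. cerevisiae genome (12 Mb) relative to human
human_genes = 35_000          # the (older) human gene estimate used in the text

expected_yeast_genes = round(size_ratio * human_genes)
actual_yeast_genes = 5_800    # approximate observed gene count in S. cerevisiae

print(expected_yeast_genes)   # 140
# Yeast has roughly 40-fold more genes than genome size alone would predict:
print(actual_yeast_genes / expected_yeast_genes)
```

This roughly 40-fold excess is the quantitative core of the paradox: gene density, not just gene number, varies enormously across eukaryotes.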
Questions raised by C-value paradox
The C-value paradox does not represent just one question; rather, it raises three, as suggested by T. R. Gregory (2007): (1) the generation of large-scale variation in genome size, which may occur by continuous or quantum processes; (2) the non-random distribution of genome size variation, whereby some groups vary greatly while others appear constrained; and (3) the strong positive relationship between C-value and nuclear and cell sizes, and the negative correlation with cell division rates. Any proposed solution must therefore address all three problems.
Now, we biologists are pretty good at dividing ourselves among different schools of thought (remember the RNA and DNA worlds!), and the C-value paradox was no exception. Two schools of thought emerged here too, one proposing mutation pressure theories and the other proposing optimal DNA theories.
The table below summarizes the theories proposed, along with their proposed mechanisms. Each theory can be classified according to its explanation for the accumulation or maintenance of DNA (MP, mutation pressure theory; OD, optimal DNA theory) and according to its explanation for the observed cellular correlations (CN, coincidental; CE, coevolutionary; CA, causative). Note that these theories are not necessarily mutually exclusive in all respects, since the optimal DNA theories do not specify the mechanism(s) of DNA content change and can include those presented by either of the mutation pressure theories. (Ref: Gregory, T. R. (2001), p. 69)
So what is the most plausible explanation of C-value paradox?
In the 1980s, two landmark papers, by Orgel and Crick and by Doolittle and Sapienza, established a strong case for the existence of 'selfish DNA' elements, which we know better as transposons. They proposed that selfish DNA elements such as transposons essentially act as molecular parasites, replicating and increasing their numbers at the (usually slight) expense of the host genome; that is, these elements function for themselves while providing little or no selective advantage to the host. Computational genomic studies have shown that transposable elements invade in waves over evolutionary time, sweeping into a genome in large numbers, then dying and decaying away, leaving 'junk DNA' in their trail. About 45% of the human genome is detectably derived from such transposable elements. We can therefore say that the C-value paradox is mostly (though not entirely) explained by differing loads of transposable-element leftovers: the larger the genome, the longer the trail left by transposons.
So, if the C-value paradox is explained and rested for good, why dig it up again?
Recent publications discussing the outcome of the ENCODE (Encyclopedia Of DNA Elements) project suggest that 80% of the human genome is reproducibly transcribed, bound to proteins, or has its chromatin specifically modified. Moreover, much of what was previously considered junk DNA has been found to be biochemically active, challenging the 'junk DNA' theory.
In the light of the ENCODE data, scientists need to come up with an alternative hypothesis capable of explaining the C-value paradox, the mutational load, and how a large fraction of eukaryotic genomes came to be composed of neutrally drifting transposon-derived sequences.
The Toll family of receptors
Toll-like receptors (TLRs) are type I transmembrane proteins of the interleukin-1 receptor (IL-1R) family that possess an N-terminal leucine-rich repeat (LRR) domain for ligand binding, a single transmembrane domain, and a C-terminal intracellular signaling domain. The TLR C terminus is homologous to the intracellular domain of the IL-1R and is thus referred to as the Toll/IL-1 receptor (TIR) domain. TLRs are expressed at the cell membrane and in subcellular compartments such as the endosomes, and are found in many cell types, including nonhematopoietic epithelial and endothelial cells, although most cell types express only a select subset of these receptors. Hematopoietically derived sentinel cells, such as macrophages, neutrophils and dendritic cells (DCs), express most of the TLRs, with some variation among subsets, e.g. between conventional DCs and plasmacytoid DCs. Thus far, 13 mammalian TLRs have been identified, 10 in humans and 13 in mice (Beutler 2004). TLRs 1-9 are conserved between humans and mice, whereas TLR10 is present only in humans and TLR11 is functional only in mice. Although much is known about the ligands and signaling pathways of TLRs 1-9 and 11, the biological roles of TLRs 10, 12 and 13 remain unclear, as their expression patterns, ligands and modes of signaling have yet to be defined.
TLRs mediate the initial responses of innate immunity and are required for the development of the adaptive immune response. They enable innate immune recognition of prototypic endogenous and exogenous ligands, and they orchestrate innate and adaptive immune responses to infection, inflammation, and tissue injury. Given their significance in the immune response, it is not surprising that genetic variation in TLRs can affect their function and, by extension, the response of the organism to environmental stimuli. The genetics of TLRs provides important insights into gene-environment interactions in health and disease, and it may enable scientists to assess patients' susceptibility to diseases or predict their response to treatments.
Evolutionary genetics of TLRs
In the evolutionary genetics of infectious diseases, the aim is to identify, in the genomes of present-day healthy human populations, the footprints of natural selection exerted by past infections. Given the tremendous selective pressure that pathogens have exerted in the past, and continue to exert, it is hardly surprising that some of the strongest evidence for selection of various types and intensities in the human genome has been obtained for genes involved in immunity or host defense. Immunity-related functions appear to be a privileged target of natural selection both in the human species as a whole (with respect to other primates) and in human populations from diverse geographic regions.
Phylogenetic studies have indicated an ancient origin for TLR genes, some 700 million years ago, suggesting that TLR-mediated immune responses originated in the common ancestor of bilaterian animals. However, several recent, independent lines of evidence (genomic, phylogenetic, and functional) suggest that the similarities between TLR-mediated innate immunity in insects and vertebrates may instead have resulted from convergent evolution, the phenomenon in which organisms that are not closely related independently evolve similar traits as a result of adapting to similar environments or ecological niches. Another study showed that vertebrate TLRs can be divided into six major families, with all the TLRs within a given family recognizing the same general or specific class of microbial compound. The patterns of interspecies divergence and levels of polymorphism in various primates, including humans, have recently been investigated. Signatures of accelerated evolution (species-wide positive selection) were found across primate species for most TLRs, with the strongest evidence obtained for TLR1 and TLR4, which have been independently targeted by positive selection. Within each primate species, however, the patterns of nucleotide variation were generally constrained.
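The logic behind such tests of positive selection can be made concrete with a small sketch. Scans for accelerated evolution typically compare nonsynonymous (amino-acid-changing) substitutions against synonymous (silent) ones between aligned coding sequences; an excess of nonsynonymous change suggests positive selection, while a deficit suggests the kind of constraint seen within each primate species. The code below is a toy simplification of that idea, not the maximum-likelihood or Nei-Gojobori methods used in the actual studies, and the sequences in the demo are invented for illustration:

```python
# Toy illustration of the nonsynonymous-vs-synonymous comparison that
# underlies dN/dS-style tests of positive selection.
# NOTE: a real analysis also normalizes by the number of possible
# nonsynonymous and synonymous sites; this sketch only counts differences.

BASES = "TCAG"
# Standard genetic code laid out in the classic TCAG x TCAG x TCAG order.
AMINO = ("FFLLSSSSYY**CC*W"  # first base T
         "LLLLPPPPHHQQRRRR"  # first base C
         "IIIMTTTTNNKKSSRR"  # first base A
         "VVVVAAAADDEEGGGG") # first base G
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def classify_differences(seq1, seq2):
    """Count (nonsynonymous, synonymous) codon differences between two
    aligned, in-frame coding sequences of equal length."""
    assert len(seq1) == len(seq2) and len(seq1) % 3 == 0
    nonsyn = syn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            syn += 1    # silent change: same amino acid
        else:
            nonsyn += 1  # amino-acid-changing substitution
    return nonsyn, syn

# AAA->AGA changes Lys->Arg (nonsynonymous); GAA->GAG keeps Glu (synonymous).
print(classify_differences("AAAGAA", "AGAGAG"))  # -> (1, 1)
```

Under purifying selection (constraint), nonsynonymous differences are rare relative to synonymous ones; under positive selection the opposite pattern emerges, which is what the primate TLR1 and TLR4 comparisons detected.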
TLRs and Diseases
TLRs In Pulmonary Diseases
Current data suggest that TLR signaling can modify both allergic asthma and chronic obstructive pulmonary disease (COPD). Activation of TLRs can be either beneficial or detrimental depending on many host factors, as well as the dose, duration, and intensity of exposure to TLR ligands. Multiple epidemiologic studies have associated childhood exposure to TLR ligands with protection against developing allergic asthma later in life (the "hygiene hypothesis"); for example, individuals living on farms have a reduced risk of developing hay fever or asthma. The most extensively studied TLR is TLR4. A study of asthma specifically associated with LPS in house dust showed that people with the TLR4 polymorphism Asp299Gly had a decreased risk of bronchoreactivity. These observations are consistent with the hypothesis that LPS can exacerbate existing airway inflammation and that individuals with the Asp299Gly polymorphism have diminished pulmonary responses to LPS.
TLRs In Cardiovascular Disease
Atherosclerosis is an inflammatory process, and innate immunity has been shown to participate in the development and rupture of atherosclerotic plaques. TLR4 polymorphisms that render the receptor less responsive to its ligands would therefore be expected to hinder the development and progression of atherosclerosis. Indeed, the Asp299Gly polymorphism has been associated with decreased atherosclerosis, a decreased risk of acute coronary events, and an improved response to statin treatment. The exact mechanism of this beneficial effect is unknown; however, TLRs are expressed on several cell types that participate in the atherosclerotic plaque, such as macrophages, dendritic cells, endothelia, smooth muscle cells, and lymphocytes. As the involvement of innate immunity in atherosclerosis becomes better understood, more genetic factors are likely to be discovered that influence both susceptibility to cardiovascular disease and the response to treatment.
TLRs In Cancer
In cancer, inflammation acts as a double-edged sword. On one hand, chronic inflammation is associated with carcinogenesis, and cancer is a complication of chronic inflammatory conditions such as Crohn's disease, chronic cystitis, and hepatitis; TLR activation leads to activation of NF-κB, which is associated with carcinogenesis and chemoresistance. On the other hand, the immune system is necessary for the elimination of malignant cells, and immunosuppressed patients are at risk of developing cancer. Immunotherapy (i.e., the use of the patient's own immune system to combat cancer, with the aid of vaccination or adjuvants) has been gaining attention as a potential treatment option in cancer. Indeed, we are only now beginning to understand how cancer cells evade elimination by modifying the innate and adaptive immune responses of the host. Activation of the immune system may therefore be a viable approach to cancer therapy.
1. Casanova, Jean-Laurent, Laurent Abel, and Lluis Quintana-Murci. "Human TLRs and IL-1Rs in host defense: natural insights from evolutionary, epidemiological, and clinical genetics." Annual Review of Immunology 29 (2011): 447-491.
2. Pancer, Zeev, and Max D. Cooper. "The evolution of adaptive immunity." Annual Review of Immunology 24 (2006): 497-518.
3. Garantziotis, Stavros, et al. "The effect of Toll-like receptors and Toll-like receptor genetics in human disease." Annual Review of Medicine 59 (2008): 343-359.
4. Roach, Jared C., et al. "The evolution of vertebrate Toll-like receptors." Proceedings of the National Academy of Sciences of the United States of America 102.27 (2005): 9577-9582.
5. Sawian, C. E., et al. "Polymorphisms and expression of TLR4 and 9 in malaria in two ethnic groups of Assam, northeast India." Innate Immunity 19.2 (2013): 174-183.
Hello! My name is Arunabha Banerjee, and I am the mind behind Biologiks. Learning new things and teaching biology are my hobbies and passion. It is a continuous journey, and I welcome you all to join me.