Precision medicine is a growing field in which genetic factors, environment, metabolism and even lifestyle are taken into account when deciding whether a patient should receive a treatment. When it comes to bronchopulmonary dysplasia (BPD), I believe anyone who works in neonatal care can attest that it is a mystery why some infants go on to develop BPD while others don’t. We do know that certain treatment strategies may increase risk, such as ventilating with excessive volumes or pressures, and over the last 25 years the notion has emerged that the level of cortisol in the blood may make a difference as well. I have written about prophylactic hydrocortisone use before in Hydrocortisone after birth may benefit the smallest preemies the most! Looking at the literature thus far, and taking into account the results of the individual patient data meta-analysis, the following table can be generated summarizing the benefits.
The question thus becomes: if there is benefit for some infants under 26 weeks, and for some at 26 and 27 weeks, but there is also risk of harm, is there a way to select out those who are most likely to benefit with the least risk of harm?
A baby’s initial cortisol level may be the answer
The PREMILOC study was a double-blind, multicentre trial in which 523 infants were randomly assigned to either prophylactic hydrocortisone started in the first 24 hours of life or placebo. All infants were under 28 weeks at birth and received hydrocortisone 1 mg/kg/day for 7 days followed by 0.5 mg/kg/day for 3 days. In a pre-planned secondary analysis of PREMILOC, researchers looked at the role of baseline cortisol in predicting response to treatment or risk of adverse outcomes.
Examining baseline levels in both the treatment and placebo groups, they found that a relationship exists between baseline cortisol and these outcomes.
From Table 4, they found a relationship between survival without BPD and a higher initial cortisol level in the placebo arm, but no such relationship in the treatment arm. The threshold for what was considered high was 880 nmol/L, although mean cortisol was in the 400-500 nmol/L range. In other words, if having adequate physiologic levels of cortisol is the goal and a baby already has that, giving more hydrocortisone at these low, non-anti-inflammatory doses doesn’t yield benefit.
Similarly, when looking at side effects, a positive correlation was found between higher baseline cortisol levels and the risk of grade III/IV IVH and spontaneous intestinal perforation. It would seem, therefore, that when a baby with a low baseline is given hydrocortisone, you simply bring them to where they need to be, leaving them no different from a placebo-arm patient with adequate physiologic levels. When a baby who already has adequate cortisol is given hydrocortisone, however, you effectively double what they should have, and side effects begin to rear their ugly head.
How can you use this information?
From personal conversations I know that many centres are struggling with what to do about giving hydrocortisone. On the one hand, there isn’t much benefit (if any) for BPD in the 24- and 25-week infants, but they do better from a neurodevelopmental standpoint. On the other hand, there is a benefit in the 26- and 27-week infants, but you may predispose them to side effects as well.
This is where precision medicine comes in. One option for centres unsure of whom to provide this to (if at all) could be to use a threshold of 880 nmol/L: if the initial level is above this you would not treat, but if below you would offer treatment. While the study found this level to be predictive of side effects when exceeded, it does seem very high to me. I would think most babies would qualify, which is not necessarily a bad thing, but in our centre we have typically used levels above 400 or 500 nmol/L as evidence of an adequate stress response. Regardless of the level picked, one would be using physiologic data to determine who receives hydrocortisone, as a way to maximize benefit and minimize harm for the individual patient.
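For illustration, the threshold logic above can be sketched in a few lines of code. This is purely hypothetical and not a validated clinical tool; the function name and structure are mine, with the 880 nmol/L cut-off taken from the PREMILOC secondary analysis and the 400-500 nmol/L range reflecting our own centre's usual definition of an adequate stress response.

```python
# Hypothetical sketch of a cortisol-based treatment rule (illustration only).
CORTISOL_THRESHOLD_NMOL_L = 880  # could be lowered to 400-500 per local practice


def suggest_hydrocortisone(baseline_cortisol_nmol_l: float,
                           threshold: float = CORTISOL_THRESHOLD_NMOL_L) -> bool:
    """Return True if prophylactic hydrocortisone would be offered.

    Babies already above the threshold have adequate endogenous cortisol,
    so supplementation offers no expected benefit and may add risk.
    """
    return baseline_cortisol_nmol_l < threshold


# A baby with a baseline of 350 nmol/L would be offered treatment,
# while one at 950 nmol/L would not.
print(suggest_hydrocortisone(350))  # True
print(suggest_hydrocortisone(950))  # False
```

The same function with a lower threshold (say 500 nmol/L) would capture a more conservative local practice; the point is simply that a single baseline measurement drives the decision.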
Make no mistake: regardless of whether you decide to try this for your patients, I don’t believe this is a magic bullet. The best chances for our patients come from having bundles of evidence-based practices and applying them to the patient population if we hope to reduce BPD and minimize risk from any side effects of our treatments. The question is whether prophylactic hydrocortisone should be part of this bundle.
Knowing when to extubate an ELBW infant is never an easy task. Much has been written about extubation checklists, including such measures as mean airway pressure minimums and oxygen thresholds, as well as trials of pressure support at low rates. The fact remains that no matter how hard we try, there are those who fail even when all conditions for success seem to be met. The main culprit has been thought to be weakening of the diaphragm as the infant stays on the ventilator for longer periods of time. Specifically, myofibrillar contractile dysfunction and myofilament protein loss occur, leading to a weakened diaphragm that may be incapable of supporting the infant when extubated, even to CPAP. More recently in Neonatology, point of care ultrasound (POCUS) has gained in popularity, and lung ultrasound in particular has helped to better classify various disease conditions, not only determining which disease is active but also following its course. Using POCUS to measure thickness and excursion of the diaphragm has been employed in the adult world, so using it in neonates to determine extubation readiness seems like a logical next step.
An Observational Cohort Study
Bahgat E et al published Sonographic evaluation of diaphragmatic thickness and excursion as a predictor for successful extubation in mechanically ventilated preterm infants in the European Journal of Pediatrics. This small study looked at preterm infants born under 32 weeks and assessed a number of measurements of their diaphragms bilaterally, including thickness during the respiratory cycle and excursion (measured as the most caudad and cephalad positions during respiration). All patients underwent a similar process prior to extubation using pressure support ventilation (PSV) with a support of +4 above PEEP, with measurements taken 1 hour prior to planned extubation. All infants met unit criteria for a trial of extubation based on blood gases, FiO2 and a mean airway pressure (MAP) under 8 cm H2O. All infants received a PSV trial for 2 hours before being extubated to CPAP +5. The sonographic assessment technique is laid out in the paper, and the study end point was no reintubation in the 72 hours after extubation. The decision to reintubate was standardized as follows: more than six episodes of apnea requiring stimulation within 6 hours, more than one significant episode of apnea requiring bag and mask ventilation, respiratory acidosis (PaCO2 > 65 mmHg and pH < 7.25), or FiO2 > 60% to maintain saturation in the target range (90-95%).
Differences between the groups at baseline included a median day of extubation 3 days later, a longer total duration of mechanical ventilation, and higher mean airway pressure and FiO2, all in the failure group.
Results of the study find a key difference in measurements
Looking at Table 2 below, the main finding of the study was that the biggest difference between the infants who succeeded and those who failed was the excursion of the diaphragm rather than its thickness. The greater the excursion, the better the chance of successful extubation. In experienced hands, the measurement does not take long to perform either.
As the authors point out in the paper:
“A right hemidiaphragmatic excursion of 2.75 mm was associated with 94% sensitivity and 89% specificity in predicting successful extubation. A left hemidiaphragmatic excursion of 2.45 mm was associated with 94% sensitivity and 89% specificity in predicting successful extubation”
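As a toy illustration of how such cut-offs might be applied at the bedside, here is a short sketch. The cut-off values are taken from the quote above, but the function and variable names are mine, and combining both sides with a simple AND is my own assumption, not the paper's published rule.

```python
# Hypothetical bedside check using the reported excursion cut-offs
# (illustration only; not the authors' actual decision rule).
RIGHT_EXCURSION_CUTOFF_MM = 2.75  # 94% sensitivity, 89% specificity reported
LEFT_EXCURSION_CUTOFF_MM = 2.45   # 94% sensitivity, 89% specificity reported


def predicts_successful_extubation(right_excursion_mm: float,
                                   left_excursion_mm: float) -> bool:
    """Both hemidiaphragms at or above their cut-off predicts success.

    Requiring both sides (AND) is an assumption; the paper reports each
    side's cut-off separately.
    """
    return (right_excursion_mm >= RIGHT_EXCURSION_CUTOFF_MM
            and left_excursion_mm >= LEFT_EXCURSION_CUTOFF_MM)


print(predicts_successful_extubation(3.0, 2.5))  # True
print(predicts_successful_extubation(2.0, 2.5))  # False
```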
Is this the holy grail?
There is no question that this technique adds another piece to the puzzle in helping us determine when it is safe to extubate. If I can pick one fault with the study, it is the use of a pressure of +5 to support the extubated infants. If you look at the mean MAP the infants were on prior to extubation, it was 6.3 cm H2O in the successful group and 6.6 in those who failed. By extubating a group that was already on a mean FiO2 of about 24% to an even lower pressure, I can’t help but wonder what the results would have looked like if extubation had occurred at a non-invasive pressure above the level they were on while intubated. Our unit would typically extubate such infants to a level of +7 to avoid pulmonary volume loss, so what would the results show if higher pressures were used? (Someone feel free to take this on.)
One thing borne out of all this, however, is that diaphragmatic weakening with prolonged ventilation appears to happen in the neonate as well, supported by the longer duration of ventilation in the failure group, which also had less diaphragmatic thickness and excursion. What this study really says, in my mind, is that extubation should occur as early as possible. Every time you hear someone say “why don’t we wait one more day”, you can now imagine that diaphragm getting just a little weaker.
As I said in a tweet recently, “No one should brag about having a 100% extubation success rate”. If that is your number, you are waiting too long to extubate. Based on the information here, the plan for extubation needs to start as soon as the tube is inserted in the first place.
This could turn into a book one day I suppose, but I have become interested in challenging some of my long-held beliefs these days. Recently I had the honour of presenting a webinar on “Dogmas of Neonatology” for the Indian Academy of Pediatrics, which examined a few practices that I have called into question (which you can watch in the link). Today I turn my attention to a practice that I have been following for at least twenty years, and I have to admit it is something I have never really questioned until now! In our institution, and I suspect many others, infants born under 1250 g have been fed every two hours while those above are fed every three. The rationale has been that a two-hour volume is smaller and causes less gastric distention, which in theory would benefit these small infants by not compromising ventilation or provoking reflux. Avoiding large, distending boluses would also in theory lead to less necrotizing enterocolitis. All of this of course has been theoretical, and I can thank those who preceded me in Neonatology for coming up with these rules!
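The arithmetic behind the "smaller bolus" rationale is easy to sketch. Assuming a full-feed target of 150 mL/kg/day, a quick calculation (the function name is mine, purely for illustration) shows how bolus size differs between the two schedules:

```python
# How big is each bolus on a q2h vs q3h schedule at full feeds?
FULL_FEEDS_ML_PER_KG_PER_DAY = 150


def bolus_volume_ml(weight_kg: float, interval_hours: int) -> float:
    """Volume of a single feed for a given weight and feeding interval."""
    feeds_per_day = 24 / interval_hours
    return FULL_FEEDS_ML_PER_KG_PER_DAY * weight_kg / feeds_per_day


# For a 1.0 kg infant:
print(bolus_volume_ml(1.0, 2))  # 12.5 mL every 2 hours (12 feeds/day)
print(bolus_volume_ml(1.0, 3))  # 18.75 mL every 3 hours (8 feeds/day)
```

So the q3h bolus is 50% larger than the q2h one at the same daily volume, which is the entire basis of the traditional concern.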
Study Challenges This Old Belief
Yadav A et al published Two-hourly versus Three-hourly Feeding in Very Low Birthweight Neonates: A Randomized Controlled Trial out of India (well timed given my recent talk!). The authors randomized 175 babies born between 1000-1500 g to be fed either q2h or q3h once they began protocol feeding. The primary outcome was time to full feedings. Curiously, the paper indicates a preplanned subgroup analysis of the 1000-1250 g and 1251-1500 g groups, but the discussion suggests this will be reported as a separate paper, so we don’t have that data here.
The study controlled conditions for determining feeding intolerance fairly well. As per the authors:
“Full enteral feed was defined as 150 mL/Kg/day of enteral feeds, hypoglycaemia was defined as blood glucose concentration <45mg/dL [15]. Feed intolerance was defined as abdominal distension (abdominal girth ≥2 cm), with blood or bile stained aspirates or vomiting or pre-feed gastric residual volume more than 50% of feed volume; the latter checked only once feeds reached 5 mL/kg volume [16]. NEC was defined as per the modified Bells staging.”
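Translated into code, my reading of that feed-intolerance definition looks something like the following sketch. The parameter names and the grouping of the criteria are my own interpretation of the quoted text, not the authors' published algorithm.

```python
# Hypothetical translation of the study's feed-intolerance definition
# (illustration only; grouping of criteria is my interpretation).
def is_feed_intolerant(girth_increase_cm: float,
                       bloody_or_bilious_aspirate: bool,
                       vomiting: bool,
                       residual_ml: float,
                       feed_volume_ml: float) -> bool:
    """Feed intolerance per the quoted definition: abdominal distension
    (girth increase >= 2 cm) together with blood/bile-stained aspirates or
    vomiting, OR a pre-feed gastric residual exceeding 50% of feed volume
    (the latter checked only once feeds reach 5 mL/kg).
    """
    distension_with_signs = (girth_increase_cm >= 2
                             and (bloody_or_bilious_aspirate or vomiting))
    large_residual = residual_ml > 0.5 * feed_volume_ml
    return distension_with_signs or large_residual


print(is_feed_intolerant(2.5, True, False, 0, 10))   # True (distension + aspirate)
print(is_feed_intolerant(0, False, False, 6, 10))    # True (residual > 50%)
print(is_feed_intolerant(1.0, True, False, 2, 10))   # False
```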
We no longer use gastric residuals in our unit to guide cessation of feedings, so that practice differs from ours, but since both groups had residuals treated the same way, it is not something I think would invalidate the study. The patients had the baseline characteristics shown below and were comparable.
Results
It will be little surprise to you that the results indicate no difference in time to full feedings as shown in Figure 2 from the paper.
The curves for feeding advancement are essentially superimposed: feeding every two vs three hours made no difference whatsoever. Looking at secondary outcomes, there were also no differences in rates of NEC or hypoglycemia. Importantly, when examining rates of feeding intolerance, 7.4% of babies in the 2-hour group and 6.9% in the 3-hour group had this issue, with no difference in risk observed.
Taking the results as they are, there doesn’t seem to be much basis for drawing the line at 1250 g, although it would still be nice to see the preplanned subgroup analysis to check for any concerns in the 1000-1250 g group.
Supporting this study is a large systematic review by Dr. A. Razak (with whom I have collaborated before): Two-hourly versus three-hourly feeding in very low-birth-weight infants: A systematic review and meta-analysis. He concluded there was no difference in time to full feeds but did note a benefit of q3h feeding among the 962 pooled infants, with infants fed 3-hourly regaining birth weight earlier than infants fed 2-hourly (3 RCTs; 350 participants; mean difference [95% confidence interval] -1.12 [-2.16 to -0.08]; I2 = 0%; p = 0.04). This new study is a large one and will certainly strengthen the evidence from these smaller pooled studies.
Final Thoughts
The practice of switching to q2h feedings under 1250 g is certainly being challenged. The question will be whether the mental barriers to changing this practice can be broken. Many people will read this and think “if it’s not broken, don’t fix it”, or will resist change for its own sake. The evidence out there, though, I would submit should cause us all to think about this aspect of our practice. I will!
Regardless of where you live, you may have noticed that this year’s flu and RSV season has not been as bad as in previous years. I recall hearing early in the pandemic that Australia had virtually no flu season at all. The question, therefore, is where did these viruses go? Many people have attributed the drop in other viral illnesses to the fact that we are wearing masks and washing our hands a lot more. This no doubt has something to do with it, but the reality is that many have chosen not to wear masks, and we know that others have failed to use PPE appropriately. Let’s face it: COVID has spread far and wide, and yet the other viruses have not. So where did they go?
Viral Interference
This is a subject I had never heard of or really considered as an entity until I began looking at COVID-19 as part of a talk I gave this year. The concept has been known since the 1930s or even earlier. The first reference I found was by Findlay GM, An interference phenomenon in relation to yellow fever and other viruses, from 1937 (sorry, no link to the paper), and then Lennette EH, Koprowski H. Interference between viruses in tissue culture. J Exp Med. 1946 Feb 28;83(3):195-219. In the latter study, inoculation of yellow fever virus and West Nile virus produced either partial or complete suppression of growth of the Venezuelan equine encephalomyelitis virus. In other words, once one virus takes hold of a cell, it becomes quite difficult (but not impossible) for another virus to grow at the same time. Looking at how SARS-CoV-2 and RSV might interact is interesting, as in tissue culture RSV is the slower-replicating virus: SARS-CoV-2 seems to grow in cell culture on the order of 3-5 days, while RSV tends to take longer, at 5-14 days. In vivo, of course, things can be different, as the host can influence rapidity of growth, but if SARS-CoV-2 is spreading rapidly through the community, it could be that by infecting the host first and being the faster-growing virus, it effectively and strangely protects the host against other viruses such as RSV. If there is anything about this pandemic we can take some comfort in, it is maybe that!
What about disease severity when two viruses take hold?
You would think that if you were unlucky enough to get RSV and COVID-19 at the same time your symptoms would be worse than with RSV alone, but the evidence suggests otherwise. Having two viruses competing for the same host may lessen the severity of disease. This was demonstrated by Kim Brand H et al in Infection with multiple viruses is not associated with increased disease severity in children with bronchiolitis, Pediatr Pulmonol 2012 Apr;47(4):393-400. In this study the authors examined 142 nasal aspirates from children with bronchiolitis and grouped them into categories of mild, moderate and severe disease. What they found suggests that two or more viruses infecting the same host may reduce the severity of the illness usually ascribed to a virus compared with when it infects the host alone. In the case of severe bronchiolitis in Table 2 above, RSV was found 76% of the time. Bronchiolitis may be caused by other viruses, of course, and towards the bottom of the table, when severe disease was present a single virus was found 70% of the time. However, as the number of viruses in the host increased, the likelihood of severe disease dropped while mild disease increased.
What to expect then?
I am just a Neonatologist, but based on the above research I am expecting a couple of things this winter season. I predict we will continue to see lower rates of RSV infection, and perhaps influenza, as the domination of SARS-CoV-2 continues. The other thing that will be interesting to look at retrospectively is the distribution of RSV disease severity this season; if the above is correct, we should see less severe disease when looking at emergency visits and hospitalizations. It will be an interesting story for 2021, and I suspect much will be written about the impacts of COVID-19 on many fronts. I look forward to no longer talking about that virus at some point later in 2021, when we all start saying “remember when…”
Any regular reader of this blog will know that human milk, and the benefits derived from its consumption, is a frequent topic covered here. As the evidence continues to mount, it is becoming fairly clear that the greater the consumption of mother’s own milk, the better the outcomes appear to be with respect to risks of late-onset sepsis or BPD, as examples. Moving to an exclusive human milk diet has been advocated by some as the next step in improving outcomes further. While evidence continues to accumulate suggesting that replacing bovine-based fortification with a human-based fortifier may improve outcomes, the largest studies have been retrospective in nature and therefore prone to the usual biases such papers carry.
What is evident as the science pursues this topic further, though, is that the risk of necrotizing enterocolitis (NEC) is not zero even with a human milk diet. Why is that? It might be that some risk factors for NEC, such as intestinal ischemia or extreme prematurity, are simply too much for the protective effect of breastmilk to overcome. Perhaps, though, it could be related to something intrinsic in the breastmilk that differs from one mother to another, with some producing more protective milk than others.
Secretors vs Non-secretors
When it comes to the constituents of breastmilk, human milk oligosaccharides (HMOs) are known to be secreted into breastmilk differently depending on whether a mother carries a secretor gene. This has been demonstrated recently with respect to HMOs affecting the infant microbiome in Association of Maternal Secretor Status and Human Milk Oligosaccharides With Milk Microbiota: An Observational Pilot Study. HMOs are capable of a few things, such as stimulating growth of beneficial microbes and acting as “receptor decoys” for pathogenic bacteria. Rat models have also demonstrated their potential to reduce NEC. Essentially, mothers who have the secretor gene produce more diverse types of HMOs than mothers who are secretor negative.
The Type of HMO May Be the Key to Reducing NEC
Wejryd E et al in 2018 published Low Diversity of Human Milk Oligosaccharides is Associated with Necrotising Enterocolitis in Extremely Low Birth Weight Infants. This paper was an offshoot of the PROPEL study on the use of prophylactic probiotics to reduce severe morbidities. Babies were all born between 23+0 and 27+6 weeks and all received exclusive breastmilk, with all fortification using a bovine product. Breastmilk samples were obtained from 91 mothers of 106 infants at 2 weeks and 28 days after birth, and finally at 36 weeks PMA, and the HMO content was analyzed.
A couple of very interesting findings came out of the study. The first is that when analyzing the HMOs present in breastmilk at 2 weeks and comparing mothers of infants who developed NEC to those who did not, there was one significant difference: Lacto-N-difucohexaose I (LNDH I) had a median level of 0 (IQR 0-213) in the milk of mothers whose infants were affected by NEC. There were no differences observed for any other HMOs.
Also of interest was the greater diversity of HMOs present in the breastmilk samples of mothers whose infants did not develop NEC. This was present at all time points.
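As an aside, "diversity" of an HMO profile can be quantified in several ways; one common choice in microbiome and milk-composition work is the Shannon index. Whether the Wejryd study used this exact metric is an assumption on my part, but the sketch below illustrates the general idea:

```python
import math


def shannon_diversity(concentrations: list[float]) -> float:
    """Shannon diversity index of a set of HMO concentrations.

    Higher values mean more distinct HMOs present at more even levels;
    a profile dominated by a single HMO scores low.
    """
    total = sum(concentrations)
    proportions = [c / total for c in concentrations if c > 0]
    return -sum(p * math.log(p) for p in proportions)


# Four HMOs at equal levels: maximal diversity for four species (ln 4 ≈ 1.386).
print(shannon_diversity([1, 1, 1, 1]))
# One dominant HMO with traces of three others: much lower diversity.
print(shannon_diversity([10, 0.1, 0.1, 0.1]))
```

The study's finding, in these terms, would be that milk from mothers of NEC-affected infants scored lower on whatever diversity measure was used, at every time point sampled.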
How Could This Be Useful?
If a broader array of HMOs is associated with less risk of NEC, and the presence of LNDH I carries the same association, it opens the door to the next phase of this research: could adding LNDH I in particular, or better yet a wide array of HMOs, to mother’s milk reduce the occurrence of NEC? This will of course need to be tested in well-designed randomized trials, but this type of fortification could be the next step in what we add to human milk to enhance infant outcomes. Given that it may be difficult to determine in short order whether a mother’s milk already contains these HMOs, a broad-based fortification strategy assuming insufficient amounts would likely be best. A quick search on clinicaltrials.gov shows 101 trials in children looking at HMOs at the moment, so more information on this topic is certainly on the way. Could HMOs be the magic bullet to help reduce NEC? Just maybe!