Everything posted by CharonY

  1. The inheritance is mostly not governed by the mitochondrion itself but by its host cell.
  2. The Dunning-Kruger effect also applies to some extent. Specifically, in expert areas supervisors are likely to be non-experts and will have issues evaluating their own competence. If they are micro-managers, that will be a problem.
  3. I am not sure what your main hypothesis is, but data so far suggest that heteroplasmy may be deleterious.
  4. Well, it is not terribly tricky, at best slightly on the expensive side. But even if you see a higher mutation rate, it does not necessarily explain why maternal inheritance is favored. As I mentioned, uniparental inheritance is also observed in some isogamous organisms (i.e. where the gametes are physiologically similar or identical). It could be a contributing factor, or just a side effect of some other mechanism. To the best of my knowledge (which is admittedly limited) we simply do not know. In general, the mitochondrial mutation rate is rather high and there may be certain selective features in there. In this respect dilution becomes more interesting again, as under stringent conditions (to minimize mixtures of mitochondria, i.e. heteroplasmy) only a few are likely to survive. The dilution effect is well known to happen in bacteria with respect to plasmid propagation, and I suspect a similar logic (a toy simulation of this segregation effect is sketched after this list). There are estimates somewhere that put sperm mtDNA at a higher mutation rate, though this could be muddied by the high proliferation rate. But again, this alone does not seem to explain why homoplasmy is preferred. As a counterpoint, it is just as likely that it is due to the presence of the molecular apparatus (maybe linked to sexual reproduction in general) that maintains it. An interesting link is that in mice either homoplasmic state (paternal or maternal) was apparently normal, but heteroplasmic ones were not (Sharpley et al., Cell 2012). So one could speculate that the cell needs to maintain homoplasmy (for some reason) and, since the maternal cell carries most of the proteins, the apparatus may favor its own.
  5. I am not sure how genome comparison would yield the specific insights outlined in the OP; maybe I am missing something here. I am also not sure about the reasoning in the first part: do you mean why the degradation pathways exist? The latter is a rather new finding that calls the dilution hypothesis (which has been around for quite a while) into question. I do not understand what that means.
  6. There are two basic hypotheses regarding maternal inheritance of mitochondria. Note that selection is not a necessary factor here. In the dilution hypothesis as outlined above it is a sheer numbers game: it is quite possible that this is merely down to the fact that sperm cannot, or need not, contain as many mitochondria as oocytes. However, in some animal models evidence for a second mechanism, the active degradation of paternal mitochondria, has been gathered. Different mechanisms have been identified in organisms ranging from mammals to slime molds. In biology the "why" question is quite dangerous. It is always attractive to speculate about some evolutionary mechanism, but often it is incredibly hard to find specific evidence. It then becomes too easy to build up narratives that make intuitive sense but are not supported by actual data. For instance, it makes sense to speculate that higher oxidative stress during the swimming stage causes DNA damage. But that does not explain why uniparental inheritance exists in isogamous organisms such as Physarum. Or one could speculate about competition between mitochondria, where the unequal distribution of gamete size and composition leads to the maternal one winning out. But again, without further data these remain speculative narratives.
  7. Did they do mass-screening pre-Fukushima?
  8. Yeah, no. It would require the government to secretly have access to sci-fi-level advancements unknown to scientists. Extremely unlikely and bordering on a conspiracy theory.
  9. In addition to what swansont said, the issue is that we are dealing with an incomplete dataset, i.e. we have partially dependent data (the resampled part of the set) as well as independent data. A proper analysis would have to take that into account. Note that so far no approach has been suggested, much less provided. But if the OP had proposed a statistical test, it would have been easy to demonstrate why its assumptions are violated, unless, of course, it were a proper method for incomplete sets such as the one proposed by Choi and Stablein (1982); even then one would need more information on the dataset, e.g. how many repeat samples the set contained (a small numerical illustration of the repeat-screening problem is sketched after this list). A comparison with a set outside of Fukushima would only make sense if the same proportion of the population had been screened (also a point in the Lancet paper). I also echo the request for publications. I believe I actually saw one, but IIRC it was a rather limited dataset, and now I am not even sure whether it was a proper paper at all (as I cannot find it).
  10. One of the extremely tricky bits is of course correlating total radiation, absorbed radiation and biological damage (which also has to take the source into account). That is why we have all these different measures (becquerel/sievert/gray), which can confuse matters a lot. That being said, Hiroshima was estimated at 8-11 YBq; Fukushima (according to Steinhauser et al., Science of The Total Environment 470–471, 2014, p. 800–817) at 520 PBq (a quick order-of-magnitude comparison of these figures follows after this list). But again, that alone does not allow an assessment of biological effects; many other parameters (including timing, localization and spread of the release) would severely affect the actual radiation damage. Radiation damage is simply not easy to assess at all, and any short-term conclusions would have to be taken with a grain of salt until some larger longitudinal studies are available.
  11. Yes, and assuming you normalize to a year you would have to add that part in the methods section anyway. The alternative would be to limit the analysis to the time frame for which you have data for every region, if possible.
  12. Arete addressed all the points already, but for this one: there is sometimes the rare option that a company hires you at master's level with the expectation that you get a PhD. This will be sponsored by the company and is often a joint project with a university. However, I would not count on it, as these are typically special cases.
  13. Actually this is tricky. Extrapolating to 12 months is dangerous if you do not know whether the incidence rate is stable throughout the year (which it typically is not; see the extrapolation example after this list). But if you do not find any data, it is definitely a discussion point. Rather unfortunately, it seems that the WHO also struggled with the dearth of data, so it could be that your report will have to be somewhat inconclusive...
  14. You are missing the point that you are comparing data merely a year apart: how much resampling is being done in that group? Unless you can answer that, your assertions are simply wrong. But as I do not see any interest in discussing the issue, I will assume that the discussion will lead nowhere. Likewise the blatant misunderstanding of the Chernobyl cases. This nicely sums up your basic misunderstanding of what statistical analyses are for. It is clear that you have no real interest in learning how a proper analysis could be done, and your use of spurious associations (and that is generous, considering that there is no statistic whatsoever) indicates that you are just looking for someone to support your assertions. I guess this makes the thread useless until more papers are published (such as Watanobe et al., PLoS One 2014 Dec 4;9(12), or Shibuya et al., Lancet 383:9932, p. 1883-1884, who reiterate some of the points I made; or Tronke and Mykola, Thyroid 2014, 24:10, p. 1547-1548, with a very preliminary assessment in comparison with Chernobyl; but seriously, why conduct a proper study when a few numbers and gut feeling can work just as well?). Also, to whom are you attributing the quotes that you considered to be lies?
  15. Let me try to explain why that is not the case. Assume that at the first time point you have cancer cases among children below 18. Assume further that one year later you measure the same cohort. Unless all your cases turned 18 in the last year, you will count them again and add new cases to the set, as cancer risk is cumulative with age. Your assumption of equilibrium (which does not matter anyway with that set) is flawed, as obviously only one group leaves the set: those who were 18 at the first measurement. Every other group stays. Unless you account for that (and other factors such as births), you cannot blindly use a trend built on it. Again, the dataset as described is not independent (a toy cohort simulation illustrating this is sketched after this list). Once I have more time I could walk you through the statistical errors you are committing, if you are really interested.
  16. Not necessarily. For critical applications you would place it in a clean area anyway. But often you have equal or larger risks along the way, unless you work in a zero-flow box. In that case it helps if you place it under a horizontal clean bench. Sample-to-sample contamination typically does not happen (or at least I found no evidence for it when I evaluated the system). That being said, I am not aware of speedvacs with HEPA filters, though some freeze-drying systems have them (but the vacuum is typically higher there, too, resulting in more turbulence).
  17. Depends, really. Most modern speedvacs are pretty good at timing the onset of centrifugation (if you do it manually, some people switch it on too early) and have a vent that allows slow release while still centrifuging down. Thus, even if you load everything up, chances are that you will not observe cross-contamination. However, general contamination can get into the sample fairly easily if the area is prone to any kind of contamination, as the samples remain open for an extended amount of time. If everything is kept clean, contamination is rarely an issue (I have run speedvacced samples through MS or done sensitive PCRs with no indication of contamination of any kind, for example). However, in shared labs where the instrument is used (and abused) for all kinds of stuff, chances rise dramatically that the area and the instrument itself get dirty. So unless you have a jury-rigged system of dubious origin, the main issue tends to be cleanliness and not so much the process or instrument itself (cleaning out the rotor and chamber is mandatory).
  18. I suspect that the implementation of any measure at the state level could also be quite challenging. The definition of an underprivileged minority alone would be tricky if one tries to make it watertight enough to avoid politicking at the state level (or below).
  19. Now we have moved from hyperbole to utter ridiculousness.
  20. Assuming you mean UV radiation, I would like to see a source not for the claim that it causes cancer, but that it is unparalleled. Based on which measure, specifically (per dose, per normal exposure, etc.)? Typically these types of assessments are very complicated, and such a blanket statement is most likely not terribly accurate, to frame it carefully.
  21. Well, in Germany that would be a litre (or a Maß, as it is called in the funny area known as Bavaria).
  22. It depends on the type of assay. Some work sufficiently well with crude samples; in others you need protein purification. Your first step should be to look at the protocol for the assay you are using and work your way backward from there to establish a standardized workflow.
  23. Either you are misunderstanding me or I am misunderstanding you. However, you do not have independent variables here: the same children are getting screened a year later, and cancer risk is cumulative. Also, you would need a reference set to compare against. If you looked at any population with that methodology you would see an increase over time, which may or may not reach stable values, depending on whether children leave or enter the area and how many are born vs. leaving the set at age 18. That does not mean these factors have an effect on the outcome, but any analysis would at minimum have to normalize for them. You can also look at the problem as a time series, which is probably more appropriate given the structure of the data. What you describe is an increase (or regression) from 12 to 86, which looks massive. But first of all, it is only two data points (so just drawing a line through them is tricky; a short regression example follows after this list), and you do not have a reference set. I.e. you would have to look at a different prefecture or area that uses the same screening methodology and has a comparable composition of youths. Then you would have to look at what the yearly growth rate would be. And to reiterate: in Chernobyl it took a minimum of four years to see a deviation from the norm, and a bit longer until it was certain that it was significant (aside from the immediate deaths from acute doses). It was by far not immediately noticeable. The first report that I am aware of that mentioned that something was outside the norm was a short communication in Nature in 1992 (i.e. six years after the incident), but while it collected data, it was also lacking in statistical soundness. So while incidence had started rising, the data (the sampling methodology was also a bit lacking, but it was more of a quick-and-dirty survey at that time) were still inconclusive. It took a while after that to put these values into statistical perspective. One of the reasons is that, depending on sampling methodology and population structure, fluctuations are expected. A single year that is higher does not tell you much; but if the trend in a time series continues, it will start sticking out. You are proposing a statistical design that is unsound, I am afraid. However, the whole time series is being published and I am certain that people are looking into it. Does the number look high in isolation? Yes. But at this point it is barely above gut feeling, and that is why sound statistical models are needed to test it. The proposed design is not useful for that, though. The trouble with time series is that you need quite a few time points. Population size determines whether your data within a single time point are reliable, but you need more time points to figure out trends and associations. In the study for which I provided the DOI, so far no significant correlations were found (including thyroid issues in dependence on radiation), but I have not read the full report yet. I am currently not certain whether there are others out there, but most epidemiologists (well, those that I know, in any case) agree that it is too early to make any kind of assessment with any level of certainty at this point. But the good thing is that data collection is apparently better, so we may have studies out earlier than in the Chernobyl case.
  24. If job security is an issue, a research career, including an academic one, is probably not the best way to go. Unless you are extremely well networked, you would have to deal with employment uncertainty at least until your ~40s. While there are also research positions in companies, they tend to be relatively rare and more focused on development rather than discovery.
  25. Not while you are an undergrad. You are supposed to look around and find your interests. Real topic selection tends to start when you join a group for your graduate thesis. Even then switching is possible, but typically, once you build up a certain set of expertise, it will to a degree influence your topic of research.
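
To put a rough number on the dilution/segregation argument in post 4: below is a minimal Python sketch of how random transmission of a limited number of mtDNA copies per division drives a lineage from heteroplasmy towards homoplasmy, in the same spirit as plasmid loss by dilution in bacteria. Bottleneck sizes, starting fractions and the seed are entirely hypothetical and not based on any measured values.

```python
import random

def segregate(start_frac=0.5, bottleneck=10, max_divisions=200, seed=1):
    """Track the fraction of one mtDNA variant along a single cell lineage.

    At each division only `bottleneck` copies are transmitted at random
    (Bernoulli sampling at the current fraction); the population is assumed
    to regrow at the transmitted proportion. Drift alone pushes the lineage
    towards homoplasmy (fraction 0 or 1), faster for small bottlenecks.
    """
    rng = random.Random(seed)
    frac = start_frac
    for division in range(1, max_divisions + 1):
        transmitted = sum(rng.random() < frac for _ in range(bottleneck))
        frac = transmitted / bottleneck
        if frac in (0.0, 1.0):
            return division, frac
    return max_divisions, frac

if __name__ == "__main__":
    for b in (5, 20, 100):
        div, frac = segregate(bottleneck=b)
        state = "homoplasmic" if frac in (0.0, 1.0) else f"still heteroplasmic ({frac:.2f})"
        print(f"bottleneck={b:4d}: {state} after {div} divisions")
```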
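
Regarding the partially dependent dataset in post 9: the sketch below only illustrates the denominator side of the problem, namely that counting repeat screens of the same children as if they were independent observations changes both the apparent rate and its confidence interval. All numbers are invented for illustration, and the crude normal-approximation interval is no substitute for a proper method for incomplete paired data such as Choi and Stablein (1982).

```python
import math

# Hypothetical numbers: a screening programme that re-examines part of the
# same cohort a year later.
unique_children = 100_000   # distinct individuals ever screened
repeat_screens = 60_000     # of those, screened a second time
total_screens = unique_children + repeat_screens
cases_detected = 30         # cumulative confirmed cases

def rate_and_ci(cases, n):
    """Crude detection rate per 100,000 with a normal-approximation 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return 1e5 * p, 1e5 * 1.96 * se

naive = rate_and_ci(cases_detected, total_screens)      # every screen counted as independent
proper = rate_and_ci(cases_detected, unique_children)   # denominator = distinct children

print(f"all screens treated as independent: {naive[0]:.1f} +/- {naive[1]:.1f} per 100k")
print(f"per unique child:                   {proper[0]:.1f} +/- {proper[1]:.1f} per 100k")
```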
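
For the release figures quoted in post 10, a quick order-of-magnitude check (taking the lower end of the Hiroshima estimate); the point is only that raw activity in becquerel says nothing about absorbed dose (gray) or biological effect (sievert).

```python
# Becquerel prefixes: peta = 1e15, yotta = 1e24.
hiroshima_bq = 8e24      # lower end of the 8-11 YBq estimate quoted above
fukushima_bq = 520e15    # 520 PBq (Steinhauser et al. 2014)

print(f"ratio (Hiroshima / Fukushima): {hiroshima_bq / fukushima_bq:.1e}")
# -> ~1.5e+07, i.e. roughly seven orders of magnitude in raw activity,
#    which is exactly why activity alone is a poor proxy for radiation damage.
```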
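
On the extrapolation issue in posts 11 and 13: a toy example (the monthly counts are made up) of how scaling a partial year up to 12 months misleads when incidence is seasonal.

```python
# Hypothetical monthly case counts, purely for illustration.
observed_months = [30, 28, 25, 20, 15, 12]                     # data only for Jan-Jun
full_year = observed_months + [10, 10, 12, 18, 24, 28]         # what the rest of the year might look like

naive_annual = sum(observed_months) * 12 / len(observed_months)
true_annual = sum(full_year)

print(f"naive extrapolation from 6 months: {naive_annual:.0f} cases/year")
print(f"actual 12-month total:             {true_annual} cases/year")
# The naive estimate overstates the yearly total because the observed window
# happened to cover the high-incidence season.
```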
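
To illustrate the cumulative-counting argument in post 15: a toy cohort model (population sizes and incidence are hypothetical; births, deaths and migration are ignored) in which the underlying incidence never changes, yet the cumulative case count among under-18s climbs year after year simply because diagnosed children stay in the set until they age out at 18.

```python
import random

def cumulative_screen_counts(years=5, incidence_per_100k=5.0, pop_per_age=20_000, seed=0):
    """Toy model of repeatedly screening everyone under 18.

    The underlying incidence is held constant. Diagnosed children stay in
    the count (cases are cumulative); each year the oldest cohort ages out
    and a fresh birth cohort enters. The cumulative total still climbs
    year over year without any change in risk.
    """
    rng = random.Random(seed)
    p = incidence_per_100k / 1e5
    cases = [0] * 18                      # cumulative cases per one-year age cohort (ages 0-17)
    totals = []
    for _ in range(years):
        cases = [c + sum(rng.random() < p for _ in range(pop_per_age)) for c in cases]
        totals.append(sum(cases))
        cases = [0] + cases[:-1]          # oldest cohort leaves, newborns enter
    return totals

if __name__ == "__main__":
    for year, total in enumerate(cumulative_screen_counts(), start=1):
        print(f"year {year}: {total} cumulative cases among under-18s")
```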
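
On the two-data-point problem in post 23: a short sketch (all counts other than the quoted 12 and 86 are invented) showing that an ordinary least-squares line through two points always fits perfectly, so its slope has no estimable uncertainty; only a longer series, ideally alongside a reference region screened the same way, yields a trend estimate with a standard error.

```python
import math

def slope_with_se(years, counts):
    """Ordinary least-squares slope and its standard error.

    With only two observations the residual degrees of freedom are zero,
    so the slope's uncertainty is undefined: two points always lie exactly
    on a line and carry no information about a trend.
    """
    n = len(years)
    mx = sum(years) / n
    my = sum(counts) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, counts))
    slope = sxy / sxx
    residuals = [y - (my + slope * (x - mx)) for x, y in zip(years, counts)]
    df = n - 2
    se = math.sqrt(sum(r * r for r in residuals) / df / sxx) if df > 0 else float("nan")
    return slope, se

if __name__ == "__main__":
    # Two points: slope 74 cases/year, but the standard error is undefined (nan).
    print(slope_with_se([2012, 2013], [12, 86]))
    # A longer, hypothetical series gives a slope with a finite standard error.
    print(slope_with_se(list(range(2012, 2020)), [12, 86, 70, 90, 75, 95, 80, 100]))
```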