Everything posted by Glider

  1. If you're talking about the whites of your eyes going red, then it's simply the dilation of the capillaries in the sclera (the white of your eyes). Anything that reduces this redness is simply reversing that effect.
  2. Yeees. Outstanding bedside manner. Very reassuring.
  3. I find it ironic how often the opening phrase "As a Christian..." is followed by open demonstrations of intolerance and sweeping judgement. I wonder if these alleged Christians believe that it's only non-Christian heathens who need to bother with the other tenets of the religion, such as "Judge not, lest ye be judged", "Love thine enemy" and "Let he who is without sin cast the first stone". Speaking of 'love thine enemy', the full thing goes (more or less): "I say unto you, Love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you; that ye may be the children of your Father which is in heaven". Gays and lesbians aren't anybody's enemies; they do not 'curse you', nor 'hate you', nor 'use you despitefully', nor 'persecute you', yet so many self-professed 'Christians' can't even bring themselves to show them the most basic human respect, let alone any Christian love. I don't believe in God, but even so, the larger question of God's existence seems to me to be premature when there is so little evidence for the existence of true Christians.
  4. In the experiment outlined by Ms DNA (which is a good one; I would go with it), your independent variable would be the substance you coat the fruit with, and it would have three levels: lemon juice, vinegar and water (water being the control). That would be easier than having the amount of lemon juice as an IV, because coating a piece of fruit in a liquid takes a more or less fixed amount (depending on the viscosity of the liquid), so to manipulate the amount of lemon juice you would have to work out several different concentrations, and things could begin to get complicated. Your dependent variable would be rate of browning. This is probably the easiest measure, as you can simply record the amount of time until the fruit starts going brown, rather than trying to measure how brown a piece of fruit is or how much 'browner' any piece of fruit is than any other, which may be a bit subjective. (A sketch of how the resulting data might be analysed follows below.)
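    For illustration only (this is an addition, not part of the original post): a minimal sketch of how such data might be analysed, assuming you had recorded time-to-browning in minutes for several pieces of fruit per condition. The data values and the use of a one-way ANOVA are assumptions, not the original poster's method.

        # Hypothetical sketch: one-way ANOVA comparing time-to-browning (minutes)
        # across the three levels of the IV (lemon juice, vinegar, water control).
        # All data values below are invented for illustration.
        from scipy import stats

        lemon   = [95, 110, 102, 98, 105]
        vinegar = [70, 65, 80, 72, 68]
        water   = [25, 30, 22, 28, 26]   # control condition

        f_stat, p_value = stats.f_oneway(lemon, vinegar, water)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")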
  5. Live Forever?

    Pretty much. As I say, excluding disease, there is no reason we can't remain mentally agile into our 80s and 90s. Many people do. Not everybody sinks into dribbling oblivion as they get older. It's like the body: keep it active and fit, and it will serve you well for longer.
  6. Not to mention increasingly cynical.
  7. Live Forever?

    "Your brain cell dies out, and so you die"? We have many brain cells. Many, many, many. We can also lose a significant number of them before there is any noticable effect at all. The majority of (natural) deaths are attributable to the failure of other organs; the heart and kidneys are common candidates. These are a function of a general systemic failure. The heart becomes less effective, the blood pressure becomes less controlled, the kidneys (which depend on strictly controlled BP, which is why they are significant in its control in the first place) begin to fail, and so-on and so-on, leading to complete systemic collapse.
  8. Live Forever?

    No. Alzheimer's is a disease. Another term for it is pre-senile dementia. Alzheimer's can afflict people in their 50s and 60s (pre-senile), and occasionally even earlier. Moreover, whilst natural ageing processes may have an effect on cognitive function, the neurological and cognitive carnage wrought by Alzheimer's far outweighs anything that occurs naturally (i.e. in the absence of pathology). There is no real reason that people in their 80s can't be as mentally agile as anybody else. There is a lot of evidence to suggest that the loss of mental ability, outside of actual pathology, conforms largely to the principle: 'If you don't use it, you lose it'.
  9. I got Glider....oh, wait...
  10. The theory was developed independently by William James and Carl Lange. Their argument (as far as I recall) was that the immediate response to emotional stimuli is visceral (autonomic), and that the emotional response is determined by these autonomic changes. The argument has been pretty much trashed by now, on the basis that autonomic changes are comparatively slow, whereas emotional responses can occur before the individual is even consciously aware of the stimulus (within 250 ms). Moreover, there are only a limited number of characteristic patterns of autonomic response, generally based upon fight or flight, but there is a much greater range of emotional states, so autonomic patterns of arousal cannot account for anywhere near the complete range of emotions for which they were supposed to be responsible.

    Schachter & Singer (1962) proposed the cognitive labelling theory, which states that physical arousal is inherently neutral, and that it is the cognitive evaluation of the context which determines the resulting emotional state. The idea had some merit, but their experiment was flawed. They injected adrenaline into participants (telling them it was something else), and then placed them in a waiting room with an experimental confederate who acted either happy or angry. The results showed that participants in the company of the 'happy' confederate labelled their state of arousal (which was due to the adrenaline) as happiness, and those in the company of the 'angry' confederate labelled their arousal as anger. The main flaw here is that the experiment was based on the assumption that emotion is caused by autonomic arousal (i.e. that autonomic arousal precedes emotion), and so was based on circular reasoning and arguably didn't actually test anything.
  11. That isn't an experiment, it's a question. You need to provide an outline of the experiment you would use to try to answer that question.
  12. That's right. Because it is small and results in a small chance of a type-I error. The precise value for alpha can be set at whatever value the researcher thinks is appropriate (although it would be extremely unwise to increase it). That alpha is simply a convention is not grounds for rejecting it. Moreover, it means that if you don't agree with it, you can change it. The responsibility lies with the researcher, not with the concept of significance.

    That's right. This comes down to the definition of 'significance'. As alpha is only a convention, 'significance' means only that, should a researcher consider a result significant, there is an acceptably low probability of falsely rejecting the null hypothesis. It does not mean that an effect showing p less than 0.05 is 'true' and one showing p greater than 0.05 is 'false'.

    Most comparisons are made against known population parameters, i.e. normative values which are known within a given population. If your sample is representative, then these values (mean and SD) will be the same within your sample and the population. You apply your intervention and measure the degree to which these values have changed (e.g. with a t-test). The t-test provides a statistic which indicates the magnitude of the difference (you assume is) caused by your experimental intervention, and is a test of whether the two samples of data are from a broad, single distribution (i.e. one population with high variance) or from two separate (although overlapping) distributions. Alpha is set to provide the best balance between the probabilities of type-I and type-II errors. If it is set at 0.05 and your t statistic is shown to be significant at alpha = 0.05, that simply means you have an acceptably low probability of being wrong by rejecting the null hypothesis (A = B) in favour of the alternative (A /= B). The p-value is simply the probability that an effect as large as, or larger than, that observed would occur by chance if there were no other differences between the groups. By inversion, p = 0.05 is often read as a 95% chance that the difference was due to the experimental intervention. But as I said, depending on what is being measured, it is for the researcher to decide whether a 5% chance of being wrong is acceptable.

    Software may provide apparently highly precise values for p, but that does not mean they are accurate (if you think about it, the term 'accurate measure of probability' is almost an oxymoron). They can only be as accurate as your sample is representative, and even then they are subject to the effects of error and noise. Nor does it mean that they have to be accepted. The researcher must decide which error is the most important (type-I or type-II), and should also have an idea of how much of a difference their intervention must cause for it to be acceptable (i.e. they should have calculated an acceptable effect size). p values are simply a guide. Over-reliance on p values and computer output is no substitute for common sense and sound data, and I admit freely that there are many researchers who depend entirely upon their SPSS output as the be-all and end-all, without considering other factors.

    All statistics are estimates. The whole point of them is to be able to achieve reasonable estimates of population values without the need to measure every individual within a population. Statistics are based on sample estimates of population parameters. Even the calculation for the sample SD, $\sqrt{\sum(x - \bar{x})^2 / (n - 1)}$, uses n - 1 as an attempt to account for error, which makes the sample SD an estimate of the population SD (where the sum of squares is divided by N instead).

    The validity of any statistic (including p) depends upon the validity of your data, and the degree to which your sample is truly representative of the population of interest. p values do not say 'significant or not'. They provide an estimate of the probability that the difference you observed could happen by chance alone. It is the researcher who must decide whether or not that probability is low enough to reject the null hypothesis with a reasonable degree of safety, and whether it is safe to assume that the difference was due to their intervention and not some other factor(s). In order to make that decision, the researcher has to take into consideration the power of the experiment, the effect size, the sampling method, the type of data, the level of measurement, possible confounds, the degree of noise and all the other factors that may influence the outcome and validity of the experiment. As you said yourself, alpha is arbitrary, so by definition it is the researcher's decision.

    I agree, there is a large scope for abuse, but then that's true of all statistics; there are lies, damn lies, and then there are statistics (check out the use politicians and advertisers make of stats). But it is people who abuse the statistics. To blame the statistics themselves is a bit pointless. In fact, the greater the understanding of the underlying principles of statistics, the less the scope for abuse (unless you're an unscrupulous git, of course; cf. the politicians and advertisers thing). (A small sketch of two of these points follows below.)
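    To make two of these points concrete (the n - 1 in the sample SD, and p as a probability to be weighed against a chosen alpha rather than a verdict), here is a minimal sketch. This is an illustration added here, not the original poster's; the data are invented.

        # Sample SD uses n - 1 (an estimate of the population SD), and a t-test
        # returns a p-value that the researcher interprets against their chosen
        # alpha. All data values are invented.
        import numpy as np
        from scipy import stats

        a = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9])
        b = np.array([5.9, 6.1, 5.7, 6.3, 5.8, 6.0])

        print(np.std(a, ddof=0))   # population formula: divide by N
        print(np.std(a, ddof=1))   # sample estimate: divide by n - 1

        t, p = stats.ttest_ind(a, b)   # independent-samples t-test
        alpha = 0.05                   # conventional maximum; the researcher may lower it
        print(f"t = {t:.2f}, p = {p:.4f}, reject H0 at alpha = {alpha}: {p < alpha}")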
  13. Yes, that's true, and it is a problem, but it's more due to misuse by researchers than an inherent problem with the statistic.

    It is both, if you think about it. p stands for a proportion (of the area under the distribution) within which your effect probably falls (assuming, as you rightly point out, that the research is sound and free of any confounds or bias, which is extremely rare). By the same token, it denotes the probability that you will be wrong if you reject the null hypothesis. It's essentially the same thing worded differently.

    Of course it can't. But then the same can be said of any test statistic. The presence of a confounding variable, i.e. an uncontrolled systematic error, renders all results worthless. A p value will only be as reliable as the experiment that generated the data it is based upon. The principal problem is the failure by many researchers to understand this: although stats software often presents precise 'exact p' values to three or four decimal places, this does not mean they are accurate. They are only as good as the data they are based upon. If sample selection, data collection or any number of other factors are flawed, then any results based upon those data will also be flawed. (A small simulation of this point follows below.)

    The main debate is: given that there will be noise in any experimental data, how much weight should be given to precise values and rejection levels? For example, Howell (1997) states in his book: "The particular view of hypothesis testing described here is the classical one that a null hypothesis is rejected if its probability is less than the predefined significance level, and not rejected if its probability is greater than the significance level. Currently a substantial body of opinion holds that such cut-and-dried rules are inappropriate and that more attention should be paid to the probability value itself. In other words, the classical approach (using a .05 rejection level) would declare p = .051 and p = .150 to be (equally) "nonsignificant" and p = .048 and p = .0003 to be (equally) "significant." The alternative view would think of p = .051 as "nearly significant" and p = .0003 as "very significant." While this view has much to recommend it, it will not be wholeheartedly adopted here. Most computer programs do print out exact probability levels, and those values, when interpreted judiciously, can be useful. The difficulty comes in defining what is meant by "interpreted judiciously"."

    So it seems that whilst p values are useful, potential problems stem from inappropriate interpretation of them.

    Howell, D. C. (1997). Statistical Methods for Psychology (4th ed.). Wadsworth.
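    As an illustration of the 'precise but not necessarily accurate' point (an added sketch, not from the post): repeated small samples drawn from the very same two populations give 'exact' p-values that vary considerably from run to run, so their apparent precision should not be mistaken for accuracy.

        # The 'exact' p printed by software bounces around across repeated small
        # samples from the same two populations (true difference of 0.5 SD here).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        for run in range(5):
            a = rng.normal(loc=0.0, scale=1.0, size=15)
            b = rng.normal(loc=0.5, scale=1.0, size=15)
            t, p = stats.ttest_ind(a, b)
            print(f"run {run + 1}: p = {p:.4f}")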
  14. But the sign does indicate the direction of the difference (A greater than B or A less than B), so it's still important.

    We know that the cut-off value of 0.05 is arbitrary, and as such, we know it is not fixed, and can be changed (reduced, never increased) according to the requirements of the particular experiment. That the alpha value is simply a convention is not grounds for rejection. There is a reason for that convention.

    I beg to differ. Research is based on probability. Hypotheses are formulated to be disproved, but they can never be proven, only supported. Due to the absence of certainty in research, it attempts to walk the fine line between type-I and type-II error. A type-I error is where the researcher rejects the null hypothesis when it is in fact true, and the probability of this is denoted as alpha. A type-II error is where the researcher accepts the null hypothesis when it is in fact false. The probability of this is denoted as beta. As you say, the alpha level of 0.05 is arbitrary, but alpha and beta are inversely related: if you reduce alpha too far, you increase beta to unacceptable levels, and vice versa. This is why the maximum level for alpha has been (arbitrarily) fixed at 0.05. Power analysis is used to reduce beta as far as possible. It is for the researcher to decide which error is the most important, depending upon what the experiment is testing.

    However, most of the problems occur when people misinterpret the meaning of p. If you conduct an experiment with alpha = 0.05 and the results show p = 0.04, this does not mean your hypothesis is proven; it means only that the probability that the result happened by chance is 4%. In other words, you (the researcher) have a four percent chance of being wrong if you reject the null hypothesis in favour of your alternative hypothesis. Moreover, alpha can be set at any lower level you wish, depending on which you consider to be the more important error, or the error with the more dire implications: failing to detect an effect that exists (type-II), or detecting an effect which does not exist (type-I). In areas that generate less noisy data than Psychology, alpha is often set by the researchers at 0.01, or even 0.001.

    Rubbish. Very small samples increase beta, i.e. they reduce the power to detect an effect. In other words, the smaller the sample, the less representative of the population it is, and the greater the chance you will fail to detect an effect that does exist within that population (a type-II error). Sample size, effect size and power are all interrelated. Sample size is a key factor in the power of an experiment to detect an effect (i.e. to avoid a type-II error). Take, for example, Pearson's product moment correlation. Any first year knows that with a small sample one can generate large correlation coefficients (r) that fail to reach significance, whereas with a large sample comparatively small values of r can reach significance. The same has been shown for a number of antidepressant drugs. Under strictly controlled and finely measured conditions, the drugs have a small but statistically significant effect. However, when used in the real world (outside of laboratory conditions) these drugs had no measurable (clinical) effects. In fact they would still have been having some effect, but that would have been buried under the 'noise' of real-world conditions (i.e. the original experiments lacked ecological validity).

    What you are talking about here is the difference between statistical and clinical significance, and also the failure of some people to understand the difference between statistical significance and effect size. Effect size and statistical significance are different things. To generate p less than 0.0005 (for example) does not mean there is a large effect. It simply means the chances that you are wrong in rejecting the null hypothesis are very small. As shown by the Pearson's example, small effects can achieve statistical significance if the experiment is of sufficient power. All a low value for p means is that there is a greater probability that the effect exists, not that it is large. Effect size (e.g. Cohen's d) is calculated by dividing the difference between the sample means by the (pooled) standard deviation, giving a standardized difference that is completely distinct from p. A statistically significant but small effect can be genuine, yet have no clinically significant effect. In other words, to be clinically significant, an effect needs to be of sufficient size to have a measurable effect in real-world application (i.e. an effect that is detectable through all the other 'noise' generated outside of controlled conditions). (A small sketch of these two points follows below.)

    1) This is not the kind of advice you should be offering to second year research methods students. Mainly because... 2) You are wrong. Errors concerning statistical significance occur through misuse and abuse by people who have failed to understand the principles and functions of research methods and statistics. Out-of-hand rejection of basic statistical principles tends to come from the same people.
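    A minimal sketch of the Pearson's r point and of effect size as a quantity distinct from p (an added illustration with simulated data; Cohen's d is used here as one common standardized effect size):

        # With a modest true correlation, a small sample tends not to reach
        # significance while a large sample does; Cohen's d is computed
        # separately from p. All data are simulated.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def correlated_pair(n, rho=0.3):
            x = rng.normal(size=n)
            y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
            return x, y

        for n in (15, 500):
            x, y = correlated_pair(n)
            r, p = stats.pearsonr(x, y)
            print(f"n = {n}: r = {r:.2f}, p = {p:.4f}")

        # Cohen's d: difference between group means divided by the pooled SD
        a = rng.normal(0.0, 1.0, 40)
        b = rng.normal(0.3, 1.0, 40)
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        d = (b.mean() - a.mean()) / pooled_sd
        print(f"Cohen's d = {d:.2f}")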
  15. A t value can be positive or negative. Basically, t is a continuum with zero in the centre, where zero indicates that the means of the two samples are identical. The further away from zero t is, in either direction, the greater the difference between the means of the two samples of data. The value for p (significance) comes from comparing t (with its degrees of freedom) against the t distribution, and SPSS does this for you and presents all the statistics you need to report: t, df and p. In Psychology, the cut-off point for significance is 0.05 (5%), so any p value less than 0.05 is considered significant. If you are doing an independent groups t test (e.g. males vs. females) and measured reaction times, you would, for example, code males as 1 and females as 2. Say this gives a result of t = -2.459. The negative value for t comes simply from the way you coded your groups. If you coded females as 1 and males as 2 and re-ran the same test on the same data, you would get t = 2.459 (same magnitude, just at the other end of the continuum). Essentially, t = -2.459, p = 0.02 indicates a significant (but not very large) difference between your samples. (A short sketch of the coding point follows below.)
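    A short sketch of the coding point (an added illustration; the reaction-time values are invented): swapping which group is entered first simply flips the sign of t, while p is unchanged.

        # The sign of t depends only on the order/coding of the groups.
        from scipy import stats

        males   = [312, 298, 305, 320, 310, 301]   # coded 1 (invented RTs, ms)
        females = [286, 290, 279, 295, 288, 284]   # coded 2

        t1, p1 = stats.ttest_ind(males, females)
        t2, p2 = stats.ttest_ind(females, males)   # same data, groups swapped
        print(f"t = {t1:.3f}, p = {p1:.4f}")
        print(f"t = {t2:.3f}, p = {p2:.4f}")       # same magnitude, opposite sign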
  16. Cool! It's not evidence of anything though. Who indeed. It was on TV. It was probably there for a reason. Neat trick; not evidence. I already have one of those. It's the random number generator (RNG) function on my calculator. It can generate lottery numbers for me. The only drawback is that numbers predicted by the RNG have the same probability of being correct as numbers generated by any other method (about 13.9 million to 1 against).
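    (For the record, the 13.9 million figure corresponds to a 6-from-49 lottery: C(49, 6) = 13,983,816 possible combinations. The lottery format is an assumption; the toy sketch below simply illustrates the point that RNG-generated numbers are no better than any others.)

        # Any six numbers, however generated, have the same ~1 in 13,983,816
        # chance in a 6-from-49 draw (assumed lottery format).
        import math
        import random

        print(math.comb(49, 6))                        # 13983816
        print(sorted(random.sample(range(1, 50), 6)))  # one RNG 'prediction'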
  17. Very true. As for how drugs get into prison, visitors (and occasionally staff) smuggle them in.
  18. The question "Does neuroscience explain how psychics work?" is based on the (flawed) premise that psychics do work (i.e. that there is something to explain). As Blike said, there is no empirical evidence to show that psychics do work, therefore there is nothing for neuroscience to explain.
  19. Hamza is talking about the mean deviation. The measure of how much any given observation varies from the mean is a deviation. These will have either positive or negative values, and their mean will always be zero (as the mean is the arithmetical centre of the data). So, to calculate the mean deviation, we ignore the sign (+ or -), which gives us for each deviation the absolute deviation, written |d|. The two (equivalent) ways of calculating the mean deviation are therefore: $\frac{\sum |x - \bar{x}|}{N}$ or $\frac{\sum |d|}{N}$. The standard deviation is: $\sqrt{\frac{\sum (x - \bar{x})^2}{N}}$ (or with N - 1 in the denominator for the sample SD). Dang! I wish I knew how to use the formula doohickey. (A short numerical sketch follows below.)
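    A short numerical sketch of the two measures (an added illustration; the data are invented):

        # Mean (absolute) deviation vs. standard deviation on invented data.
        import numpy as np

        x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

        mean_dev = np.mean(np.abs(x - x.mean()))                        # sum of |x - mean| / N
        sd_pop   = np.sqrt(np.mean((x - x.mean()) ** 2))                # divide by N
        sd_samp  = np.sqrt(np.sum((x - x.mean()) ** 2) / (len(x) - 1))  # divide by N - 1

        print(mean_dev, sd_pop, sd_samp)                                # 1.5, 2.0, ~2.14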
  20. Oooh... good question! I think a patient (prisoner or otherwise) needs to be able to trust their doctor. For example, a prisoner who may have taken an overdose is less likely to call for a doctor, and more likely to try to 'ride it out', if he/she knows the doctor will report it. This is dangerous. I think drug testing (which should happen) and medical consultation should be kept separate.