
Dak


Everything posted by Dak

  1. I don't really see why they should. The glitch only affects a small number of sites and, in any case, whilst the reaction by the firewall is a glitch, Craigslist could always stop sending out 0-byte windows. I have no idea what they are, but I gather that it's quite uncommon to send them outside of server overload. The problem seems to be the result of a glitch in Authentium's software combined with an odd networking practice by Craigslist. Note that, even were there laws enforcing network neutrality, this situation would (likely) not be covered by them, as it is a security application preventing access to a site, not the ISP.
  2. Authentium, however, are not the same company as Cox. It's a bit of a stretch to assume that Cox asked Authentium to build in, and drag their feet over fixing, a glitch that would choke off or prevent access to one website for any of Cox's subscribers who happened to use the suite. Not to mention that Authentium presumably have other, non-Cox clients who would suffer from the same glitch, thus harming Authentium's reputation. I think Cox deserve the benefit of the doubt in this instance.
  3. I have absolutely no idea, but it seems as if it's a decision made by Craigslist, and nothing to do with Cox. In the words of their product manager, "it's a glitch". Well, considering the blog I linked to is that of a reputable anti-spyware company, I'm presuming that they checked that the post they linked to was actually written by the product manager of Authentium (from whom Cox licence their security suite), so I'd say it was.
  4. Possibly not.
  5. Question: why is this always put forth as an American issue? Does the UK already have network-neutrality laws, or are only US companies deciding to abandon network neutrality? Either way, I suppose we Brits could hassle our MPs to generate a request from the UK to the US not to allow screwing with the interweb, which we also have to use, like this.
  6. I'm not sure how the American system works, but in the UK I'd imagine you'd need some kind of biophysical degree, like a BSc in biophysics, biology with physics, biochemistry (maybe), etc. For a PhD, I'd imagine you could get one in biophysics/nanotech, but bear in mind that you'd specialise in one very narrow area of nanotech.
  7. To be honest, that's a pretty good tactic. My vote goes to spidey.
  8. Heh, probably the best one of those that I've seen. The chemical processes that translate different wavelengths of light into electrochemical signals that our brain can interpret take a slight time to reset. Meaning that if you, say, stare at blue for a while and then look at white, your blue-receptor chemicals will be slightly depleted, so the green and red light will be detected better than the blue, thus over-representing the levels of green and red and making them look yellow.

      colour    light levels    detection     detected levels    colour seen
      white     100 green       OK            100 green          yellow
                100 red         OK            100 red
                100 blue        suppressed    50 blue

      As for why it happens so fast in this case, or why it persists so long... no idea.
  9. Bit rude to charge for the privilege of beta testing. I don't think you need a registration key, but it does expire in June 2007, and you can't upgrade it past RC1. As if people aren't going to be able to hack the expiry out. 'Cos Microsoft are so good at stopping people bypassing their licensing checks...
  10. Soz, post went through twice.
  11. Really quick question, but the square root of [imath]y = x^2[/imath] is x or -x; but what about [imath]\sqrt{x^2}[/imath]? Would that also be x or -x, or would it just be x? I.e., is it possible to cancel out the square root and the square against each other, leaving an unmodified x, or would it still transform x into [imath]\pm x[/imath]?
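      As a quick worked illustration of what I'm asking (taking a concrete value, [imath]x = -3[/imath]): [imath]x^2 = (-3)^2 = 9[/imath] and [imath]\sqrt{9} = 3[/imath], which is [imath]-x[/imath] rather than [imath]x[/imath], so it looks like the two don't simply cancel when x is negative.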
  12. The art has progressed a bit in the last ten years. It sounds like you're describing restriction fragment length polymorphism (RFLP) profiling. Short tandem repeat (STR) profiling is used now: it takes about 3 hours, I believe (never actually stood by the machine as it was doing its stuff), and is much more specific to an individual (the FBI use 1 in 260 billion as a cut-off point -- if a coincidental match is less likely than that, they consider the DNA to be a match). Not sure about a fingerprint ever taking a week to make?
  13. The number is artificially low, to make the example easier. Usually the number is at least one in a couple of hundred million; it usually goes up to 1 in several billion, and even hits the trillions quite often. You're only 50% similar to either parent, and to your siblings; it's only a problem in the case of identical twins. Extremely infrequent alleles are given an artificially high prevalence (e.g. if an allele's frequency is <0.001, I believe it is taken to have a(n artificially raised) frequency of 0.001), specifically to prevent people from families/locations where the allele is present from having unrepresentatively 'improbable' profiles. I kinda agree with your concern about the potential for error, though. Tbh, I don't see why a biostatistician doesn't just formulate a calculation that takes all of this into account, and have someone whack a program together that gives a nice probabilistic weighting of the significance of the profile (afaik this is not the case) -- something like the sketch below. It seems a lot easier and less prone to error than leaving the stats up to biologists and lawyers, neither of whom are guaranteed to be any good at stats.
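      Just to illustrate the sort of thing I mean (a minimal sketch only; the loci, allele frequencies and the 0.001 floor below are invented illustrative values, not real forensic data):

[code]
# Sketch: random-match probability under the product rule,
# with a minimum-allele-frequency floor for rare alleles.
# All numbers are invented for illustration.

MIN_FREQ = 0.001  # floor applied to very rare alleles

def genotype_freq(p, q, heterozygous):
    """Hardy-Weinberg genotype frequency at one locus."""
    p = max(p, MIN_FREQ)
    q = max(q, MIN_FREQ)
    return 2 * p * q if heterozygous else p * p

def match_probability(loci):
    """Product of the per-locus genotype frequencies."""
    prob = 1.0
    for p, q, het in loci:
        prob *= genotype_freq(p, q, het)
    return prob

# three hypothetical STR loci: (allele freq 1, allele freq 2, heterozygous?)
profile = [(0.10, 0.20, True), (0.05, 0.05, False), (0.0005, 0.30, True)]

p = match_probability(profile)
print(f"random match probability: 1 in {1 / p:,.0f}")
[/code]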
  14. I'd have thought that, given the population distribution, the probability of the others' being in the area would be so low as to make the assumption that they were there unjustified. No wait, I see your point: the knowledge that the guy was in the area is based on other evidence, outside of the DNA profile, to prove that he was in the area, whereupon we become justified in assuming that the others probably weren't (without the other evidence, we must also assume that he probably wasn't). Hmm... how about this: the chance of a coincidental match is 1/100, and there are 1,000 people in the UK (just to make it easier), so about 10 people would match. Therefore, without considering any other evidence and based only on the DNA profile, there is a 1/10 chance that the suspect is guilty. Could we then stack other evidence on top of that, use the 1/10 figure as a prior probability of guilt, and apply Bayes' theorem to factor in the other evidence (if it renders nicely to stats)? And keep updating the probability by running it through Bayes' theorem until all the evidence had modified it, giving a final posterior probability? (See the sketch below.) Oh, and in case anyone is now in fear of our legal system, don't worry: most forensic scientists are taught to steer clear of all but the simplest statistical analysis unless they are specifically trained. We hire statisticians to do this for us when necessary.
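      Something like this is what I have in mind (a minimal sketch; the likelihood ratios for the 'other evidence' are invented purely for illustration):

[code]
# Sketch: sequential Bayesian updating of a probability of guilt.
# Start from the DNA-based prior, then fold in each further piece
# of evidence as a likelihood ratio:
#   LR = P(evidence | guilty) / P(evidence | innocent)
# All numbers are invented for illustration.

def update(prior, likelihood_ratio):
    """One pass of Bayes' theorem, done in odds form."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prob = 1 / 10  # prior: 1000 people, 1/100 match rate => ~10 matching people

# hypothetical likelihood ratios for other pieces of evidence
evidence_lrs = [5.0,   # e.g. witness places suspect near the scene
                2.0,   # e.g. no alibi
                0.5]   # e.g. something mildly exculpatory

for lr in evidence_lrs:
    prob = update(prob, lr)
    print(f"after LR {lr:>3}: P(guilt) = {prob:.3f}")
[/code]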
  15. That would be the defendant's fallacy. The reason it's invalid is that it assumes each person with that DNA profile was in the area, and thus capable of leaving their DNA, when in actual fact it's very unlikely that the other 64 were anywhere near the crime scene.
  16. Yea, but how would I work out P(A)? I can't take other evidence into account, otherwise it becomes tautologous: 'assuming the guy is probably guilty, then this is probably his DNA, indicating that he's probably guilty'. I can't take on the prosecutor's or the defendant's view, otherwise P(A) would have to be set at either 0 or 1; and if I take the view that I strictly speaking should (i.e., unbiased -- no prior assumptions), then both P(A) and P(B) are going to be 0.5, and, given that P(C|A) is always going to be 1, it becomes 1/(1 + P(C|B)), which seems to return awfully high results regardless of P(C|B). Can I not, then, use Bayes' theorem in this case?
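      To spell out where the 1/(1 + P(C|B)) comes from (just me expanding the theorem with the flat prior, in case it helps):

      [imath]P(A|C) = \frac{P(C|A)P(A)}{P(C|A)P(A) + P(C|B)P(B)} = \frac{1 \times 0.5}{1 \times 0.5 + P(C|B) \times 0.5} = \frac{1}{1 + P(C|B)}[/imath]

      so even with P(C|B) = 0.01, that gives 1/1.01 ≈ 0.99.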
  17. Hmm... actually, I didn't go to that lecture, and the notes that I picked up off of someone else are a tad confusingly worded, so maybe you're right. How would one deal with that? Could Bayes' theorem not be applied if we can't estimate a prior P(A)?
  18. I think I noticed that and edited at about the same time you noticed it. And it's a necessary assumption in forensics (according to my lecture notes... I found a bit that briefly touches on Bayes' theorem, though it's not much help).
  19. P(A|C) = (1 × 0.5)/((1 × 0.5) + (0.01 × 0.5)) ≈ 0.99. Umm... that's just the same as saying that if there's only a 1/100 chance of someone else having the profile, there's a 99/100 chance of him being guilty, which is pretty much the prosecutor's fallacy?
  20. Cheers. Would this be right? Note that I have finally realised that my notation was wrong; henceforth, P(A|B) means the probability of A given B.

      A = suspect being at the scene
      C = DNA profile that matches the suspect's

      As forensic scientists are required to be unbiased, assume a prior P(A) of 0.5:

      P(A|C) = P(C|A)P(A)/P(C)
      P(A|C) = (1 × 0.5)/0.01 = 50

      ... umm... OK, that'd be 'no, Dak, that's not right', then. What'd I do wrong?
  21. Ah, I assumed that the 'prosecutor's fallacy' term was made up by my tutor, so I didn't bother googling. Cheers for the link; I kinda understand now. (And sorry for getting your name wrong earlier.) Tartaglia: what values were you using for P(A) and P(B)?
  22. @ Aeternus: Ah, I see. We're both taking 'random person' to refer to different things: you're taking it to refer to a non-guilty suspect; I'm taking it to refer to a non-suspect perpetrator. Focus on the suspect's DNA: if the suspect is actually guilty and actually left his own DNA at the scene, then the chance of the scene DNA matching the suspect's will be 1 ('cos it's his). If the suspect is innocent and someone else left the DNA at the scene, then that person will basically be a random person from the population, and the chance that their DNA will coincidentally match the suspect's is 1/100 (hence why the suspect can't be that random person -- if he was, he'd be the guilty one). Logically equivalent to what you said, but that's why I said he can't be the random person in my example.
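      Putting numbers on it (just restating the above as a likelihood ratio): [imath]\frac{P(\text{match}\,|\,\text{guilty})}{P(\text{match}\,|\,\text{innocent})} = \frac{1}{1/100} = 100[/imath], which is where the '100 times more likely' figure comes from.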
  23. nnnnnnnnnnnnnnnnnnnnnnooooooooooo...
  24. Indeed. Not quite sure what you mean, but: if some random person left the DNA at the scene of the crime, the chance of that DNA matching the suspect's is 1/100. In other words, if 1 in 100 people have DNA matching the suspect's, and one (random) person left DNA at the crime scene, there is a 1/100 chance that that DNA would match the suspect's. Which is why I'm not getting the 'we can't say that he's 100 times more likely to be the originator of the crime-scene DNA than not'.