Everything posted by Prometheus

  1. But in what way? The pertinent feature surely is that both the virtual space and the real lab present an agent with an objective and obstacles. We may in time also want agents that can formulate their own objectives within some constraints. In terms of creating an agent there is no difference between the real and the virtual world except the complexity of the former compared to the latter. This is what i meant with regard to them being of the same kind. For instance, Tesla's self-driving agents are trained in large part in virtual worlds. This is particularly helpful for edge cases - cows in the middle of roads and other bonkers stuff that happens so rarely in the real world that the agent struggles to learn from real examples, but often enough that it still needs to learn to deal with it.
  2. It's certainly true of current computers - or more generally AI. It'll be interesting to see how deep learning architectures progress. We have reinforcement learning agents that can learn to play one game, then do quite well on another game they have never played - the more similar the games the better they do, and current work tries to make the generalisations they draw as broad as possible so they can tackle more disparate tasks. This could be interpreted as learning from past experiences. The section indicates though that they are primarily interested in agents in a real lab setting - that might be a little further off, but it doesn't seem to be a different kind of task to playing games.
  3. That seems to be the common interpretation on this forum, but is it true? If so, you would have to exclude many belief systems that are generally called religions: Buddhism, Taoism, even some forms of Hinduism and Neo-Paganism i've come across. Not all even make ontological claims. Buddhism, for instance, describes the Ten Indeterminate questions, such as whether the universe is finite or infinite, or whether the soul and body are the same - these questions are considered irrelevant. Whether it's rational to believe in religion then may to a large extent depend on which religion (or set of beliefs within a religion). I would ask though why we value rationality so much. Many of the ethical decisions we make are not rational, or only rational once we make a value judgement of some sort. I do not believe our search for meaning, which is integral to being human, is ultimately one based in rationality.
  4. I can understand why he'd be upset, but so easily resorting to violence sets a dangerous precedent for a role model of his stature, especially given he's a professional comedian known for his charisma and so has more appropriate, and probably more effective, means of defending his wife's dignity. Apparently the stats on one-punch kills are pretty sparse; UK police, for instance, don't collect that data.
  5. This would be a perfect little project for someone new to coding to have a go at.
  6. Seems straightforward to write a program yourself to do this. Do you have any experience with coding?
  7. In 3 minutes of searching all i could find was related to vitamin K deficiency rather than excess supplementation. If there is any specific literature out there it's likely to be buried in the vast coagulopathy literature, requiring a formal systematic review to find. It's a very niche interest; it might be best to ask a haematologist who has an interest in dietary concerns. Good luck.
  8. I said: "Probably the biggest difference (between human and artificial intelligence) is that modern machine learning algorithms use back-propagation, whereas there is no such (known) mechanism in brains." Hopefully what others have said here and there has cleared up that confusion for you. Modern machine learning algorithms have a function which calculates some distance (in a mathematical sense, not physically) between a predicted outcome and an actual outcome. It's usually called a loss function (in reinforcement learning the related idea is a reward function). The aim of the algorithm is to minimise the distance between its predictions and the actual event - it is 'rewarded' for getting its predictions as close to the event as possible. As far as i can tell from your example of the alphabet, you are proposing some kind of weighted reward function, preferring earlier letters to later ones for some reason. In that case the AI would just learn to always predict As, or maybe Es, as these are very common letters in English and you've assigned them a high value (there's a short sketch of this at the end of this list). You should learn a little about the machine learning field: there are some excellent tutorials out there, but most assume some level of mathematics, for which there are also excellent tutorials.
  9. Probably the biggest difference is that modern machine learning algorithms use back-propagation, whereas there is no such (known) mechanism in brains.
  10. I'm afraid it only obfuscates. Try this: give yourself just one sentence in which to express the particular problem you want to investigate. The last sentence of your post seems close.
  11. It's not surprising, as they are assessing different things: 2 looked at warfarin for thromboembolism (surprised this was still being looked at in 2003), 1 at warfarin and risk of arthritis, and 2 at particular dosing regimens for peri-operative care in high-risk populations. You'll be hard pressed to find many such studies as it's deemed unethical to have no-treatment controls when there is a known viable treatment. To find them you'll have to look at some very old studies, maybe the clinical trial data submitted to the FDA in the first instance. But what do you actually want to know? This statement makes it seem like you are interested in the role of dietary vitamin K in DVT. If this is the case, pursuing the warfarin literature (though it might be where you got your initial idea, and it may have implications for it) is a red herring. Isolate the precise clinical question you want to answer, something like: does increased vitamin K increase the risk of DVT? If so, what is the likely causal mechanism? Restrict your question to as few variables as possible; once you have a handle on that, you might consider another variable and so on...
  12. People will not take warfarin for post-surgical emboli prophylaxis; a low-molecular-weight heparin would be used instead. If people were on warfarin pre-surgery they would be taken off it, and it would be reintroduced post-surgery. I'd be interested to see those RCTs you found that compare post-op use of warfarin vs untreated controls. Where are you searching, and what keywords are you using?
  13. The dosing schedule of warfarin is infamously tricky, requiring regular monitoring to ensure levels remain therapeutic without becoming toxic. As you note, warfarin is a vitamin K antagonist: thus the dose of warfarin is only one side of the equation - dietary intake is the other. For this reason it is usually recommended that people remain on a consistent diet, with the warfarin dose adjusted around it. But sometimes it's easier to recommend that people manage this by simply avoiding certain foods rich in vitamin K. As for evidence: i imagine you are using google, which will give plenty of results aimed at lay people. If you want the actual evidence base use something like google scholar. Here are the first two papers I came across on scholar: i've only read their abstracts, and i share them as an example of what you can find now you know where to look, rather than anything definitive. https://www.tandfonline.com/doi/full/10.1517/14740338.5.3.433?casa_token=G5_umicM8lsAAAAA%3AS4YanznlhhCWxtZrpraK0rxWPT9QyW5EF47eMO3JfpeKte2eekWsfSNqi6EZuxaSASqA41yrwd8 https://www.futuremedicine.com/doi/full/10.2217/pgs.11.184
  14. I understand your position: if various conditions are met (guarantees that no innocents will be tortured, that torture will work, that all other options have been exhausted and that even one-off torture won't give morally dubious individuals and regimes justification for torture) you would act in such and such a way. My position is that for any practical consideration you will never know any of these things. Further, I believe it impossible to consider any ethical problem outside these practical considerations. This discussion of an idealised scenario tells us nothing about how we would act in the real world, so any answer i give is irrelevant. My participation in this thread has just been to highlight some of those practicalities, as only one was stated in the OP (guaranteed guilt of the tortured). I understand many people disagree with me, and that seems to stem from a conception of ethics as something absolute that we discover rather than create, but do you at least understand my position?
  15. Yeah i saw that - blood from stone comes to mind. I was thinking of their medium-term energy plans - they've bet big on gas while transitioning to a lower carbon economy.
  16. False dichotomy (in the real world, haven't been following the latest unrealistic 'scenario'). Torture is only one interrogation technique. There may be more effective techniques: i know no one here is interested in evidence, but here is some that suggests alternatives to torture actually work better.
  17. You make your opinion sound like an established fact, which of course it is not. I can understand though, if you take this position, how you can consider an ethical problem as some pure abstraction and give 'definitive' answers. But, like @joigus suggested, many people do not think of ethics like that and we have reached a stalemate; nothing more can be said. Hard to debate anyone 100% certain of anything. By the same reasoning, i should go out and murder the next person i meet, because there is a vanishingly small probability they will be the next Hitler. Of course, that is stupid. I say this to highlight that having some idea of how likely torture is to work is important to the decision. If it's as likely as any random person you meet being the next Hitler, would you still do it?
  18. I saw a poll that suggested 1/3 of Ukrainians would violently resist Russian occupation. According to the US military, one counterinsurgent is required per 20 resisting civilians, meaning Russia would need ~325,000 soldiers in Ukraine. Even Russia cannot afford that level of attrition for any extended period of time. A much more likely goal would be to force the federalisation of Ukraine so as to be better able to influence the separate federal units. Any full-scale invasion would be an attempted lightning strike to take the capital, then sue for peace under these terms. However, this would cost Russia dearly, economically and politically (e.g. Finland and Sweden would be more likely to want to join NATO, and many ex-Soviet countries would become even more hostile to Russia). I wonder if Germany will reconsider its gas-hungry energy policy in the face of this crisis.
  19. I disagree that it's the same with ethics. We can prove the billionth digit of pi exists - by finding it. How would you prove the absolute rightness of some ethical conundrum with that same precision? Pi can be defined in terms that make no reference to humans, or any other agents. How do we define an ethical act without reference to humans (or some other agent)? To state that ethics has a definitive answer is a common position, especially amongst the religious, but it's not one we can prove either way. Although this may well be at the root of the different positions in this thread, maybe this point is going too far off topic and requires its own thread?
  20. I'd be happy with any attempt to quantify the efficacy of torture. I've never claimed torture is 100% ineffective, just that its efficacy needs to be considered for any practical discussion. If my lighter works 50% of the time, i'll still say it's working. But if my parachute works 50% of the time... i won't be saying anything before long. If people want a purely theoretical discussion, just say we assume torture is effective, in the same manner we have assumed in this thread that no innocent people are tortured. That's consistent with Plato's idea of ethics (apparently it's why he studied mathematics; he was looking for a source of absolute ethics). I see ethics as an entirely human creation. It manifests in the universe only in the relations of humans (so far as we know - perhaps other species have a primitive capacity). That should go some way to explaining why i 'refuse' to answer the question - i just don't see ethics as something that can exist in isolation like a Platonic ideal. So what if i answer yes or no - it's never going to happen; there will always be doubts about the efficacy of the torture and the possibility that you are torturing an innocent person. All the theoretically pure scenarios i explore will at best not change this, and at worst give me an inflated sense of my moral righteousness, and i probably have too much of that already.
  21. Be interesting to see whether this or the SLS makes it to orbit first, and which takes humans into space first.
  22. Don't forget nearly a third of faeces is intestinal bacteria.
  23. I agree pro rata is one metric. I think representation at senior levels is also a valid metric, insofar as it might indicate preferential treatment up the career ladder. Whether that is actually the case is worth investigating. That UK nursing review concluded with "The drivers for this are complex and further work is required..."
  24. I've never looked into the data myself, so i thought i'd delve a little into my own profession (i'm sure there are national and professional variations). At least for UK nursing, the evidence suggests that there is a gender pay gap, due to women becoming less represented in more senior roles and taking up more part-time work. Within pay scales, pro-rata pay was identical, as required by law. I can imagine this generalising to at least some other professions, teaching perhaps.
  25. That is covered in one of the psychology papers i linked to. Seems to boil down to belief perseverance.
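To illustrate the point in post 8 above, here is a minimal sketch in plain Python. The sentence, the letter weights and both prediction strategies are invented purely for illustration (they are not from the thread); the sketch just shows how a standard objective scores predictions against what actually happened, whereas a reward weighted towards earlier letters can be maximised by ignoring the data and always guessing the highest-valued letter.

```python
# Minimal sketch (hypothetical data): compare a standard accuracy-style objective
# with a letter-weighted "reward" that prefers earlier letters of the alphabet.
from collections import Counter

# Toy "actual outcomes": the letters of a short sentence.
actual = list("the quick brown fox jumps over the lazy dog".replace(" ", ""))

def accuracy_reward(predictions, actual):
    """Standard objective: fraction of predictions that match the actual letters."""
    return sum(p == a for p, a in zip(predictions, actual)) / len(actual)

def weighted_reward(predictions, weights):
    """Weighted scheme: score depends only on which letters were predicted,
    not on whether they match what actually occurred."""
    return sum(weights[p] for p in predictions) / len(predictions)

# Example weighting: 'a' scores 26 down to 'z' scoring 1 ("preferring earlier letters").
weights = {chr(ord('a') + i): 26 - i for i in range(26)}

# Strategy 1: always predict the most common letter in the data.
most_common = Counter(actual).most_common(1)[0][0]
data_driven = [most_common] * len(actual)

# Strategy 2: always predict 'a', the highest-weighted letter, regardless of the data.
always_a = ['a'] * len(actual)

print("accuracy reward:", accuracy_reward(data_driven, actual), accuracy_reward(always_a, actual))
print("weighted reward:", weighted_reward(data_driven, weights), weighted_reward(always_a, weights))
# Under the weighted scheme, 'always predict a' scores maximally even though it
# ignores the data entirely - the failure mode described in post 8.
```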