
bascule

Senior Members
  • Posts: 8390

Everything posted by bascule

  1. No? I was just responding to Pangloss. I'm sorry if my post was *boggle* offensively worded...
  2. That's quite odd... I'd think somewhere like the Wall Street Journal would be happy to run it... quite odd (and almost hilarious) that he'd just go straight to the Drudge Report.
  3. One thing's certain... it wouldn't be building lists every iteration solely for the purpose of iterating over them (only to have them garbage collected afterwards). I've wondered why Python does it that way, or if there's some magic when using ranges with for loops. When I talk about hotspotting I mean finding hotspots in your code, such as the one you pointed out in red. These are generally tight loops which, if compiled to C code, can live completely in your CPU's cache. There are lots of tools for speeding up hotspots, such as Pyrex, as you've mentioned. I'm not sure how complex your model is, but in ours we had tens of thousands of these sorts of loops scattered across dozens of modules. If you can avoid using pointers, C++ may not be a bad choice depending on the size of your model.
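For what it's worth, the list-building behavior being discussed is Python 2's range(); xrange() (which became Python 3's range) yields values lazily instead of materializing a list each time the loop runs. A minimal sketch in modern Python:

```python
import sys

# In Python 2, range(n) built a full list every time the loop ran;
# xrange(n) (now Python 3's range) is a constant-size lazy sequence.
eager = list(range(1_000_000))   # materializes every element up front
lazy = range(1_000_000)          # small object, values produced on demand

# The lazy range object stays tiny no matter how long the sequence is.
assert sys.getsizeof(lazy) < sys.getsizeof(eager)

# Both iterate identically in a hot loop.
assert sum(lazy) == sum(eager)
```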
  4. Did you pull that hypothesis out of your ass? What fish are you talking about? The Bannerfish? Do you have any evidence that any of the Jellyfish's predators are subject to overfishing? Here, read this: http://www.msnbc.msn.com/id/24987863/
  5. Yeah, I avoid network TV for a reason...
  6. Well, I'm about done arguing with Aardvark, as he only seems to want to deal in strawmen rather than arguments of substance. And I'm not sure how to argue about the U.S. oil futures market with someone who thinks it doesn't exist... Here's a rundown for the casually interested observer: Aardvark insists that speculators can only manipulate the markets through a CONSPIRACY!!! I insist this isn't the case. But even after I tried to make clear that this isn't a conspiracy but a collective effect, Aardvark once again goes for CONSPIRACY, strawmanning me in the process. First, I said nothing about supplies. And here's someone who agrees with me that the collective effect of speculation is driving up the price of oil in the U.S.: http://articles.latimes.com/2008/may/30/business/fi-oilprice30
  7. Bottom line: Aardvark seems to be insisting that unregulated overseas trading of U.S. oil commodities has no effect on the price internationally (and, as far as I can tell, domestically) and that requiring all U.S. commodities be traded on U.S.-regulated exchanges would have no effect on the price of oil. I disagree, and have cited sources which claim the contrary. For starters, Aardvark, do you think that requiring trades of U.S. oil commodities to occur on regulated exchanges would have an effect on the domestic price of oil in the U.S.? This is a fun juxtaposition: Ad hominem. You haven't said why Greenberger's argument is wrong; you're simply saying it's wrong because he lacks a "firm grasp on reality". That's pretty much bottom-of-the-barrel fallacious reasoning.
  8. America can also affect the price of oil by paying more or paying less, the point you seem to be missing. Pot... kettle... You know, it'd really help if you'd actually cite some sources to back up your points. So far you haven't done that whatsoever, and instead just slander my sources without actually rebutting them.
  9. *yawn* Changes in American policy resulted in a drop in the price of oil? SAY IT AIN'T SO.
  10. That's a terrible strawman. Can you please go back and reread my posts while employing a higher degree of reading comprehension, rather than making up fantasy viewpoints in your head then attributing them to me? Thanks. What if the remaining 3/4 of the world will pay $150 a barrel, and Americans pay $200? Wouldn't America drive the price of oil up? Now, what if the rest of the world begrudgingly pays a price America has inflated, and the price America is willing to pay goes down? You seem to have a hard time understanding this supply and demand stuff. The global price of oil and what America is willing to pay are highly interdependent. You seem to be treating them as two separate entities.
  11. I believe Britain will soon be unable to reprocess nuclear waste, as they're shutting down the only reprocessing facility in the country. I also believe France and Japan have much higher environmental standards as far as discharging nuclear isotopes into waste water from their reprocessing plants, and in that regard aren't criticized as much. All that said, Japan particularly is producing large amounts of plutonium, and there's not much that can be done with plutonium short of using it to make bombs.
  12. Manually hotspotting in C is certainly a technique I use, though I must warn you it's a particularly error-prone one. Interfacing between a language like Python and C is fraught with the potential for errors, even more so than a pure C program, as the toolchain available for the latter is substantially more comprehensive. Tracing down bugs in C extensions for dynamic languages is just asking for trouble, so be sure you're prepared before going down this road. The main argument against manual hotspotting, of course, is that languages with better compilers/runtimes can do this automatically without forcing the programmer to drop down to the C level and introduce hard-to-debug errors into their programs. In a CPU bound application, given the choice between automatic and manual hotspotting, it really seems silly to me to try to do it by hand. This will eat up considerable development cycles which are better spent elsewhere, such as getting the model correct to begin with. I'm curious whether you've looked at R. R is a language which has gained considerable popularity in the scientific computing community, and bears a number of uncanny similarities to Haskell (particularly lazy evaluation). However, this is certainly a fairly common complaint against Haskell. In that regard, I'd recommend OCaml. While it's generally slightly slower than Haskell, it incorporates a number of imperative idioms which make it easier for programmers familiar with imperative languages to transition; and in many cases OCaml will come out on top in regard to performance. If you're looking for a flexible language, Python would be one of my last choices, although I agree with your point: Haskell is an even more inflexible language than Python. Python is a language which has been designed with a single idiomatic style in mind, and in that regard flexibility has been completely tossed out the window.
In areas where there are multiple approaches to solving the same problem, Guido, the language's creator, has been fairly vocal about doing away with redundant approaches and picking a single idiomatic style. In that regard, the language is extremely inflexible. While Python is a language which originally began with quite a bit of functional ornamentation, the push by Guido lately has been to eliminate it. Furthermore, in scientific computing, using Python's functional style may gain you programs which more closely resemble the underlying mathematics of the science they're expressing, but at a cost in performance. Your best bet for high performance Python, short of reimplementing your hotspots in C, is to write your code in the most imperative manner possible. That said, by using a language like OCaml (or Haskell) you can retain the clarity of a functional approach while actually gaining performance. Again, the performance of OCaml or Haskell is literally an order of magnitude above strictly imperative, performance-oriented Python (2X slower than C vs 20X slower than C). Functional Python is generally going to perform much worse than 20X slower than C. This is certainly a valid criticism, and perhaps the main reason why it's probably not pragmatic to push functional languages within a scientific computing environment. It would require scientists to learn a new set of idioms; idioms which, had they been exposed to them from the start, would probably have saved everyone a lot of time and grief. But sadly that's not the case, and moving to a functional approach would cause more harm than good, at least in the short term. This is a transition I have only seen in the financial sector. Languages like OCaml (for analysis) and Erlang (for messaging) have seen considerable use on Wall Street (Jane Street being one of the biggest success stories).
I think, perhaps, the financial sector is better at doing a cost-benefit analysis of switching to a functional environment, and has determined the long term benefits outweigh the short term costs of transitioning. All this said, don't get me wrong: modern dynamic scripting languages like Python are by far my favorite language family. However, in my present job I don't typically deal with primarily CPU bound problems the way I did when I was in scientific computing. Our problems are largely I/O or database bound, and in that regard a dynamic scripting language is wonderful. I also have great hope that better compiler algorithms can speed up dynamic language execution substantially. Static type inferencing could bring a language like Python to the same levels of performance as languages like Haskell, OCaml, or Java. Check out Starkiller, a static type inferencing engine for Python: http://web.mit.edu/msalib/www/urop/
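As a small illustration of the style trade-off described above — the same hypothetical hotspot (a sum of squared differences, invented for this sketch, not code from any model discussed here) written functionally and imperatively. In CPython the functional version pays per-element function-call and tuple-allocation overhead, which is why imperative style is usually the faster pure-Python form:

```python
def ssd_functional(xs, ys):
    # Functional style: reads closer to the math, but each element goes
    # through a lambda call and a zip-produced tuple in CPython.
    return sum(map(lambda p: (p[0] - p[1]) ** 2, zip(xs, ys)))

def ssd_imperative(xs, ys):
    # Imperative style: the form CPython typically executes fastest
    # short of dropping to a C extension.
    total = 0.0
    for i in range(len(xs)):
        d = xs[i] - ys[i]
        total += d * d
    return total

xs = [float(i) for i in range(1000)]
ys = [float(2 * i) for i in range(1000)]

# Both styles compute the same result; only the cost differs.
assert ssd_functional(xs, ys) == ssd_imperative(xs, ys)
```

Timing the two with the timeit module on your own data is the honest way to see the gap; the exact ratio depends on the interpreter version.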
  13. Britain's the main one I can speak of... under intense pressure from hippies like Kraftwerk and U2, they're closing their Sellafield II reprocessing facility.
  14. While I'd be fine with a nuclear power plant in my backyard, my statements were specifically about shipments of nuclear waste going by my house, having seen this: http://www.youtube.com/watch?v=-o8haMIVcL8 If the shipping casks can survive an impact from a ROCKET POWERED LOCOMOTIVE travelling at 84 MPH and a 90 minute fire, I feel fairly confident the risks of spilling nuclear waste in transport are fairly minimal.
  15. Well, that belies the fact that America is driving up the "globally set oil price"... when 1 out of every 4 barrels of oil goes to America, the amount that Americans are willing to pay has a substantial effect on the price. How is it a conspiracy any more than rampant, unregulated speculation?
  16. "Why ask why? Try Bud Dry." By the time I was old enough to even try Bud Dry, the product was already long defunct, and really, I don't think I'd want to even try it.
  17. Nuclear reprocessing results in the discharge of certain radioactive isotopes into the environment (in water), namely Technetium-99 and Krypton-85.
  18. Well, that said, this seems like a major victory for Obama. Here we have the leader of Iraq calling him out by name, saying his plan gets a seal of approval. How's that for some foreign policy cred in the main area the current administration has failed miserably?
  19. http://www.spiegel.de/international/world/0,1518,566841,00.html Mission accomplished? Time to go home? Well, McCain wants to stay there for another 100 years... Obama's approach seems to be making a lot more sense.
  20. My apologies... I meant to say recession
  21. This isn't an either/or situation. CPython is incredibly slow... roughly some 20X slower than C. Even Psyco only gets you down to about 6X slower than C, and it's not exactly ready for primetime. This is just the nature of dynamic languages... they're incredibly hard to optimize for performance. Functional languages like Haskell and OCaml are arguably higher level than Python, and also faster... both exist in the ~2X slower than C range, more or less on par with Java. Haskell and OCaml both employ static typing with type inference, a happy medium between the ease of use of dynamic languages and the performance of static ones. So you can have your cake and eat it too...
  22. I'm having trouble deciding if that's a strawman, a slippery-slope argument, or both...
  23. I've got to say, I'm really disappointed with Obama's position on nuclear power, and I don't think it's a terribly informed one. Obama has cited, among other things, concern over the transportation of nuclear waste as one of his primary reasons for opposing nuclear power. I hope I'm not overly influenced by Penn & Teller's BULLS@!T here, but they have video footage of a DOE test where they crashed a ROCKET PROPELLED LOCOMOTIVE into one of the containers they use for the transportation of nuclear waste, then let it burn for a few hours in a fire. This was supposed to represent the sort of catastrophic accident where we'd expect nuclear waste to be spilled. The container's integrity was uncompromised. Engineering actually works. I would have no qualms with big trucks full of nuclear waste driving by my house on a daily basis.
  24. Certainly sounds like something's broken... possibly one of the fans isn't working
  25. I worked on two different projects when I was employed in scientific computing. The main project (which was actually a set of several different projects we were trying to hook together) was primarily Fortran 90. The other project was written in C++. Both projects were fairly nightmarish from a maintenance standpoint. With the Fortran project I found the main intractable problems stemmed from the compilers themselves. The compilers had odd internal limits on array sizes and would often produce invalid code or crash (at compile time) if these limits were exceeded. We eventually moved to better compilers, which was a rather painful process as it required much revamping of the build toolchain. C++ was similarly nightmarish due to its weirdness. I remember spending an entire week helping one of our programmers work his way through a template bug in his code. Tracing the problem was extraordinarily difficult as the error was occurring in the STL after going through layer upon layer of the modeling framework. I'm incredibly surprised Java hasn't become more popular for scientific computing. While I typically stick my nose up at Java, it does represent a massive improvement over C, C++, and Fortran in terms of maintainability and ease-of-use, while sacrificing relatively little speed. Java typically runs about half as fast as equivalent C code. Running at half the speed of C may seem like a big deal until you realize that the speed at which the model runs is irrelevant until you actually get it running. I found with our projects that large development delays cost substantially more time than the speed of the models. Perhaps the main difference is that these were atmospheric models and were constantly being added to. All that said, I think it's really terrible how a language as bad as Fortran has a virtual stranglehold on the scientific community, with C++ being a close runner-up.
One of the things I learned rather quickly was that scientists (at least the ones I worked with) are not programmers; they only learned programming to accomplish the science they were interested in. This meant that mundane issues like stack overflows, compiler limits/bugs, and linking errors were often intractable problems to them, problems which got in the way of science. These are the sort of problems I ended up dealing with on a regular basis, and the ones which made me long for a higher level language. If I were starting a brand new scientific computing project today, with scientists who had never programmed before, I would absolutely pick a functional language. Functional languages like Haskell and OCaml are very fast, certainly on par with Java. But the real win, in my mind, is that I think these languages do a much better job of matching the mental model of scientists than imperative languages do. Functions written in these languages much more closely match the mathematical descriptions of the same systems on paper. There's less mental translation a scientist must do to express in a computer what they can already express on paper. Furthermore, the issues of distribution and concurrency (typically solved with pathetically low level packages like MPI in Fortran/C programs) are solved at a substantially higher level in these languages. MPI was an enormous source of issues at my job... we tried many different MPI libraries, such as mpich and lam-mpi, and all of them had nasty issues; not to mention that if any one node in our cluster crashed, it took down the entire model. This is a terrible problem for any distributed computing system, as the harsh reality of the situation is that computers break all the time, and the more computers you have in your cluster, the greater the chance of a given one breaking.
Distributed programs like scientific models should really be fault-tolerant, as otherwise you'll find yourself wasting enormous amounts of time rerunning your model after each and every hardware failure.
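A minimal sketch of what task-level fault tolerance could look like: rerun only the failed chunk of work instead of the whole model. The chunk function and the simulated one-off "node failure" below are invented for illustration; real cluster infrastructure would sit behind them.

```python
def run_chunk_with_retries(chunk_fn, chunk, attempts=3):
    # Retry a single chunk of the model on failure, rather than
    # letting one dead node abort the entire run.
    for attempt in range(attempts):
        try:
            return chunk_fn(chunk)
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # persistent failure: surface it to the operator

calls = []

def flaky_chunk(x):
    # Hypothetical model chunk; simulate a transient node crash
    # on the first attempt at chunk 3 only.
    calls.append(x)
    if x == 3 and calls.count(3) == 1:
        raise RuntimeError("simulated node failure")
    return x * x

results = [run_chunk_with_retries(flaky_chunk, x) for x in range(8)]

# All chunks complete; only the failed chunk was rerun.
assert results == [x * x for x in range(8)]
assert calls.count(3) == 2
```

The same idea scales up in systems built for it — Erlang's supervision trees, mentioned above, are essentially this pattern applied at the process level.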