Enthalpy

Everything posted by Enthalpy

  1. Thanks for your interest! The thrust can be oriented fully independently of the Sun's direction by using secondary mirrors, which will probably be present for optical reasons anyway. I suggested a setup there http://www.scienceforums.net/topic/76627-solar-thermal-rocket/#entry752817 but it merely shows that thrust vectoring is possible and rather easy; an optics designer would improve on it. I have of course nothing against fibres! Just that, last time I threw thoughts at them, what is and isn't possible was unobvious. I suppose they won't replace the concentrator. Also, the vast collecting area is a design constraint, and I wouldn't like to waste even 30% of it, a usual filling factor for fibre bundles.
- It's easier at Mercury, sure...
- The very efficient conversion of Solar power into kinetic energy is a huge advantage of the Solar thermal engine. Ion engines, for instance, would outperform its ejection speed and thus save propellant mass, but with 30% light-to-electricity conversion, and only a little of that converted into kinetic energy, their Solar panels become really impractical for a significant thrust. A nuclear reactor making electricity would require a bigger radiator than the Sunlight concentrators here, while a reactor heating hydrogen wouldn't reach the 2800K for hydrogen dissociation that enables the 1267s isp. In the manned Mars mission script, I need something like 100N of thrust, and the concentrator area is still feasible (...not small) for the Solar thermal engine. My design for the heater needs no transparent materials: it's just tungsten that absorbs light on the vacuum side and heats hydrogen on the other (admittedly, light at limited incidence and gas at fins) http://www.scienceforums.net/topic/76627-solar-thermal-rocket/#entry753432 "Focus" there designates an immaterial location, the entrance of a hole full of vacuum. Tungsten would be exotic in a car or a PC; for a launcher or a satellite, tungsten and its machining are less exotic than niobium alloys, for instance.
ESA once tried to design such an engine, but theirs had a window and an exchanger made of a rotating bed of ceramic pellets. These brought serious feasibility worries and performance limits (isp=800s) to their design. By removing these difficulties, my design looks feasible, and even rather easy. I believe it can become a big thing in space transport, not just to Mercury, Mars or Europa, but also to the geosynchronous orbit. The ruminator and some regenerator details are nice features. As usual, I didn't check whether some Sapiens had invented them before. What I'm less pleased with are the concentrators. Made of electrolytic nickel (plus coatings) like satellite dish antennas, they should weigh about 1kg/m²: less would be useful. Still, I wouldn't like to widen them beyond the fairing's diameter: D=4.57m is easy to store and deploy even in big numbers, and the engines can be tested on Earth.
  2. Low-level programming and hardware knowledge are seldom needed. They do matter for video cards, which serve video games and scientific computing, both on supercomputers and in PCs. Interface libraries make their use less low-level now, but knowing the underlying hardware must still help a lot. Please don't get too specialized: it's quite possible that supercomputers won't use video cards any more in 5 years. Video games will for the foreseeable future, and mathematics is here to stay. Chip design is the main area where people are interested in hardware; however, chip design is incredibly automated these days. I guess a few dozen people worldwide implement maths in hardware and software libraries, and it's also an extremely cyclic activity; I can't recommend a career there. Cryptography is (fun) math, and its implementation must be very aware of the hardware.
  3. 1a - The house of Ampère at Poleymieux-au-Mont-d'Or, now a museum of electricity, displays an X-ray generator whose voltage was stabilized by a parallel arc. The operator adjusted the arc length, about half a metre, with a screw to set the X-ray generator's voltage. As you guessed, this must have been uncomfortable: scary noise, electrocution risk, X-ray production right there. We're speaking of 100kV+.
1b - Capacitive coupling would be possible. It's more sensitive to nearby conductors than magnetic coupling; a limb, for instance, must disturb it.
2 - I made the RFID in 1989. Induction supplies power to the chips on the tag or card. Charging GSM phones could have been done then, if only GSM had existed. And ol' Nikola had done it long before.
3 - The range equals essentially the diameter of the biggest coil, usually the base station's. Then you must also decide how strong a field you accept in your home, and how much power you want to waste. A near field doesn't radiate power and ideally takes only the power delivered to the receiver, but in real life you'll use copper to create the field, and its resistivity creates losses.
3b - In 1989, only the electric field was limited by law, and science considered that electromagnetic fields had only thermal effects. Also, an RFID card typically serves for a few seconds a day. I wouldn't cover a house nor a city with permanent strong magnetic fields, as an elementary precaution, and because science and law can evolve.
3c - The induction created by the reader, and by the card itself, makes electronics go crazy. It's about 0.2V/cm² and is a difficulty when designing an RFID card. Other electronic devices work only because they're far from the RFID, which should hence be small.
3d - In the RFID demo, I wasted 7W to provide 100mW to the card.
3e - Chargers for hand-held phones are hopefully more efficient; this requires big coils close to one another.
3f - I wouldn't like to waste even 20% when charging a car. Efficient coupling through 2m² over 0.5m isn't obvious. If a connector saves 5% or even 2% of the power over the induction method, the connector likely wins.
4 - Only changing magnetic fields can provide power. Earth's field changes too little for that.
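Since the post argues with figures, here is a minimal check of the two quoted numbers (7 W in for 100 mW out, and the 20% vs 2% loss comparison); the 50 kWh battery size is my own illustrative assumption, not from the post:

```python
# Quick check of the efficiency figures quoted above (values from the post).
def efficiency(p_delivered_w, p_input_w):
    """Fraction of the input power that actually reaches the receiver."""
    return p_delivered_w / p_input_w

rfid = efficiency(0.100, 7.0)  # 100 mW delivered from 7 W input
print(f"RFID demo efficiency: {rfid:.1%}")  # about 1.4%

# Charging a car: if induction wastes 20% where a connector wastes 2%,
# the extra loss for an (assumed) 50 kWh charge is:
extra_kwh = 50 * (0.20 - 0.02)
print(f"Extra loss per 50 kWh charge: {extra_kwh:.0f} kWh")  # 9 kWh
```

Even a modest per-charge difference adds up, which is the post's point about the connector likely winning.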
  4. The software I suggested in message #5 would also be interesting for developers to include in their installers or executables. Examples: for Windows, MS recommends that applications bring all the necessary runtimes, but this isn't always done, since for instance DotNet is large. Typically, applications fail silently if such a runtime is missing, exactly what isn't desired. A standard checking routine included in the installer, embedding knowledge of the runtimes available when the application was compiled, could tell the user "this application will run once you add the CPP 2010 redistributable to your machine". Determining what runtimes a new application requires demands broad knowledge; the checking routine would embed the best available knowledge in each application and lighten the developer's task. More and more (free) applications are made portable, that is, the set of executable files can run on a machine without installation. For such executables especially, a clever checking routine included in the executable would be useful, since no installer checks the available modules. Consider in particular all the interface libraries that let Linux programmes run on Windows. Sometimes users put compatibility add-ons next to the executables; then the library functions can be available, but not in the expected file. A compatibility checker clever enough for that case would be nice. I'd like to insist on synthetic answers from this software or routine: "Please add Dotnet 3.5sp1" or "This application won't run on 98se, it demands Xp sp3" is exploitable by the user; "entry point gkylnsjI_uc is missing in cktbnf32.dll" is not. Marc Schaefer, aka Enthalpy
  5. Fun: I used to play an instrument tuned as 3/2. Good that you told us the application; I had nearly thought you worked on the discrete logarithm.
  6. One other transport candidate to Mercury is my Solar thermal engine http://www.scienceforums.net/topic/76627-solar-thermal-rocket/ which gets a decent mass fraction there within a few months, instead of the many years of Venus and Earth flybys done presently with chemical engines. Though, I found it difficult to bring samples back from Mercury with direct flights there and back; that would better use preset hardware. The big improvement should be a single Venus flyby combined with the Solar thermal engine. One single slingshot adds few constraints on the flight opportunities and should enable the two-way trip, as not much is missing. One paper about slingshots, with ready-to-use formulas on pages 8 and 9, thank you so much: http://maths.dur.ac.uk/~dma0rcj/Psling/sling.pdf I'm advancing other topics first and must then get familiar with the slingshot, so if I come back to it for Mercury, it won't be immediately.
  7. Aluminium is not commonly used because its third oxidation step gives a poor voltage. Lithium is an excellent metal for batteries, despite being monovalent, because its only oxidation step gives a big voltage. Zinc is a good compromise. Then you have other worries, like the oxide layer, the solubility of the salts... So don't be too disappointed if people don't jump on aluminium for batteries: all metals were checked long ago. Some development is under way at a Japanese university on sodium, hoping not that it's more energetic than lithium, but that it's more available and cheaper.
  8. Did it take nine years to synthesize? Edit: Welcome here, MPSchofield!
  9. And how embarrassing is it? Was a detectable signal reasonably expected at the achieved sensitivity, or can the experimenters still answer that the predicted signals are below the noise?
  10. Engineering is all about figures. How much propellant, how big the geocruiser?
  11. I'm not convinced that this is orthodox physics. Apparently one optics book claims that, and it has influenced many readers; maybe the number of followers suffices to qualify the claim as "mainstream", but not as "orthodox" or "true". It must be the same bizarre book that claims that photons are absorbed and re-emitted by atoms "with a delay", explaining thus the refraction index. Please bear in mind that not everyone agrees.
  12. Aren't there two different notions in this thread? - The rest energy, for instance of electromagnetic waves, which is observed in the Casimir effect - The dark energy, for which Voyager (or Pioneer maybe?) put an upper bound
  13. The eigenfunctions ("proper functions") of the Hamiltonian are the ones with well-defined energy: if you measure their energy, you always get that one value. The wave can differ from the Hamiltonian's eigenfunctions; then an energy measurement will give sometimes one value, sometimes other values. Once you've measured the energy, you've forced the particle to decide what energy it had: the one you measured. From then on, it is an eigenfunction of the Hamiltonian - at least if your measurement is accurate enough. Some other operators are compatible with the Hamiltonian (they commute), and then you can measure the corresponding values together with the energy. Other operators and particle attributes are incompatible with the Hamiltonian (non-commuting); then you can't get definite measurements for both. Measuring the energy, hence freezing it, makes the other attribute uncertain; measuring, hence freezing, the other attribute makes the energy uncertain. The typical attribute incompatible with energy is time. The eigenfunctions of the Hamiltonian, whose energy is certain, are the "stationary" waves, which don't evolve over time. The wave has a factor exp(-i2pi*Et/h) that lets the phase rotate, but the rest, which gives the amplitude as a function of position, is independent of time. Because the amplitude is independent of time when the energy is certain, you can't measure "when" for a solution that stays stationary, the one whose energy is certain. If your experiment tells "when", then after the measurement the state of the particle is a mix of several states of certain energy (or a mix of other states; the ones of certain energy are just one possible choice), the mix that lets the particle react at the observed time. After that measurement, an energy measurement is uncertain, because the state itself is a mix. Incompatible attributes often relate to one another by a Fourier transform: energy and time, position and momentum... but not always.
Take a 2p orbital, here with a peacock shape, the leftmost one http://winter.group.shef.ac.uk/orbitron/AOs/2p/index.html Its orbital momentum, which is the number of turns of the phase in one geometric turn around the nucleus, is certain (here zero for the peacock 2p) around the z axis (for the left example). Then the orbital momentum is uncertain around the x and y axes: the phase passes from 0° to 180° in half a geometric turn, but it can do so clockwise or anticlockwise, hence the uncertainty. The peacock around z is a sum of two doughnut 2p around x, one clockwise and the other anticlockwise, and a measurement sees either one or the other. Particle spin along the axes is incompatible as well, but I can't make a picture of it; its compatibility works like the orbital momentum, fortunately.
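The stationary-state argument above can be written out explicitly; a minimal sketch in standard notation (using the usual sign convention for the phase factor):

```latex
% A state of certain energy E factorizes into a spatial part and a rotating phase:
\Psi(x,t) = \psi(x)\, e^{-2\pi i E t / h}
% so the probability density carries no time dependence at all:
|\Psi(x,t)|^2 = |\psi(x)|^2
```

Since |Ψ|² is independent of t, no observation on such a state can single out a "when", which is exactly the energy-time incompatibility described in the post.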
  14. But then one has to choose whether the cause is the theory, or its interpretation, or a model within this theory, or whether the experiment was badly set up... That's daily life. When the 3K background radiation was discovered, the antenna builders had no theory in mind; they just observed excess noise - and because their antenna was the most silent ever built, they had no guide to pinpoint the cause. Moreover, their noise temperature was maybe 30K (certainly a fantastic value for the time, still excellent today), so the excess was tiny, for a measurement difficult to conduct. Only years later did someone read about the excess noise, remember the little-known theory about the cosmological background, and suggest a relationship. You can imagine that fellow scientists didn't answer "of course" immediately and enthusiastically; it took more observations, both of the background and of related predictions, to convince people slowly. Same for Michelson-Morley: the first reactions were about the result's reliability (the experiment was extremely sensitive) and other possible causes - and then people tried to adjust existing models with ether dragging and so on. Or, more recently, the superluminal neutrinos: was the observation real? Completely new physics, small adaptations, or just a mistake? In the future, maybe string theory will be abandoned - not for hard evidence against it, but just because some predicted particles are not found. That won't be a proof - just a bad feeling. And imagine if detectors persist in not seeing gravitational waves: what next? So "proofs" are weak in physics, which is a stack of theories, models, experiments and tinkering. Real proofs exist in maths only. That's an excellent reason to want several experiments by independent teams to give the same result.
  15. A vector can represent a distance or a position, the latter relative to a reference. Usually we don't distinguish these vectors, but here we should. If you scale a size, you need no reference nor scaling center. But if you scale a position, you need the position's reference, or in the CAD, the scaling center. This difference exists in maths: http://en.wikipedia.org/wiki/Affine_space
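The distinction above is easy to show in code; a minimal 2D sketch (function names are illustrative, not from any CAD package):

```python
# Scaling a free vector needs no reference; scaling a position needs a center.

def scale_vector(v, s):
    """A displacement or size: scale componentwise, no center needed."""
    return (s * v[0], s * v[1])

def scale_position(p, s, center):
    """A position: translate to the center, scale, translate back."""
    return (center[0] + s * (p[0] - center[0]),
            center[1] + s * (p[1] - center[1]))

print(scale_vector((2, 0), 2))            # (4, 0): twice as long, no reference used
print(scale_position((3, 1), 2, (1, 1)))  # (5, 1): moved away from the chosen center
```

Changing the center changes the result for positions but never for free vectors, which is exactly the affine-space point.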
  16. Having carried both a stone of uranium ore and a part made of depleted uranium, I can tell you they're cold (and damned dense, for the metal). But 238Pu (which is not the isotope used for fission) glows red hot if insulated a bit; check the picture in the wiki about RTGs. Other nuclides, used for instance in medicine for imagery and therapy, are far more radioactive still, but never in big amounts. Mined for heat: no. U is sought as a fissile element, more precisely its 235U isotope; this has little to do with radioactivity. Other nuclides are much more radioactive, including natural ones, or fission waste, which is more radioactive than uranium but can't fission any more. As for plutonium, it's not mined (only traces of natural plutonium exist on Earth) but produced in uranium reactors from 238U. Exploiting heat from natural U or Th isn't economical, not even directly at the mine. It is being done with fission waste, but that's a serious risk for very little energy. Uranium will preempt oxygen and leave iron and nickel reduced. But does this suffice to leave the uranium in the mantle? U3O8 is a bit denser than both Fe and Ni, and UO2 much more, so uranium oxides can sink, and then decompose to metal and oxygen under heat. Is there any means to observe the composition of the core? I hope the coming neutrino observatories will detect radioactivity neutrinos as well, so they can locate the nuclides within the Earth. Herndon's explanation of the geomagnetic field is less useful now that laboratory experiments have reproduced the dynamo.
  17. Hello you all! Astronauts going to Mars shall be protected from radiation. Since exotic shields look inconvenient now, and unless this one brings the necessary improvement http://www.scienceforums.net/topic/80982-shield-astronauts/ Nasa seeks to accelerate the manned transfers between the two planets, something really hard with chemical propulsion. My Solar thermal engine enables that. http://www.scienceforums.net/topic/76627-solar-thermal-rocket/ For my Solar thermal engine, I use here Isp=1267s computed by tinkering with Propep. The heavy mission uses D=10m concentrators to let each engine push 5.3N near Mars and if possible 12.4N near Earth - the launcher's fairing must accommodate this size. Computed as described there http://www.scienceforums.net/topic/83284-non-hohmann-to-mars/#entry806564 the short trips are so demanding that, besides using the better engine, some hardware must be preset in Martian orbit, and aerobraking seems necessary; a short stay on Mars, around opposition, is ruled out by the short trips, so the crew will need to stay nearly two years there. Marc Schaefer, aka Enthalpy ----- Return leg, with aerobraking ----- Expectedly the most difficult, because the return vessel must be put at Mars first. The Earth reentry vessel dives at 17643m/s - heat shields have already worked at 30km/s. Due to Earth's curvature, a capsule would brake too brutally. The vessel is a glider, winged to achieve as much lift as drag or more, which uses downlift to stay for long at a good altitude. Remembering that 7910m/s balances 1G, 17643m/s needs 4G of downforce, which with hypersonic L/D=1 (the HL-20 achieves more) combines to 5.6G. http://www.usu.edu/mae/aerospace/publications/AIAA_2006_1033%20copy.pdf The glider can abandon the return-leg habitat before reentry, though landing the habitat would be interesting. For three astronauts, some souvenirs, the glider, and the habitat for 80 days with its life support, without propulsion nor propellants, I count 20t.
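The 4G and 5.6G figures follow from circular-motion balance: at 7910 m/s, gravity exactly provides the centripetal acceleration, so any excess speed must be held down by lift. A sketch of that arithmetic (only the 7910 and 17643 m/s figures come from the post; the drag model is the stated L/D=1 assumption):

```python
import math

V_CIRC = 7910.0  # m/s: speed at which 1 g exactly provides the centripetal force

def aerobrake_load(v, lift_over_drag=1.0):
    """Downlift (in g) needed to hold altitude at speed v, and the total
    load assuming drag = lift / (L/D)."""
    downlift_g = (v / V_CIRC) ** 2 - 1.0      # centripetal excess over gravity
    drag_g = downlift_g / lift_over_drag      # drag deceleration, same units
    total_g = math.hypot(downlift_g, drag_g)  # vector sum of lift and drag
    return downlift_g, total_g

down, total = aerobrake_load(17643.0)
print(f"downlift {down:.1f} g, total load {total:.1f} g")  # ~4.0 g and ~5.6 g
```

A higher L/D, as the post says the HL-20 achieves, reduces the drag term and hence the total load for the same downlift.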
Near Mars, the Solar thermal engine accelerates the vessel from 1323 to 7337m/s. It ejects 13t of hydrogen, starting at 34t. This phase lasts 10 days thanks to 44 concentrators and engines, and adds less than 5 days of travel. One RL10B oxygen+hydrogen engine starts from a 200km Martian orbit at 3454m/s, adding 1607m/s there to reach the 1323m/s above Mars' gravity. It burns 15t of propellants, so the vessel weighs 50t in Martian orbit, where it was put in advance. ----- Return leg, without aerobraking? ----- Braking to low Earth orbit by a Solar and then a chemical engine is unaffordable, like 250t starting near Mars. Having the Solar engine brake to a location far above Earth, after a chemical and then a Solar engine accelerate the vessel from low Martian orbit, needs almost 90t there, an awful lot. The least unreasonable: start from far above Mars, say a Lagrange point, then accelerate and brake with the Solar engine to a location far above Earth. This accepts almost 70t in Martian orbit, but the crew must first reach this point from the Martian surface, needing more propellants and time, and at Earth they waste time again to board the preset reentry capsule and dive to Earth. One good point: some hydrogen can shield the astronauts during the trip. ---------- I prefer the aerobraking option and evaluate the rest with the corresponding mass. I described elsewhere hydrogen and oxygen storage in tanks of metal+foam+MLI held by polymer straps. Marc Schaefer, aka Enthalpy
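The quoted masses can be cross-checked against the Tsiolkovsky rocket equation with the post's Isp=1267s; a minimal sketch:

```python
import math

G0 = 9.80665   # m/s^2, standard gravity
ISP = 1267.0   # s, from the post

def delta_v(m_start, m_end, isp=ISP):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_start / m_end)."""
    return isp * G0 * math.log(m_start / m_end)

# Solar stage near Mars: starts at 34 t and ejects 13 t of hydrogen (post figures).
dv = delta_v(34.0, 34.0 - 13.0)
print(f"delta-v = {dv:.0f} m/s")  # ~5990 m/s
```

That is close to the 7337 − 1323 = 6014 m/s speed change quoted, so the mass bookkeeping is consistent.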
  18. Take the logarithm of both sequences: now you have successive multiples of two irrational numbers, and the question amounts to: "are there two integers that multiply the two irrationals so that the multiples are close enough?" And my answer is: the bound equals zero, because these two integers can always be found, however small the allowed difference. The existence of these two integers is exactly equivalent to finding a ratio of integers as close as desired to the ratio of the two irrationals, which is nothing else than a real number - and we know that there are rational numbers arbitrarily close to any real number.
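The argument can be illustrated numerically; a sketch, assuming the two sequences are powers of 2 and powers of 3 (my choice of example): rationals m/n approximate log(3)/log(2) arbitrarily well, so |m·ln2 − n·ln3| can be made as small as desired, i.e. 2^m and 3^n come as close as we like on a logarithmic scale.

```python
import math

def best_pair(max_n):
    """Brute-force the pair (m, n), n <= max_n, minimizing |m*ln2 - n*ln3|."""
    ratio = math.log(3) / math.log(2)  # irrational, so the gap never hits 0
    best = None
    for n in range(1, max_n + 1):
        m = round(n * ratio)  # nearest integer multiple for this n
        gap = abs(m * math.log(2) - n * math.log(3))
        if best is None or gap < best[0]:
            best = (gap, m, n)
    return best

gap, m, n = best_pair(100)
print(f"2^{m} vs 3^{n}: |m ln2 - n ln3| = {gap:.5f}")
```

Raising max_n keeps shrinking the gap (the continued fraction of log2(3) supplies ever better pairs), which is the "bound equals zero" claim.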
  19. As Nasa calls for bids for a mission to Europa, Jupiter's moon thought to have an ocean of liquid water, I should like to recall the script I proposed in post #25 http://www.scienceforums.net/topic/76627-solar-thermal-rocket/page-2#entry769456 based on the Solar thermal engine whose current description begins with the present thread http://www.scienceforums.net/topic/76627-solar-thermal-rocket/ The existing script is absolutely straightforward and quick, directly from Earth to capture by Jupiter and then Europa - something enabled by the isp=1267s of the Solar thermal engine. Though, gravity assists would bring a probe bigger than 900kg to Europa: slingshots at Venus, possibly Earth and Ganymede. I won't explore this scenario soon.
  20. Hello hitchhikers and tinkerers! Nasa now wants short trips to and from Mars for a manned mission, to minimize the radiation dose on the crew, since some shields have proven impractical. Also, I had wanted to lower the trip's perihelion to stay only shortly on Mars, but with short leg durations this looks compromised, even with the Isp=1267s of my Solar thermal engine http://www.scienceforums.net/topic/76627-solar-thermal-rocket/ So here's my spreadsheet that estimates travel times and speed increments, without getting nearer to the Sun between the planets - this v5 lets you orient the departure from the planet, and is bidirectional. NonHoMarsRadiusSlice5.zip Some speed-increment examples for 80-day legs, with my Solar thermal engine in mind (but are 80 days necessary?): Add 7337m/s to low Mars orbit (120m/s over the optimum), with a 60° angle, and brake 17643m/s in Earth's atmosphere (saving 840m/s). Add 6346m/s to low Earth orbit, with a 33° angle, and brake 14084m/s in Mars' atmosphere. Between locations above the planets' gravity, for instance Lagrange points, add 10308m/s near Mars and 11575m/s near Earth. The departure angle can be tuned a lot, to spread the speed increment between Earth and Mars or to widen the launch window. When aerobraking, the smallest departure speed is favoured, but a little more eases the brutal aerobraking a lot. Marc Schaefer, aka Enthalpy
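The speed increments above combine the parking-orbit speed with a hyperbolic excess. A hedged sketch of the standard vis-viva bookkeeping behind such numbers (the 9 km/s excess speed and 200 km parking altitude are my illustrative assumptions, not taken from the spreadsheet):

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's gravitational parameter

def departure_dv(v_inf, r_park):
    """Delta-v from a circular parking orbit of radius r_park (m) to a
    hyperbolic escape with excess speed v_inf (m/s), patched-conics style."""
    v_circ = math.sqrt(MU_EARTH / r_park)
    v_esc = math.sqrt(2.0) * v_circ
    v_peri = math.sqrt(v_inf**2 + v_esc**2)  # speed at the hyperbola's periapsis
    return v_peri - v_circ

# Illustrative: a 200 km parking orbit and a fast-transfer excess of 9 km/s.
dv = departure_dv(v_inf=9000.0, r_park=6_578_000.0)
print(f"departure burn: {dv:.0f} m/s")
```

This shows why burns deep in a gravity well are cheaper per unit of excess speed (the Oberth effect) than burns from "locations above the planets' gravity", matching the larger Lagrange-point figures quoted.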
  21. Magnetostrictive materials change their dimensions under a magnetic field: only 60ppm in one dimension for cobalt, but 2,000ppm for Terfenol-D (which isn't exactly cheap). Provided they store a significant amount of hydrogen (their Young's modulus is aberrant), they might release it at will under the action of a magnetic field. If an electromagnet requires too much power, just pass the material (wire, sheet...) near a permanent magnet. Well, trying isn't difficult, so it can be worth an experiment. Marc Schaefer, aka Enthalpy
  22. Pumping a laser with sunlight would bring heartening power for space-probe data transmission. Imagine a probe at Jupiter's moon Europa. Sunlight provides 50W/m² there; a D=4.4m concentrator, fitting unfolded in a launcher fairing, collects 700W. From a Nd:Cr:YAG, a team at Osaka university observed 38% conversion into laser light http://www.icenes2007.org/icenes_proceedings/manuscripts.pdf/Session%207B/SOLAR%20PUMPED.pdf but let's take 25%: that's still a permanent 180W of transmitted power from that difficult location, with light's better focussing possibility. At Mars, the same evaluation gives a permanent 2kW of transmitted power. Better than solar cells feeding laser diodes pumping a solid-state laser. To reduce the cooling need at the YAG, I suggest coating the concentrator (or putting filters in the light's path) so that it reflects only the wavelengths that pump the Nd:Cr:YAG. My bad: a paper long ago presented a research effort as already-flying technology, and I believed it. The attenuation law differs. In a fibre, light stays concentrated at any distance, but the material absorbs it. We luckily have pure silica that is very transparent (ordinary glass is opaque at 1m thickness), and big efforts developed laser diodes tuned to the best wavelength; even with attenuation going as exp(-distance), Australia can be crossed in one hop. Compare with open-space propagation: no attenuation through vacuum (our atmosphere would be very bad), but light spreads its power density as distance^-2, which is more favourable at long range. Consequently, our Moon's distance has been measured since Apollo by a laser beam on Earth illuminating a small reflector put on the Moon to send a fraction back. Well beyond our Moon, radio enabled transmissions with the Pioneers and Voyagers; light shall improve such links.
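The power budget quoted above is easy to reproduce from the post's own figures (50 W/m² at Jupiter, D=4.4 m, 25% conversion); a minimal sketch:

```python
import math

def laser_power(flux_w_m2, dia_m, conversion):
    """Collected sunlight times conversion efficiency (figures from the post)."""
    area = math.pi * (dia_m / 2) ** 2  # circular concentrator
    return flux_w_m2 * area * conversion

p_europa = laser_power(50.0, 4.4, 0.25)
print(f"laser output near Europa: {p_europa:.0f} W")  # ~190 W
```

The exact result is about 190 W (the post rounds the collected 760 W down to 700 W, hence its 180 W); scaling the flux to Mars levels reproduces the ~2 kW figure the same way.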
  23. You thought it was difficult? They did it... In a set of p*q elements, which is not a field, and where the primes p and q are chosen big (like 500 bits), computing a^x is quick, but the reverse operation, called the discrete logarithm, is long - that is, no quick method was known. So much so that some methods for computer security rely on it, for instance some passports. http://en.wikipedia.org/wiki/Discrete_logarithm That was before. On May 12 at Eurocrypt 2014, Razvan Barbulescu and his colleagues described a method in quasi-polynomial time: http://ec14.compute.dtu.dk/program.html They claim as an example that number sizes that would have needed 2^128 operations to crack now go in 2^60 http://www.lemonde.fr/sciences/article/2014/05/13/une-percee-en-mathematique-rend-caduques-des-procedures-de-chiffrement_4415604_1650684.html (sorry for ze lãnguage, other reports must exist) Papers, possibly of this most recent method: http://arxiv.org/abs/1306.4244 and http://cca.saclay.inria.fr/Data/Razvan.Barbulescu-10-01-2014.pdf It all depends on the number sizes and the difficulty of the individual operations: 2^56 simpler operations were made decades ago by the EFF to crack DES. How small are the numbers chosen in existing applications? It's about time to double-check and take safety factors, or better, change the encryption method, because smaller improvements tend to follow any breakthrough. Even more disturbing: the discrete logarithm is known to be related to factorization (finding p and q above), that is, cracking one would help crack the other. http://en.wikipedia.org/wiki/Integer_factorization If someone finds a practical path in that direction (the other direction, factoring easing the logarithm, is simple), that would ruin all the algorithms we have now.
[A simple introduction: Applied Cryptography, by Bruce Schneier] All public-key cryptography, which exchanges the keys for the simpler private-key ciphers in all working programmes and authenticates messages or signs software, relies on the difficulty of factoring.
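The asymmetry the post relies on can be shown with a toy example: modular exponentiation is fast (square-and-multiply), while the only generic way to invert it is to try exponents one by one. Tiny illustrative numbers, of course; real systems use moduli of hundreds of digits.

```python
# Toy illustration of the discrete-logarithm asymmetry.

def discrete_log_bruteforce(g, h, n):
    """Find x with g**x % n == h by trying every exponent: O(n) work."""
    acc = 1
    for x in range(n):
        if acc == h:
            return x
        acc = (acc * g) % n
    return None

n = 101           # a small prime modulus for the toy example
g = 2             # a generator mod 101
x = 71            # the secret exponent
h = pow(g, x, n)  # forward direction: fast even for huge numbers

print(f"recovered x = {discrete_log_bruteforce(g, h, n)}")  # 71
```

The forward `pow(g, x, n)` costs only about log2(x) multiplications; the brute-force inverse costs up to n, and it is precisely this gap (and its quasi-polynomial erosion) that the Eurocrypt 2014 result is about.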
  24. Maser microwave sources wouldn't bring better transmissions than any other source. They do serve as clocks, especially in some GPS satellites; clocks were NIST's goal when they made their maser on a chip. The whole point of laser transmissions is the shorter wavelength, which focusses the beam better; that's why they are being developed presently. My first answer dealt with unguided transmissions.
  25. Apparently you'd like "particle" to mean "well-located point". This is not what quantum mechanics has kept from the idea of a particle. Photons, i.e. light, travel exactly as a wave, described by the equations of electromagnetism, which must hence be kept. Light particles, or photons, serve to account for the fact that light appears and disappears in integer multiples of an energy h*F. There isn't much more than that behind the idea of the photon. Even the absorption of a photon does not need to be local: when a 10m-long antenna absorbs a photon, the position isn't more accurate than 10m. And when a semiconductor photodetector absorbs a light photon, it typically happens over many thousands of atoms.