User:Armondikov/Now/Apr12

Thought of the Now

...half-baked ideas from the mind of someone who is almost certainly dumber than they think they are.

Archive


30th April 2012

Shit atheists say (II)

Here's another thoroughly overused trope:

Religion has no supporting evidence!

Okay, I can give this one the benefit of the doubt as being just shorthand for what I'm about to describe, but that doesn't make the excess use of it any better. If you repeat a point enough times it ceases to be a convincing argument and, instead, becomes a litany of faith. As much as we can play something like Creationist Argument Bingo, we can play Skeptic Bingo with phrases like "extraordinary claims require extraordinary evidence".

In short, religion has lots of evidence in its favour; the trick is whether that evidence substantially backs up the claim at hand. Consider the following list of evidence:

  • The word of the Bible.
  • The large number of convinced followers.
  • Our existence and the existence of the world we live in.
  • Apparent design.
  • Personal feelings.
  • Miracles.

The list goes on, and all of this is evidence by its very nature. If God - and for the sake of convention I'll stick with Yahweh - was real then we would expect to see all of the above.

This is where conditional probability comes into its own as remarkably useful, and skeptics ignore this at their own risk. We can create a list of evidence that we expect to see if our hypothesis is correct, but there are many hypotheses that can potentially lead to this evidence. This is the evidence we expect to see if God is wearing red socks - but it's also the evidence we expect to see if God is wearing blue socks. By slow and gradual extension we can demonstrate that this is what we expect to see if God is called Yahweh and if God is called Allah. It's the evidence we expect to see if God sends his only begotten son Jesus to save humanity from original sin, and if God communicated the original word through his prophet Muhammad.

Because of this, the probability of a specific hypothesis being correct given a piece of evidence is not the same as the probability of seeing that piece of evidence if we assume the hypothesis is correct. Often enough, religious believers are simply committing a confusion-of-the-inverse fallacy with their evidence - it's not, as atheists declare, a complete lack of evidence. The problem is far more subtle than that, but it simply makes us feel better and smugly superior to say religious believers are total idiots who believe completely unfounded things instead.
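To make the distinction concrete, here's a minimal Python sketch of Bayes' theorem. The priors and likelihoods are numbers I've invented purely for illustration; the point is only that P(evidence | hypothesis) can be high for every hypothesis on the table while P(hypothesis | evidence) for any one of them stays modest.

  # Confusion of the inverse: P(H|E) is not the same as P(E|H).
  # Priors and likelihoods below are invented purely for illustration.
  hypotheses = {
      "Yahweh":  {"prior": 0.05, "likelihood": 0.9},
      "Allah":   {"prior": 0.05, "likelihood": 0.9},
      "FSM":     {"prior": 0.05, "likelihood": 0.9},
      "no god":  {"prior": 0.85, "likelihood": 0.3},
  }

  # Total probability of seeing the evidence at all (believers, scripture,
  # apparent design...) summed over every hypothesis on the table.
  p_evidence = sum(h["prior"] * h["likelihood"] for h in hypotheses.values())

  # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
  for name, h in hypotheses.items():
      posterior = h["likelihood"] * h["prior"] / p_evidence
      print(f"P(E|{name}) = {h['likelihood']:.2f}   P({name}|E) = {posterior:.2f}")

With these made-up numbers every god hypothesis "predicts" the evidence with probability 0.9, yet each one ends up with a posterior of only about 0.12 - the evidence is real, it just doesn't discriminate between the rivals.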


29th April 2012

Did you really just prove that?

One thing that's really been getting my goat for a while is when what someone has actually proved turns out to bear little resemblance to what they say they've proved, or what they set out to prove. One could make a decent working definition of "religion" based on supporting arguments falling into this trap.

The most blatant offender, of course, is the cosmological argument or the "first cause" argument for the existence of God. Given cause and effect, and that an infinite string of causes is absurd (because you can ask what caused the infinite string to come into being), there must exist a first cause - and that cause must be God. But every atheist who has ever looked at this will wonder how it proves "God". A theologian might well be tempted to say "by definition, this first cause is God" but, given that terrible level of argumentation, I can, with equal validity, add "by definition, the first cause is actually the Flying Spaghetti Monster" (I invite anyone tempted by the "by definition" argument to conclusively show that this isn't the case).

We have a first cause, that much can be proved, so let's call it "Nikikumba" based on a random bunch of syllables that popped into my head. Nikikumba has but one known property: it is the entity that is proven to exist via the cosmological argument. Now, a believer requires a further step: get from "Nikikumba" to "God, who gave his only begotten son Jesus Christ so that mankind can be redeemed from sin". Or get from "Nikikumba" to "Allah and his prophet Muhammad". Or even get to "his noodly appendage" from there - actually, I'd be more impressed if you could show how not to get to the FSM using the same logic used to get from Nikikumba to Yahweh! So, people who have set out to prove God - specifically their god - via this mechanism really haven't proven what they claimed to prove.

It's not just restricted to the common religions; I've seen the same with a lot of science-based woo, from digital physics to quantum consciousness. Many of these arguments also fail to prove what their proponents claim they do - they never make the coherent step that shows that the entity they have demonstrated matches the entity they want to demonstrate.

Those professing (or preaching?) that the universe is a giant computer seem concerned about how quantum mechanics seems to violate classical mechanics - in other words, QM doesn't mesh well with our common sense. Yet there is a massive difference between proving that the universe isn't classical underneath - something physicists have agreed on for nearly a century - and proving that it's like a giant computer. To prove that the universe doesn't obey our limited perceptions of it is a very trivial thing. Similarly, there is a huge difference between there being an isomorphism between reality and a mathematical description of it (isn't it a remarkable coincidence that speed just so happens to be equal to distance divided by time?) and reality being a computation. These assertions require a significantly larger deductive step to demonstrate, just as you need a considerable leap in logic to get from Nikikumba to God. Almost invariably, the logic making these leaps is never presented.

27th April 2012

Shit atheists say.

I'm going to start collecting this stuff. First entry is this:

I don't mean to make an argument for a deity...

Here's the thing about logic and arguments; conclusions should follow from the premises. In fact, the thing about logic is that once you've broken down your derivation properly, conclusions should follow so readily from the premises that one can hardly be said to have done anything but state the blindingly obvious.

So, if you're making an argument and you think it's going somewhere you don't like, you've got two choices:

  • 1) Recognise where the argument is wrong and change it.
  • 2) Recognise that you've made an argument and accept the conclusion it actually leads to.
Neither of these involves making a caveat like "I don't mean to argue for...". At best, this is just trying to say you've made a hash of your writing and communication and that you expect to be misunderstood, and at worst it's saying that you're starting with a conclusion, building up supporting evidence, and ignoring anything that contradicts it.

26th April 2012

A brilliant response?

I'm currently struggling to re-find this video, but I'll describe the process anyway. In a debate between a believer and atheist, the following question comes up: "What is our source of morality? What prevents us from murdering and harming each other?"

So, the atheist - dare I say, predictably? - starts by discussing a bit of evolutionary psychology and the in-group-out-group mentality that we developed so that we would prosper, the fact that murder promotes distrust and distrust is bad for us. Yadda yadda, we all should know this crap. The believer then offers this response:

"...love"

Okay, I'll concede to this being succinct and a little witty. But it's not by any stretch "brilliant", as it simply doesn't answer the question. Yudkowsky talks of heat conduction being a "fake explanation"... well, "love" as an answer to this is the fakest of the fake. What does this, as an answer, actually achieve? Not much if we consider other potential questions: "Why did the man murder his wife and the man she was having an affair with?" - "Love!" Very true, although so context dependent someone might accuse me of magical wordplay. Perhaps we can consider a similar one-word answer in another sense: "Why did Hitler hate Jews?" - "Hate!" The second one there should show more clearly what's wrong with the response to "Why don't we kill each other?" It's simply not answering the question, it's just restating it in different terms. You could easily substitute the morality conundrum with an isomorphic question of "Why do we love?" - suddenly answering "Love!" seems less profound and more idiotic - all because the language has changed, not because the question-answer relationship has.

You see, answers are pretty fucking boring when you get down to it. Figuring out the question, and exploring the question - even asking why we'd want to ask the question in the first place - is far more interesting and often more enlightening. Is it more thrilling and enlightening to be told that 2+2=4, or to learn the underlying set theory and logic of why number theory just so happens to work? It's intellectual masturbation, sure, but I've never seen someone convincingly argue that masturbation is without its rewards (category mistakes be damned!). It's perhaps why I'm currently attracted to Zen philosophy and kōan practice. It's not about providing answers; Zen Masters don't expect their students to generate coherent and logical answers to kōans. It's kind of like an episode of QI, except instead of a klaxon and some deducted points for boring and obvious answers, it's being hit with a stick.

All answers are wrong; any scientist should know that and understand the relativity of wrong. You don't need to delve into Zen-like practices to get that, but it does guide you to empty your mind and first attempt to understand the question before charging head-first for an answer because it either sounds right or is a witty quip that makes you feel better about yourself. You can't answer the question "who are you?" with just your name; you're not an arbitrary label, you're far more than that. Of course, I can't continue on that without stating my soft spot for E-Prime, which might posit that such philosophical difficulties are an illusion of language, rather than genuine problems!

Anyway, it's about understanding the question first, and in that understanding is the real answer. Simply saying that we are moral because of love is circular, and rather stupid, really. I'm more intrigued and impressed by the atheist's answer to the "Why are we moral?" question because it doesn't really seek an answer; it seeks to tear into the question and understand why we might be asking it and what it means. Why are we moral? What do we mean by moral? What acts count as this? How do they relate to our feelings, our survival? If you hit even the most basic question with a hard enough hammer it will shatter into a thousand pieces, and a true answer comes from assembling them again.

26th April 2012

Other people are idiots...

Hands up if you've ever called a creationist retarded, or claimed that, for example, Andrew Schlafly must have brain damage or something? Anyone?

*Puts hand up*

I don't do this any more, though, for two reasons: firstly, it mocks anyone with actual medical problems, and secondly, it doesn't accomplish anything useful. All it is, is trying to sound superior.

Face it. Everyone is an idiot. Everyone is biased. Everyone uses incorrect heuristics. Everyone will defend their own views in the face of evidence. To claim that you're not is, on the surface, lying, but deeper down is probably an indication that you do it more than anyone else. All we can do is minimise it.

Yet, people will happily yell insults instead. "They must have a personality disorder!" "They must be retarded!" "They live in a mental home!!" "But they have a Republican brain!"

Why? Because this is easier and simpler than admitting to an unpleasant truth: these peoples' brains are working just fine. Their brains work fine but have come to a different conclusion to us. They disagree with us, but it's not some nasty external force that has brought them to that conclusion. No disorder, no neurological framework, no head injury. They just simply disagree.

This is uncomfortable. If others disagree for, for lack of a better term, valid reasons, then it leads to one conclusion: they might be right. If they might be right, then I might be wrong. Every action seems like an attempt to just preserve our own dignity rather than anything else. We don't want to be wrong.

25th April 2012

Computer graphics - a semi-rant.

The quality of a piece of artwork, in any medium, boils down to a few simple factors that multiply together: inherent talent, learned skill, time expended, and the quality and applicability of the tools used. The last two, I feel, are often much overlooked. With a broad wallpaper-paste brush, you can cover a wide area, and quickly, but if you want to paint, say, a person, you could probably only make a rough facsimile (at best something intentionally stylised) given only rough tools. With finer brushes, you can create a far more accurate painting with resplendent detail - but it will take you considerably longer because you're working with finer materials. Of course, I'm intentionally restricting this to "realistic" artwork (I maintain that the people who say that this "doesn't count" as art are just the ones who can't draw) because I want to extend this into computer graphics, where the talent x skill x time x tools equation holds even more strongly than in physical media.

People seem to be under the impression that CGI can do anything. As if you can just switch on a computer and, hey presto, out pops a fully formed Optimus Prime ready to roll out. It's not unlike a singularitarian or transhumanist view that seems to think that technology, on its own, will work miracles in time. All we need is faster processors, more graphics chips and everything will be excellent - and before long, we'll have 100% realistic human actors that no one will be able to distinguish from the real thing.

Bull. Shit.

Improved computing power and the advances made in software (often under-the-hood advances, or those of interest only to graphics geeks) can only do so much, but mainly they're just advances in the tools we have available. It raises the limits on what we could achieve, analogous to switching to smaller brushes, but it doesn't make those strokes for us. If CGI alone was a miracle, then special effects budgets would be going down, instead of up; I'd wager that you could film a shot-for-shot remake of Independence Day, which holds the record for the most extensive model work in all of film history, and while making it look pretty much the same (obvious model shots would be replaced by obvious CG shots) it would probably cost twice as much. This is all because while CG gives us smaller brushes, the artists behind it need more time to use those brushes to fill larger and larger canvases - high definition, 4K definition, 3D and so on.

So let's look at some examples. Here is a pretty tasty shot from the end of the first series of Game of Thrones - and not just because Emilia Clarke makes me think absurdly impure thoughts. That dragon is simply amazing, and above and beyond what you expect from any TV show. It's practically film quality in its execution. Yet, despite having three dragons now floating around, they're conspicuously absent from the second series so far. Why? Because modelling and texturing something to that quality, then animating, lighting and compositing it to the same standard, is an arduous task. One person might be able to pull it off in a couple of months working full time, but more likely half a dozen people contributed to that shot and pulled it off within a tight deadline. Of course, the real "magic" of CG isn't in such "obvious" shots, but in more subtle ones such as set extensions or pretty much anything Lola VFX has done in the last decade.

So, here's the rant part. Let's, for a moment, consider the Ultramarines movie and the Dawn of War intro. Wait? So the high-tech awesome movie looks shittier than a mere game intro from years back? Horrifying! But CG is supposed to be magic!!

Well, here's the reality check. The CG Society has some excellent featurettes on the making of the two Dawn of War game intros. The first DOW intro is remarkable for the fact that it was made in roughly four weeks from the contract being signed to them handing over the finished product - given a week for sketching it out and a week for finalising, most of the work would have been done in two. Though this is with the entirety of Blur Studio (a couple of dozen people, perhaps more?) working flat-out 18 hour days to get it done. That's passion, yes, but here's the thing; as impressive as the DOW short is, it's still under two minutes of produced footage, in one location, with many models reused (the marines weren't as individual as they wanted to be) and with most of the animation references already done because Relic handed them all the animation files from the actual game (in fact, an early animatic was assembled entirely from in-game models and animations). Ultramarines, by comparison, runs nearly thirty times as long and features numerous locations and scenes, extensive voice acting, and individual characters. Simply put, to make a 60 minute movie to the same CG standard as DOW would require thirty times as long, at least. That's over two years of continuous development, and that's if they matched the relentless pace that Blur did to produce the DOW intro! Now think why The Lord Inquisitor is going to take 5 years to finish despite its relatively modest running time - we got three Transformers movies in that sort of time frame! Time means people, and they need paying, so is it any wonder that modern movies are costing an order of magnitude more than those in the 90s? About 15-20% of the effects budget of X-Men: The Last Stand was blown on just the Golden Gate Bridge scene - so much for computers making things cheap and easy!
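For what it's worth, that scaling is easy to write down. Here's a back-of-envelope Python sketch using the rough figures quoted above, and assuming (generously) that effort scales linearly with the amount of finished footage - it ignores the extra locations, voice work and individual characters entirely.

  # Back-of-envelope scaling of CG effort with running time.
  # Figures are the rough ones quoted above; linear scaling is an assumption.
  dow_runtime_min = 2        # Dawn of War intro: under two minutes of footage
  dow_studio_weeks = 4       # roughly four weeks, the whole studio flat out

  movie_runtime_min = 60     # an Ultramarines-length feature
  scale = movie_runtime_min / dow_runtime_min

  movie_studio_weeks = dow_studio_weeks * scale
  print(f"{scale:.0f}x the footage -> ~{movie_studio_weeks:.0f} studio-weeks "
        f"(~{movie_studio_weeks / 52:.1f} years at the same relentless pace)")

It spits out roughly 120 studio-weeks - over two years at Blur's flat-out pace - which is the point.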

CG isn't magic. Everything has to be made. If you build a physical set you can rely on certain physical properties; mass produced debris, randomness, physical texture and so on. But in CG land everything must be deliberate, everything must be placed - check the bonus disc of Wall-E to see some of the artists going over a scene with exactly what they contributed to it, wisps of dust, rolling cans, a particular atmospheric effect. Everything is manually done, and there's only so much automation that a computer can handle.

So, next time you either see "bad" CGI, or don't feel particularly amazed at giant robots battling it out on screen, do stop and think of what goes into this sort of thing - it will break your brain.

18th April 2012

The Euthyphro Dilemma.

Is something "good" because God says it is, or is God "good" because it's good? If the former, then morality is arbitrary and circular - and God's actions as written in many holy texts, particularly Judeo-Christian ones, don't mesh well with our common notions of "good". If the former, then God is appealing to a source outside Himself - which renders him pointless. Substitute "God" for any source of absolute morality.

But what if you replace "God" with yourself? Someone recently said to me that being an atheist doesn't free you from the Euthyphro Dilemma precisely because you're doing just this. Really?

This is where it becomes difficult to discuss things with people who can learn a great deal and espouse a lot of knowledge, but suck at synthesising new thoughts. Because how does this assertion even work? Is something good because I declare it to be, or am I appealing to an outside source of morality? It's a fine enough question, but it neglects one simple thing; I do not claim to be an absolute "source" of moral judgement. I can make decisions, and I look to others, but I don't treat "moral" as anything particularly magical. I don't treat it as a real thing, per se. I don't imagine some burning light out there in the universe (or even outside it) with "THIS IS MORALITY!" written on a sign post - okay, so I doubt any Christian does, either, but that vision is merely a tortured, literal interpretation of the idea of a "source" of morality. What is the difference between such a physical object and the idea that God is some source of morality? I don't see much of a difference between the two; the physical object is merely a metaphor for God acting as a "source".

Once you boil moral judgements of right and wrong down to what they are, which is just words that we use to describe patterns of behaviour and our approval/disapproval of them, then an idea of a "source" simply doesn't apply. What informs these judgements? Evolutionary psychology, societal pressures, personal experience, empathy... the list goes on, but it would be a grave error to think that these are "sources" in the same sense that people assume God is when they speak of absolute morality.

Now, I'm no moral relativist - I don't think your situation allows you to justify things however you like and I don't excuse certain behaviours, and religious apologists get nowhere by attacking moral relativism in that sense. But neither am I a moral absolutist, as I think the two forms are just a false dichotomy imposed by religions that want to attack a straw man rival and put forward their own book as a solution. We have behaviour, and an instinct that lets us judge whether that behaviour is beneficial or not - sometimes, contrived situations challenge that instinct, but so what? - but I don't ascribe this judgement to an unchanging source. So yes, being an atheist does solve the Euthyphro Dilemma, because it just doesn't apply when you're not imagining a blazing ball of light with "THIS IS MORALITY!" written on it.

17th April 2012

Fun Fact!

It turns out that if you have a single magnetic resonance frequency dominating a free-induction decay (comme ça), then you lose all frequency discrimination because the FT process can't use quadrature detection to distinguish positive from negative frequencies, resulting in mirror images of your less intense peaks about that frequency. Ha! Normally, this appears if you just crank the power up and it clips, or you don't take multiple transients so you're not phase-cycling the problem away (in which case you get a mirror image about the transmitter's offset frequency), but in this case it's actually around a solvent peak, purely because it's so intense. This wouldn't be a problem if these mysterious mirror images weren't right in the region of the frequency domain that I'm interested in.
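If you want to see the artefact without a spectrometer, here's a toy numpy sketch of the simpler of the two cases above - a small gain mismatch between the two quadrature channels, which mirrors peaks about the transmitter offset. Every number in it (frequencies, amplitudes, the 5% mismatch, the decay constant) is invented for illustration rather than taken from a real acquisition.

  # Toy simulation of quadrature images in an NMR spectrum.
  # Idealised FID; all numbers are invented for illustration only.
  import numpy as np

  n, dw = 4096, 1e-4                       # points, dwell time (s)
  t = np.arange(n) * dw
  peaks = {+800.0: 100.0, -750.0: 1.0}     # Hz offset from carrier: amplitude
  # an intense "solvent" peak at +800 Hz, a weak peak of interest near -750 Hz

  fid = sum(a * np.exp(2j * np.pi * f * t) * np.exp(-t / 0.05)
            for f, a in peaks.items())

  # Imperfect quadrature: the two receiver channels differ slightly in gain,
  # so each resonance picks up a mirror image at minus its offset frequency.
  mismatch = 0.05
  fid_imbalanced = fid.real + 1j * (1 + mismatch) * fid.imag

  spectrum = np.abs(np.fft.fftshift(np.fft.fft(fid_imbalanced)))
  axis = np.fft.fftshift(np.fft.fftfreq(n, dw))

  # The image of the intense peak lands at -800 Hz, right next to the weak
  # peak at -750 Hz, and is comparable to it in height.
  for target in (+800.0, -750.0, -800.0):
      i = np.argmin(np.abs(axis - target))
      print(f"{target:+7.1f} Hz : {spectrum[i]:10.1f}")

With a 5% mismatch the image of the intense peak comes out a couple of times taller than the genuine weak peak sitting 50 Hz away from it - exactly the sort of thing that ruins your day.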

Scumbag Science giveth, and Scumbag Science taketh away!

17th April 2012

Armondikov's Law.

I've decided, my eponymous law is going to be thus:

The more frequently a poster on the internet refers to themselves as "a rationalist", the more likely it is that they are a complete and utter tool.

That's because anyone who knows what they're talking about should be well aware that you can't entirely eliminate, or immunise yourself against, heuristics, biases, and "common sense". You can reduce them, sure, and develop methodologies to overcome such things where they're needed, but all you can really do is try. I haven't yet seen anyone turn themselves into nothing more than a simple set of logic gates. And so, we simply reduce our reliance on our trained instincts when we find them to clash with reality, and in fact really just end up replacing them with newer models that we use as mental short-cuts, at least until we realise (that's the important part) where those fail, and try to build a better idea. Anyone who is aware of the limits of their own knowledge and the limits of their own competence should know this. No one has the cold and calculating nature of a mere adding machine, which is what they're implying when they say they're "a rationalist".

This isn't just some nut-ball definition. Even fledgling skeptics will say things like "I only think what the evidence tells me" - which is clearly some aspiration to only ever be correct, to absorb all possible knowledge, and to reach total, or infinite, certainty about your own beliefs. The trouble with that, though, is that it's a pipe dream. It's not possible. At some point you will hit a wall and "go with your gut", as a human, not a mere calculator. We like to think we'll change our mind about something, but we hardly ever do, really. This isn't a terrible thing to admit; one should cherish being wrong and getting the opportunity to be corrected, but we should cherish it because it's difficult, not deny it as something only crazies and lesser humans like "the religious" do.

So we're constantly bombarded with situations where our own biases and heuristic (short-cut) thinking don't work. The real trick to trying to be a "rationalist" is to just try your best - because you can't really do better than that - to recognise where this might happen, minimise its negative impact, and maximise its positive impact by building better, more applicable, biases where appropriate. After all, eliminating them isn't a good idea; anyone caught trying to overrule their intelligent and heuristic thinking when crossing a road, sitting instead with a pen, paper, calculator and speed camera, is likely to take 6 hours to find a safe time to cross the road and still get knocked over.

Self-declaring yourself to be "a rationalist" implies you've achieved a level of perfection that not only is unobtainable, but is also precluded by the fact you made such a statement in the first place!

13th April 2012

Thin layer chromatography doesn't work like that.

Recently I had to mark and give feedback to a load of undergraduates who were doing thin-layer chromatography in the teaching labs. People might be familiar with this; if you take filter paper, dab on a spot from a coloured felt-tip marker, then let water rise up through it, the colours will separate out - and you can see how green ink is composed of blue and yellow ink, for example. Anyway, TLC is the same principle, but a little (and only a little) more sophisticated.

But one of the questions in the lab script is "how does TLC separate compounds?" - basically, "how does TLC work?" Two answers cropped up most often, in a near 50:50 split: solubility and polarity. Obviously, to anyone with experience in separation sciences, the latter is the right answer. But why did people put down the former, when they know it isn't right? How do they know it isn't right? I'll summarise the feedback session here (naturally, prising answers out of them isn't this easy, so I'm paraphrasing):

Me: "What is solubility?"
Student: "Erm, how well something dissolves."
Me: "Right, that's a good definition, but doesn't tell us anything. What do we actually measure with 'solubility'?"

(Here we experience a lot of coaxing, until eventually...)

Student: "How much compound will actually dissolve in a set quantity of liquid."
Me: "Until?"
Student: "It's saturated and no more will dissolve."*
Me: "Excellent. That is what we actually measure with 'solubility'. So, how does this property allow TLC to separate molecules? How is 'how much compound can dissolve in a set quantity of liquid' read by the TLC plate when everything is in solution?"
Student: "..."
Me: "Exactly, so you already know that this was a nonsense answer."

They're clutching at vaguely the right idea, but aren't sure of how to go about expressing it. Thing is, you can describe TLC without using any of these buzzwords. It's simply the interactions between the TLC plate and the molecule in question: the more strongly interacting molecules are retarded by the plate, while the more weakly interacting ones aren't and are swept up by the solvent. What kind of interactions? Again, you don't need to mention "polarity" because there's only one interaction that could do this: electrostatic, the relative positive and negative charges that attract each other. The larger the charges, the more the molecule interacts with the TLC plate.** We use "polarity" as a shorthand for how big and how distributed these charges are once we've understood it.
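If it helps, here's a toy Python model of exactly that picture - a spot only moves while its molecules are in the solvent, so the fraction of time spent stuck to the plate sets how far up it gets. The adsorption constants and compound names are invented; only the qualitative ordering matters.

  # Toy TLC model: a spot migrates only while its molecules are in the
  # mobile phase, so its retardation factor Rf is roughly the fraction of
  # material in solution at any instant. K is an (invented) adsorbed-to-
  # dissolved ratio standing in for how strongly each compound sticks to
  # a polar plate via electrostatic interactions.
  adsorption_K = {
      "non-polar hydrocarbon": 0.2,
      "mildly polar ester":    2.0,
      "very polar alcohol":   20.0,
  }

  for compound, K in adsorption_K.items():
      rf = 1.0 / (1.0 + K)      # fraction of molecules in the mobile phase
      print(f"{compound:>22}: Rf ~ {rf:.2f}")
  # Stronger interaction with the plate -> smaller Rf: the spot is retarded
  # and ends up lower on the plate. Saturation solubility never enters into it.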

The point is that they know what "solubility" is, although they need a little coaxing to get from a mere word to what it really means - i.e., the number that we slap onto it and how we go about getting and measuring that number. They should know, if they thought about the question in terms of what they would expect to be observed if their answer was correct. A molecule in solution is a molecule in solution, and "solubility", as in "how much of a compound can go into solution before it saturates" has nothing to do with the interactions that would retard a compound's progress up a TLC plate. They know this, they just didn't put it together properly because they were searching for an explanation that would earn approval of someone with a mark scheme, not the approval of me, who (and don't tell anyone this) doesn't pay too much attention to mark schemes when dishing out the grades at the end of it.

You don't need correct terminology (although correct usage of terms of art is an okay proxy for understanding) to describe what you're seeing and make a far more informed guess as to what is going on. You can't just rattle around with vague statements and say a prayer for them to be right.

*For instance, salt (NaCl) and sugar (glucose) will have different solubility in water. Although I don't know, off the top of my head, which is actually most soluble. An educated guess says salt will be by a sizeable margin, on account of water being one of the most polar solvents available and NaCl breaking into formally charged ions, rather than glucose which only has relative dipoles on the OH groups, and the fact that Na+ and Cl- will probably require comparatively smaller solvent shells and a bigger gain in entropy by going into solution.
**Strictly, it depends on what solvent you use and what the plate is. You can have "reverse phase" where it's the stationary plate, or column, that's non-polar and the solvent is the polar one.

11th April 2012

Binge working. It seems my current pattern is to alternate binge working with binge drinking. This, surprisingly, seems to get shit done.

6th April 2012

A bad workman always blames his tools...

It's the long bank holiday weekend here, with people taking both Friday and Monday off to do, well, whatever it is that normal people do when their offices close and they're booted out of the rat-race by law. Of course, it's also out of term time so that means such conventions apply to the chemistry department here, too. It's nice, actually. I get my best work done at weekends and late in the evening with no one else around. I get to wander dimly lit corridors knowing I'm the only person here, get to steal as much coffee as I like without anyone noticing (and to be fair, that milk in the fridge is going to go off before people come back on Tuesday, so I may as well take it) and can blast some heavy metal to drown out the sound of a load of fume hoods and helium compressors.

But most importantly, I get to use all the fancy stuff that's usually booked out completely during the week. Normally, I just live in my basement, and while it's nice to have an entire NMR spectrometer to yourself pretty much permanently, it's not the best machine in the world. Not least because I accidentally shoved a fibre-optic cable inside it that had a ferrous metal support coil around it - it took three people to pull it out and a month to reconfigure the shim stack. So here I am, in the other room with the shiny and fancy AVIII instead. And wow, does that make a difference. The magnet is the same on both - 400 MHz, specifically 400.17 MHz for my downstairs monstrosity and 400.12 MHz for this one, fleh - but the electronics actually doing the work are so much more advanced. The console is half the size, there's an air-conditioned temperature control unit that can get it down to minus 20 or so without liquid nitrogen, the software has an automated shimming routine (though I'm having difficulty getting that to work today, for some bizarre reason) and, even better, I don't have to go up half a billion stairs to get between the magnet and the parahydrogen generator to get the hyperpolarisation experiments done. In short: awesome.

But then there's the probe we have, too. While I'm convinced my basement-dwelling one is also completely fried (but it has the holes drilled in it for the in situ photochemistry work, so I don't have much choice), the receiver coil for protons is on the outer position. There's only so much space to fit two transceiver coils into, so either the proton coil can go closest to the sample or the heteronuclear one can. Currently, I have something in extremely low concentration and by heck does the position of the coils make a difference here. Downstairs, on the (probably) fucked probe, it takes about 30+ scans to get a decent reading of what's in there, which is about a minute. Not too bad (I've acquired rhodium HMQCs over about 15 hours before), but not brilliant, as signal-to-noise goes by the square root of the number of scans - meaning hellishly diminishing returns. If I want to double the signal from that, I'd need four times as many (120+) scans. Upstairs, however, on the shiny fancy Rolls Royce Phantom of a machine, it takes 4 scans to get the same result. Meaning if I want to double the signal-to-noise I need 16 scans, and to quadruple the SINO I need 64. This thing is pretty fucking sweet.
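That square-root arithmetic, written out as a quick Python sanity check (the baseline scan counts are the ones above; the only assumption is that signal-to-noise grows with the square root of the number of transients):

  # SNR grows as sqrt(number of scans), so multiplying the SNR by m
  # costs m**2 times as many scans - hence the diminishing returns.
  def scans_needed(baseline_scans, snr_multiplier):
      return baseline_scans * snr_multiplier ** 2

  for probe, baseline in [("basement probe", 30), ("shiny AVIII probe", 4)]:
      for m in (2, 4):
          print(f"{probe}: {m}x SNR needs {scans_needed(baseline, m)} scans")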

So hopefully, while everyone else is enjoying the long week of rain (Ha! Fuck you losers expecting sun and barbeques!) I'll be getting more work done in four days than I've done in four months.