Scott Alexander (b. 1984) is a rationalist blogger. After graduating magna cum laude with a bachelor’s degree in philosophy, he earned an MD and is currently completing a psychiatry residency. Scott Alexander is a pen name, and he does not disclose the hospital where he works. As is customary in the writing of psychiatrists and psychologists, he mashes up details of different patients when writing about them, fictionalizing the accounts so that his patients cannot be identified.
He began writing on Less Wrong under the name Yvain, then branched out into his own blog, Slate Star Codex (a near-anagram of "Scott Alexander"). SSC has become one of the top-tier blogs for LessWrong-style rationalists; it and his related Tumblr are linchpins of the LessWrong Diaspora.
His notable internet publications include a giant anti-neoreactionary FAQ, a map of the rationalist blogosphere, and a long collection of quotations from actual computer scientists on why AI risk should be taken seriously. He also posted a lengthy and widely circulated criticism of feminism, spurred by the feminist backlash against a blog comment by MIT professor Scott Aaronson.
Alexander frequently attends local LessWrong-rationalist meetups in the US and organizes some of them himself.
He does not always censor racist and sexist opinions in his comments section (except on open threads, where race and gender discussions are banned outright), which some of his fellow LessWrong-style rationalists have a problem with.
Iranian secularist Kaveh Mousavi, while agreeing with Alexander that the intellectually bankrupt sections of the social justice community deserve heavy critique, has nonetheless criticized Alexander himself for taking an Americentric view of social issues, for drawing a false equivalence between social justice advocates and social conservatives, and for downplaying discrimination against women and minorities in Western countries. It is worth noting, though, that Alexander has been willing to defend the parts of social justice he views as worthwhile, such as uses for trigger warnings and acknowledging that discrimination still exists and has massive economic costs.
Alexander is critical of neoreactionaries, having written what is generally regarded as the definitive takedown of neoreaction, though, per its header, he later took back some of the points he made in it. Even so, his blogroll is full of neoreactionaries and his comment section contains a lot of neoreactionary discussion, because he knows a pile of them personally and keeps discussing their ideas on his blog (there is almost nothing he won’t try to apply human biodiversity to, e.g. Harry Potter).
Alexander does not identify as a feminist or an anti-feminist, but feels like he has been unfairly associated with both.
In "SSC on Feminism", a post meant to clarify his position on feminism and feminist issues, he described his negative attitude towards part of the movement as follows:
I think there’s a whole corner of Internet feminism – the Jezebel, Gawker, and Modal Tumblr User faction – which is really scary. [...]
This strain is absolutely not the entirety of the movement – but it has become a big enough piece of the movement, and sufficiently dangerous to anybody who doesn’t share their views, that I think it really needs talking about and can’t be dismissed as “a few bad apples”. [...]
He is highly critical of communists and, more generally, has been persistently critical of what he views as millenarian ideologies, i.e., those holding that a catastrophe will destroy the current system, handwave, and a new Golden Age will arise from the ashes. As with neoreaction, this hasn’t stopped him from writing long book reports and getting very interested in, for example, the details of central planning in the USSR.
Existential risks
Alexander believes that the risks of superintelligent AIs (e.g. the risk of them misconstruing our goals and turning us all into paperclips) have been repeatedly misrepresented and downplayed by the media; that, while immediate disaster is unlikely, the threat is worth taking seriously; and that now is a good time to research it.
However, Alexander, who echoes the views of the Machine Intelligence Research Institute (MIRI), Stephen Hawking, Elon Musk, and Nick Bostrom on this, is neither an AI researcher nor a computer scientist (and the same goes for most of the "researchers" at MIRI, including Eliezer Yudkowsky). An actual AI researcher, Richard Loosemore, has criticized the assumptions behind many of the MIRI-style superintelligent-AI doomsday scenarios, pointing out that an AI that believed it had correctly interpreted the core goals of humanity, yet got them so hideously wrong, would not in fact be worthy of the name "intelligent" at all, and that this is not merely a naming issue but a basic design issue for AIs.
Effective altruism
Alexander finds the logic of effective altruism difficult to accept intellectually, having come up with a very counterintuitive thought experiment about it, but is inclined to offer the movement his moral support anyway. He is a big supporter of charity on similar grounds, often speaks on efficient charity, and currently supports Giving What We Can, a project which attempts to separate effective charities from inefficient ones.
In popular culture
- Dark Enlightenment philosopher Nick Land's 2014 psychological horror novella "Phyl-Undhu" includes a technological cult reminiscent of LessWrong and a character called "Alex Scott" who voices some of Alexander's ideas on the Doomsday Hypothesis, complete with an intelligence at the end of time you can communicate with, and a cultist pushed out of the cult who "wants to have not thought certain things."
- Slate Star Codex
- Slate Star Scratchpad (his Tumblr)
- Slate Star Codex on Reddit
- Scott Alexander on Twitter
- ↑ Five Years And One Week Of Less Wrong
- ↑ https://twitter.com/slatestarcodex
- ↑ http://slatestarscratchpad.tumblr.com/
- ↑ The Anti-Reactionary FAQ
- ↑ http://slatestarcodex.com/2015/01/01/untitled/
- ↑ http://www.newstatesman.com/laurie-penny/on-nerd-entitlement-rebel-alliance-empire
- ↑ Amanda Marcotte (December 30, 2014). "MIT professor explains: The real oppression is having to learn to talk to women". http://www.rawstory.com/2014/12/mit-professor-explains-the-real-oppression-is-having-to-learn-to-talk-to-women/.
- ↑ http://www.scottaaronson.com/blog/?p=2091#comment-326664
- ↑ Answer by Caio Camargo to "How true is the statement 'the comment threads on Slate Star Codex are a nightmare to read through'?"
- ↑ http://www.patheos.com/blogs/marginoferr/2015/06/16/the-irregular-symmetry/
- ↑ The Wonderful Thing about Triggers. Slate Star Codex, May 30, 2014
- ↑ http://slatestarcodex.com/2013/04/20/social-justice-for-the-highly-demanding-of-rigor/
- ↑ Despite apparently never having read them; he can, however, tell you all about Jensen.
- ↑ He specified in the comments: The word “sane” in that context should not be taken to mean “stupid” or even “holds stupid views”, but rather “willing to hold rational discussions about their views with someone they are tempted to consider an evil enemy, based on the Principle of Charity”. The evil enemy in question being neoreactionaries.
- ↑ Scott Alexander, Radicalizing the Romanceless. Slate Star Codex, August 31, 2014
- ↑ He apparently regrets the popularity of this phrase, saying "NO NEED TO TAKE THIS ONE SENTENCE OUT OF CONTEXT AND TRY TO SPREAD IT ALL OVER THE INTERNET", though it really doesn’t improve at all with context.
- ↑ http://slatestarcodex.com/2016/09/28/ssc-endorses-clinton-johnson-or-stein/
- ↑ http://slatestarcodex.com/2014/09/24/book-review-red-plenty/
- ↑ http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/
- ↑ http://ieet.org/index.php/IEET/more/loosemore20140724
- ↑ http://slatestarcodex.com/2013/04/05/investment-and-inefficient-charity/
- ↑ https://www.givingwhatwecan.org/