RationalWiki:Saloon bar/Archive475
Next Pope[edit]
I am wondering who will be chosen as the next Pope. I wouldn't be surprised if the next one is ultraconservative. There were many Catholic groups who didn't like Francis because he advocated empathy and not treating certain groups like garbage. Many Christians (both Protestant and Catholic) believe empathy is a sin, and they are quite open about that these days. What do other people here think? 199.119.233.222 (talk) 10:11, 21 April 2025 (UTC)
- So newsfeeds will spend a couple of weeks centered on a different useless old cult figure entirely satisfied with his own importance and hell-bent on saving everyone from some fantasy or other. Pardon me while I go off to a cave for a bit....Aloysius the Gaul (talk) 10:42, 21 April 2025 (UTC)
For what it is worth:
- John Paul II died within a week of Easter 2005, on 2 April (with Rainier III of Monaco dying on 6 April)
- Benedict XVI died on 31 December 2022
- Francis died Easter Monday 2025
- And in 1914 Pius X died the same night as the 'Black Pope' (head of the Jesuits), Franz Xavier Wernz
Make of the coincidences what you will. (The assumptions that 'they' were variously trying to keep Rainier alive until after the Pope had died, and trying to make Benedict the last notable death of 2022/the first of 2023, are unprovable but not impossible.) Anna Livia (talk) 11:25, 21 April 2025 (UTC)
- Whatever your views on the post and religion in general, the Pope isn't 'useless'. They're the head of the largest religious movement currently existing, boss of one of the oldest and most successful organisations in human history. And I don't think newsfeeds will spend a huge amount of time on it either; I mean they might do if it's a really slow couple of news weeks, but what the fuck's the chance of that happening right now?
- Anyway, what individual Catholic groups or individuals think means little. It is not a democracy; the Pope is elected by a small selectorate picked by previous popes (those cardinals who happen to be under 80), who then choose one of their own to get the Pope hat. And as it's pretty rare for a cardinal to be appointed before 55, almost all of the selectorate was appointed either by Francis [who I'd class as a 'cautious reformist'] or Benedict [who I'd class as a 'flexible traditionalist']. And as a Pope is unlikely to appoint loads of new cardinals who are badly out of sync with his own views, we can assume the bulk of that selectorate will be at least non-hostile to the viewpoints of the person who appointed them. Around 80% of this selectorate was appointed by Francis, and you need a two-thirds majority to win [90 cardinals, barring one dropping dead during the conclave (it's happened)], which means I think we can assume a new 'fierce traditionalist'/reactionary of the stamp of John Paul II is a non-starter.
- But while the various groups don't have a vote, their opinion matters a bit, because almost all the Cardinals will want to avoid a major schism and thus a lot will have at least some grip on what all the factions hate/like and by how much. And there's a huge number of them; only morons see it through a simple 'conservative-liberal' axis, when in reality it's a kaleidoscope of different Orders, theological viewpoints and geographical groupings. Thus we can rule out anyone who desires to promote radical reform/modernisation [I personally think Francis pushed it about as hard as he could without cracking the Church].
- If there are tensions on/with the 'liberal' wing of the Church [social justice, inclusivity, modernisation etc], there are also tensions on/with the 'conservative' wing; not just on topics such as climate change and modern capitalism [on which they are often out of sync with what passes for 'conservative thought' now] but the more pressing question of 'how can we handle wannabe dictatorships like MAGA?'. Should we try to 'remain at the table' in the hope of guiding [or protecting our own interests], or should we set our faces against such things? Should we even have a central policy on this? Perhaps we should simply do our best to 'keep out of politics' [but how can one do this when such things as 'love thy neighbour' have become political?]. These questions are important because the Cardinals will be acutely aware of historical 'compromises' with temporal power which ended up severely backfiring. So it's possible to end up with a new 'traditionalist' Pope who won't stop calling out JD Vance's shitty interpretation of Catholicism.
- Lastly, like SCOTUS picks, you don't want 'your pick' to be too old or sick, because the longer they reign, the more time they have to 'stack the Cardinal deck' towards their viewpoint. This is more pressing because a lot of the 'conservative' Cardinals seem close to ineligibility age but not close enough to be dead, which might make a rerun in, say, 2030 even harder for them to win.
- So my bet is the next Pope will end up being either a moderate or another 'cautious reformist', white and under 75. KarmaPolice (talk) 14:15, 21 April 2025 (UTC)
- Maybe not "white". One of the candidates being bantered around is Luis Antonio Tagle of the Philippines, and another is Peter Turkson of Ghana. Catholicism's influence is declining in Europe and other "Western sphere" nations but rising or steady elsewhere, and the Powers That Be may want to throw another bone to non-European entities (that probably was one of the influences in the previous pick of Jorge Mario Bergoglio (Argentina), methinks.) Maybe not, of course...
- That being said, white or not, I do think that a continuation of the "cautious reformist" approach of Francis is also quite likely. My "general impression" has been that Francis's most vocal critics have been "stick-in-the-mud" culture war "traditional Catholic" reactionaries, largely coming from Europe or America (especially the American EWTN sect). These are precisely the places where Catholicism is declining (perhaps *because* of this emphasis on angry culture war bullshit, I speculate? But I digress...) My impression is that Francis's focus on economic justice has been a popular decision in most of the rest of the world. From my viewpoint (not that it matters), there seems to be no good reason for the Catholic church to give in to those with the strange (heretical?) notion that Christianity is merely about growling about how evil those gosh darn homosexuals and abortions are all the time... BobJohnson (talk) 14:52, 21 April 2025 (UTC)
- As an aside, Marjorie Taylor Greene, possessed by the spirit of Martin Luther, pretty heavily implied that the Pope was killed by God. Literally some "Blessed be the Lord for striking down the Whore of Babylon" type shit. --Ozzyboo (talk) 17:39, 21 April 2025 (UTC)
- Of course the Pope is/was useless - what has this one achieved beyond being apparently not a bad bloke? What have any of them achieved other than giving people excuses for killing witches, rejecting science, burning Protestants? What will the next one "achieve"? I'll bet your left testicle on nothing at all.... just like the rest. And by way of coincidences - died <24 hrs after seeing Vance..... Aloysius the Gaul (talk) 05:51, 22 April 2025 (UTC)
- You do not need to be good to be powerful. And while the pope is no longer as powerful as he was during the medieval age, being the head of the world's largest religious organization DOES give you power regardless if that person or that position is good. A somebody. (talk) 21:14, 29 April 2025 (UTC)
- As an example, the Catholic Church is the largest provider of non-government schools and hospitals. That alone gives them power. A somebody. (talk) 21:19, 29 April 2025 (UTC)
- I recall when JPII died 'some newspaper' mentioned there being a (?Polish) tradition that anyone who died within a week of Easter Sunday (as JPII did) goes straight to Heaven. As Francis made an appearance on Easter Sunday there might have been an element of 'overdoing it.' Anna Livia (talk) 11:19, 22 April 2025 (UTC)
The Conclave[edit]
I know the Conclave is meant to be sealed communications-wise - but is there an 'emergency communications cord'? ('There has been an outbreak of seriously ill Cardinals - medical and more help needed'/'The Tiber has burst its banks - we need to move everybody and the antiquities to safety' etc.) Anna Livia (talk) 11:19, 22 April 2025 (UTC)
- They can break the seal (it's only a silk ribbon fixed with wax AFAIK) or unlock the front door and walk through it. Aloysius the Gaul (talk) 21:22, 29 April 2025 (UTC)
AI and Historical Advancement; Analogous or Not?[edit]
So in one of my classes today an argument came up that AI is similar to the invention of the calculator, and, being very against AI personally, I was struck by how sound this argument appeared: that both streamlined thought and allowed ideas to be created at a more accessible pace, and that the tasks and roles eliminated by both are therefore no great loss. Do you guys have anything against this? I am stumped. Alastor (talk) 17:35, 21 April 2025 (UTC)
- Calculators work and AI doesn't. I know this sounds snarky but it's like true, there's so many differences between a calculator and "AI" (a vague, broad term muddled by years and years of marketing). A calculator just does basic math. Ideas, mathematical theory, etc were not developed off of the calculator, the calculator just makes basic math really really easy. The role the calculator eliminated was 50 people sitting in a room running numbers constantly. It's mundane labor made unnecessary by a calculator, but math itself is not replaced by a calculator. People still have to do math.
- AI bros and corporate marketers want to imply that AI can replace everything from writers to artists to drivers to doctors. They want to make it seem like AI can automate these jobs like machines automated factory production. The fundamental issue with this is that things like medicine, art, writing, etc are not unnecessary, menial tasks like calculating, but important jobs that take specialized skills to learn and master. People enjoy doing these things. Nothing is gained by replacing artists with AI, or doctors with AI, or writers with AI. It doesn't make the mediums more accessible, it just removes humans from the roles they used to fill. Circling back to the original point, AI cannot and will never replace anyone at anything without doing what they did worse. AI art is ugly. AI writing is soulless. AI can straight up hallucinate shit. There is a limit to what AI can do and we're getting close to it. We're not getting past it. --Ozzyboo (talk) 17:53, 21 April 2025 (UTC)
- Not just that, but calculators always gave an accurate answer. They were created by technologists, who had no obvious political agenda by creating them. AI has been created by techno-entrepreneurs who do have obvious political agendas ("Stop hiring humans"[1], Musk's fascism with Grok, OpenAI's overlap with cryptocurrency, massive energy use and disregard for climate change, Zuckerberg's promotion of AI slime and of virtual reality over real reality). Bongolian (talk) 18:13, 21 April 2025 (UTC)
- The idea that analogies are usually accurate is, I think, a fallacy. I think they are useful for heuristic research. Similar things can result in very different ends: water (H2O) and hydrogen peroxide (H2O2) are formulaically similar; one is safe to drink, and one is not. Another example is thalidomide, where one isomer can be a sedative, while the other causes birth defects. Analogies can lead to disaster in the simplest constructions. Zatoichi (talk) 18:50, 21 April 2025 (UTC)
- With a focus more on programming, the latest term for replacing traditional programming work is "vibe coding", where the "vibe" thinking and language natural to business people is fed to LLM "AI" and it generates code. This points to what's expected and encouraged at the level of thinking -- replacing technical precision and detail-oriented care and clarity with vague visions and descriptions. The problems are: 1) How well can all the missing detail be auto-filled in? 2) How does thinking develop with such work?
- There's plenty of debate about #1, and general concern about a future filled with quickly generated software systems full of security holes and unwanted behaviors. Some time back I read about a person who bragged on X about making his online platform with "vibe coding", then panicked shortly after, as people saw the news, poked around, and began exploiting all the holes in the software. I predict plenty of both "experiments" and pushes in this direction, and plenty of backlash and lively controversy ahead.
- On #2, an essay making the rounds is E.W.Dijkstra's 1978 'On the foolishness of "natural language programming"', which argues that formal language and its use is the basis for the development of not only mathematics after ancient Greece, but also the intellectual discipline and technical culture of modern civilization. Why throw formalisms away, yearning for the use of natural language instead? "When all is said and told, the "naturalness" with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious." Dijkstra views such a path as leading mainly to greater "ease of making undetected mistakes", and ties it also to an erosion of intellectual discipline that he remarks briefly on alongside lamenting the "New Illiteracy" of the 1970s. Presumably he would have seen a great risk of cultural regression in technical thought in modern-day uses of "AI". --ApooftGnegiol (talk) 19:42, 21 April 2025 (UTC)
- 'Nothing new under the sun' - Gartner hype cycle.
- The argument was correct in the respect of putting 'AI' in the same category as calculating machines; ie as a new powerful tool for work. How were complex mathematical calculations done before these items? Rooms of people with pen, paper and log tables. The time, cost and mistakes of this method were enough to spur the UK Government to drop something like £5m in today's money on Charles Babbage's mechanical calculator [they wanted accurate/cheap navigation tables]. By the early 20th century, offices started getting mechanical adding machines, which hit the employment of bookkeepers' assistants, in the same manner that the typewriter and stencil machines displaced scriveners and the telephone cut down on the use of messenger boys.
- But even then there's the GIGO problem; the adding machine performs accurate calculations but it needs a professionally competent person to work it, just like the typewriter requires someone who can actually spell, knows grammar and understands how to structure such things as letters. And AIs are generally the same; it is not easy to get them to actually perform what you want except by sheer luck. I did a test just last month: I tried to use a free image generator to draw me a very basic storyboard of a short story, and even after I discount all the hours I took having to learn to operate the thing, it still took me a good three to four hours to do it. I suspect if it had been done by hand by a vaguely competent artist we'd have been looking at half an hour, max.
- The problem, as Ozzy rightly points out, is that our AI bros - just like every other iteration of tech-bro seen basically since forever - dream of deskilling occupations to the point any untrained idiot can do them at minimum wage. And usually this fails to pan out, because running the 'deskilled machine' becomes a skilled occupation in itself - for example, automated giant breweries might have gotten rid of a lot of the old skilled brewery workers, but they generated a need for highly skilled chemists, engineers, computer operators and logistics managers.
- The argument that 'AI will never surpass what humans can achieve' is both probably correct and irrelevant at the same time. Let's look at brewing again. A decent homebrewer or small 'artisan' concern can, I will adamantly state, produce a better product than one of the giant combines producing seas of EuroPiss or BrightSour. Yet... the combines grow bigger and bigger - turns out that the vast majority of drinkers either don't overly care about quality and/or they're attracted to the fact that if you make a sea of EuroPiss, an individual can/pint is comparatively cheap. AI is the same; AI art might be mediocre, but there are situations where mediocre is 'good enough'. The same can be said about almost all the products of the modern world; from flat-pack chipboard furniture to one-size-fits-nobody-that-well clothing made from polycotton.
- But, to continue my brewing analogy - does the mega brewery have to produce EuroPiss? No. It's the management which decided on the EuroPiss, not the equipment - put a genuine master brewer in charge and teach them how to use all that new kit and you'll soon see them producing a sea of really good beer. And that is the other future of AI, at least in the medium term; where our professional human beings learn how to use this new tool to improve their own qualitative/quantitative output.
- I mean, I'm just old enough to remember the debate over whether it was even possible to produce 'genuine art' on a computer at all. Similar debates raged in the early years of photography, with some people [mainly painters] arguing that 'it was not art'. And talking about 'genuine art'... it's an open secret that most top-rank artist 'studios' are in some respects mini-production lines where the 'name' only does a portion of the actual work, leaving the assistants to do the grunt work. Are you going to say an artist in c2030 'is not an artist' because they utilised AI in a similar manner? Or if an author uses AI as a research assistant, copyeditor and even a bit of light ghostwriting? KarmaPolice (talk) 20:14, 21 April 2025 (UTC)
- AI currently has a problem that other technologies don't have -- it's more "black boxy" than many technologies. I mean, it's always been true that the artist and the one who designs the tools have separate skill sets. But, to use an example from musician world, the player has a considerable amount of physical input in things like playing a violin. Even something more static (like a synthesizer or pipe organ) has knobs or levers that allow you to change the sonics. The art AI that I've seen has been on the level of "you type a few things and click a button and this auto-generated junk pops up". That's just not going to be too helpful for doing anything other than "generic boilerplate shit" (which, to be fair, is a lot of the "corporate" art type jobs. AI may not produce the next Great Artist, but an AI that spits out, say, generic greeting cards, seems quite plausible.)
- Now, technology certainly can change quite rapidly in usability. Just look at the synthesizer -- what was once an obscurity which often required considerable technical know-how and finesse to deal with ill-tempered, bulky, expensive doodads has become simple plugins on laptops, and easy enough to use that it's pretty much the dominant sound of world pop for the last couple of decades. So who knows what the future will hold... but from what I know of LLMs, it will require some kind of different design compared to what we have now. BobJohnson (talk) 20:48, 21 April 2025 (UTC)
- Perhaps the most effective use of common language is in accident prevention. "Look both ways" is both a warning and a methodology. The existence of AI raises the question, "how smart can coding really be?" The two notions would seem to be incoherent. M.C. Escher's Drawing Hands shows the act of an inanimate two-dimensional object creating itself. Most human language is either used to prevent disasters or to proliferate quotidian narratives. It is very difficult to use common language to describe something to older people that they have not already heard.
- It would seem that the only way to test the humanness of an AI is to create a community of AIs and observe their behavior. If they act like humans, why not assume they are reasonable code representations of human minds? Humans are social animals. Typically, we are only concerned with the smart ones. Practically, the AIs must vary in IQ values to observe a simulated social profile. Television and movie executives are already engaging with the possibility of AI-generated content. The next step might be an observable culture of AIs interacting in a simulation. The ability to simulate reality would be a necessary condition to verify that an artificial consciousness is even possible to create. Zatoichi (talk) 20:55, 21 April 2025 (UTC)
- It's interesting but not practically relevant yet, since current "AI" is not an attempt to create artificial humans (which is still a distant sci-fi dream), and some completely new design would be needed for that. It's not clear "how smart can coding really be", but current machine learning AI has been like a gigantic look-up table plus some noise and extrapolation on top, which is theoretically very limited. Language expression as the main focus also seems flawed, with language only a tool of the human mind and brain and not its center.
- Simulation and judging similarity to humans leads to questions of what to measure, how to judge it. The Turing test is flawed because the measure is poor and once made the target, deception of the audience, in/through the system design, is key to the system being judged "intelligent". Social behavior in a virtual gathering could be interesting to focus on instead, but social behavior is also only part of human intelligence. --ApooftGnegiol (talk) 22:46, 21 April 2025 (UTC)
- The fact AI is 'black boxy' I think is both correct and irrelevant; the important bit is not understanding how the black box is made, only how you can manipulate it to get the best results from it [which, I accept, normally includes learning its key weaknesses]. My little experiment told me that it was considerably harder and slower for me to fulfil even a pretty low-stakes request than if I'd, say, contacted a real person to do the rough sketches.
- From said experiment, I would say the main difficulties lie with a) appropriateness and b) usability. Yes [as BobJ puts it], it's very easy to 'type a few things and click a button and this auto-generated junk pops up', but it's actually rather hard to get anything particular - I spent [for example] a decent 45 mins putting in a ream of prompts to describe my storyboard's main character and then doing enough tests to ensure it had some image uniformity going [and even then it wasn't that great] - I couldn't simply [purely off the top of my head] tell the AI 'Halle Berry, with a buzzcut, 25kg fatter, dressed like an outdoorsy butch' even after I showed the AI photos. And even then the AI kept on having issues; it had a strange obsession with putting strawberries in food/drink, all beds were Japanese futons, it seemed to have some odd views [jeans as denim chaps, anyone?], an inability to understand rooms didn't have to be greige and, even after repeated prompts not to do it, a ~20% chance they'd give Halle a boob job and a tight crop-top.
- All these fails pointed to a) the AI's inability to get what I actually wanted, b) their seriously wonky reference pools [I think this AI was fed too much sub-porn anime, Zoopla pictures and propaganda from the Strawberry Growers' Association] and c) a lack of contextual understanding, aka 'common sense'. This being the 'glue on pizza' kind of fail; that even after countless billions of development and decades of producing reams of 'commonsense knowledge' datasets, AIs still don't get that rocks are inedible, that cats have four legs [but many things that have four legs are not cats], that a 2010 manual for a Ford Focus might not be applicable to a 2020 Vauxhall Astra, or that you shouldn't respond to someone saying 'I am anorexic' by giving diet tips. Aka pearls of wisdom that the vast majority of 8-year-olds would get right.
- This tells me that anything 'creative' or requiring actual 'thought' is going to be well beyond any AIs for quite some time - similar to how after all that development, we still have not developed a robot yet with better motor skills than a 4 year-old. That the best we can realistically hope for is a generation of very single/limited-use AIs ['helpline answerer', 'archival assistant', 'personal assistant' etc] in which the interactions are relatively predictable/limited, the information it's expected to master is curated/closed garden and perhaps most importantly, don't cost the earth to either make or run. Aka not much more than rather spiffy, useful computer programmes. Tools. To serve humans. KarmaPolice (talk) 00:12, 22 April 2025 (UTC)
- Back in the day when modern calculators were still novel and there was still a debate about whether they should be allowed in classrooms or for tests, math teachers would warn students that they would forget how to calculate if they relied on calculators too much. I think that's largely true, and could easily be applied to AI. Rely on AI too much and one will forget how to think/research/do art/write coherent text. Bongolian (talk) 05:48, 22 April 2025 (UTC)
- So, I'm pretty opinionated on this subject as a writer who hangs around in creative communities with other writers and artists. I think a debate on whether AI art is "real art" is an exercise in futility. I personally don't think so, but that's unimportant. There is a much more salient debate on whether AI art (or writing) is good, and the answer is a resounding "No."
- Art is inherently a human endeavor. It is one of the things that makes humans unique, our capacity to channel our feelings into creative works like art and writing. Good art fundamentally has to have thought and emotion put into it. It has to have a human element. One of the things artists or writers notice with AI is that the works it outputs are devoid of "soul". An AI does not think about the image or text it generates, it just generates it based on a prompt. Maybe the human put thought into the prompt, but there's a degree of disconnection there. AI is not intelligent. It cannot think, as much as techbros want to humanize LLMs and delude themselves about the "singularity" or some such. This means, to me, that AI cannot create good art, as a rule. Even if it looks good, even if it reads good, it's not good because there is no human element in it.
- This is also why so many creatives dislike AI art and ban AI generated content from their communities. When you (and everyone around you) put countless hours of thought into your work, when you channel your emotions and feelings into your craft, when you make art, the idea that someone putting prompts into an LLM is as much of an artist as you are is insulting. --Ozzyboo (talk) 15:59, 22 April 2025 (UTC)
- The same was said against photography; that it was an insult to 'real artists' who spend hours with oils when all the photographer does is turn up with a device and press a button. Except we know it's a lot more than that; the photographer needs to master the skills of operating that device, have experience of how to get the best shot and, lastly, know what story they wish to tell with that photograph.
- Similar can be said of image-generating AI tools. I had to learn how to utilise those tools, get to appreciate their limitations and obviously have an understanding of what I wished to see represented through them. I am not arguing that 'shove a few prompts in and press a button' AI art 'is art', any more than anyone would argue a CCTV feed was 'art'. What I am arguing is that it is possible for AI-assisted creative works to be seen as an art form - because the AI is merely the facilitating tool for the human brain directing it. For as you point out, the AI has zero emotion or imagination. The AI may not ponder over prompts, but the human operator sure does. Or at least, the human operator who is genuinely attempting to be creative with it.
- But let's cut to the chase here; a lot of creatives hate AI because it's a huge paradigm-breaker. That a lot of creatives make some living out of producing variants of hackwork - the cartoonist making company logos, the musician making little ditties for adverts, writers producing vapid blog content to order and so on. And I expect AI will be able to do most of these things to an 'acceptable level' quite soon. So it's not 'artistic', who fucking cares? It's an advert for haemorrhoid cream. Again, similar was said about previous developments - photography didn't kill painting, but it sure as hell wiped out a lot of second/third-rate painters who made a living doing portraiture of the provincial middle class. Other inventions were creatively disruptive; the record player led to the loss of a lot of work for jobbing musicians as the demand for live music tanked and television basically killed music-hall. KarmaPolice (talk) 16:59, 22 April 2025 (UTC)
Living people who change their names or pronouns[edit]
In the page on Scott Alexander, David Gerard mentioned a gloriously strange blogger who used to go by Ozy Frantz but has changed their last name and maybe pronouns since. Are there any policies for how to write about someone like that? Figuring out whether they want to be identified with their old name sounds like work, and I try to respect subjects of RW articles as people even if I think their ideas or actions are crap (so I focus on their handle if they use a handle and their meatspace name if they use a meatspace name, and refer to them by their preferred pronouns). The few hundred Bay Area LessWrongers (and colonies in other US cities) produce a lot of writers who post very fast. Polydamas (talk) 05:44, 24 April 2025 (UTC)
- This section on a related issue may give you some general guidance Transphobia#Misgendering and deadnaming. When in doubt, use the person's current preferred name instead of a pronoun. Also, they/them should be acceptable. They/them has a long history of usage to refer to a single person of unspecified gender (dating back to 1450 as per the OED).[2] Bongolian (talk) 06:05, 24 April 2025 (UTC)
- One should always use a person's preferred language, if possible. The above precautions seem adequate in principle. But anyone can change their mind about the language they prefer. With regard to any social practice thought to be important, it is the responsibility of those who know to politely inform those who do not know. Zatoichi (talk) 17:07, 24 April 2025 (UTC)
- From what the few trans professionals I've spoken with / read suggest, (Surname) and 'they' is a reasonable generic position for pages etc (with a line in the first paragraph saying 'known as Hannah MadeupName before 2018').
- The former is because they've (usually) not disowned their earlier work/achievements, and if you're writing you would need to explain why the pronouns switched halfway through the piece. Which draws extra attention to their transition - like all 'identity' things, some want it at the forefront while others do not want it to become the thing which defines them. KarmaPolice (talk) 18:29, 24 April 2025 (UTC)
- It seems like their early Internet presence in 2011 and 2012 was as a feminist blogger, and their current Internet presence is as an Effective Altruist on Substack, and the change of last name and coming out as trans was sometime after they switched focus, but I don't understand exactly what happened when (and I don't want to dig into all the details of their personal life).Polydamas (talk) 03:26, 25 April 2025 (UTC)
- Ozy is a Rationalist subculture member and that's the context in which they interacted with Scott Alexander - David Gerard (talk) 22:40, 25 April 2025 (UTC)
- It seems like Ozy has been all about EA and LessWrong since they opened their Wordpress blog in late 2014 and got a job with an EA organization but how they got there from feminist blogging must be some story.Polydamas (talk) 00:55, 26 April 2025 (UTC)
- I got suspicious and did a quick search of their blog archive and wow, everyone in this world really is dating or working for everyone else.Polydamas (talk) 23:01, 27 April 2025 (UTC)
503 error yesterday[edit]
There was a 503 error yesterday. DDOS attack again? Koafox (talk) 11:53, 26 April 2025 (UTC)
- No. Just common or garden technical problems. Just because they're out to get you, doesn't mean you're not paranoid. Spud (talk) 12:33, 26 April 2025 (UTC)
- It's just that we've had DDOS attacks before, so I wanted to know if it was one or something else. Koafox (talk) 13:12, 26 April 2025 (UTC)
- I know we have. And given what's been happening recently, it's probably what most of us assumed. It just turned out not to be the case this time. Spud (talk) 15:04, 26 April 2025 (UTC)
- David Gerard explained what happened at RationalWiki:Technical_support#Bots_and_503s_and_etc. Christopher (talk) 15:50, 26 April 2025 (UTC)
Wikipedia's WMF receives a letter from a Trump-appointed acting DC attorney[edit]
For more info, see the discussion of Wikipedia's Village pump 2600:1700:103A:D800:C65:7A94:1451:5B6 (talk) 12:09, 26 April 2025 (UTC)
Spud is remembering his sister[edit]
Long term users would have known this was coming. Wednesday 30th April 2025 would have been my younger sister's 49th birthday if she hadn't died suddenly and unexpectedly at the age of 42. Since I will always remember how much she enjoyed our Saturday treats when we were children, I will be marking the occasion by having a packet of crisps and a chocolate bar, rather than raising a glass in her memory. If you'd like to do the same, that would be nice. I will also remind you Americans that my sister was lucky enough to visit New England and absolutely loved the hot food and ice cream from Dairy Queen. Looking at their current menu, I'd say my sister would have an Original Cheeseburger Meal Deal with a strawberry sundae. You can mark her birthday by having the same, if you like. Anyway, it would be great if you could do something, anything, to celebrate the all too short life of my sister, who was also a daughter, a friend, a wife and a mother to other people. Someone who loved Back to the Future, the music and fashion of the 1950s, Asterix, stop-motion animation, old American sit-coms of the 1960s (The Munsters, The Addams Family, The Beverly Hillbillies, I Dream of Jeannie, Mr. Ed) and new British comedy of the 1990s. Someone who loved life. Spud (talk) 12:47, 26 April 2025 (UTC)
On TV show rumors[edit]
Saw a post online that there might be a new season of "Steven Universe" this summer. If true, it will be added to the growing list of TV shows with new seasons this summer (also including Wednesday, Helluva Boss, Hazbin Hotel, and Phineas and Ferb). Can anyone verify this? Koafox (talk) 19:55, 28 April 2025 (UTC)