Borked links
This might be a dumb question, but is there any query you can run (or an already existing 'special' page I can access) that would show broken links? (Not red links, though it's fine if you have to weed through those.) I've been to two pages where references are fully out of date. I'm guessing it's not really possible, as often there are redirect pages to "this page cannot be found", which would mean, ironically, that the reference link is going *somewhere*, just not where we want it to go.
I'm hoping that made sense... it's way too early for me to be up, but if you can't sleep - you wiki!
Are you talking about internal links to redirect pages? Double redirects? I'm a little confused.
Oh, I understand. Yeah, a bot can do that. The best way to do it would be to have the bot add a template like (broken link) next to a broken external link, which would link to and add the page to a category called "Pages with broken external links."
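A minimal sketch of the tagging step itself (the URL and wikitext are placeholders; which bot framework does the saving is a separate question):

def tag_broken_link(wikitext, dead_url):
    # Append {{broken}} right after the dead URL; the template is what files
    # the page under "Pages with broken external links".
    return wikitext.replace(dead_url, dead_url + " {{broken}}")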
that would be a great thing. Then we could remove the links or try to find other cites.
How does it know that they're broken?
import urllib2

try:
    urllib2.urlopen(url)  # 'url' is the external link being checked
except urllib2.HTTPError, err:
    if err.code == 404 or err.code == 400:
        pass  # broken link: tag it with {{broken}} here
I hope you know Python.
Surely there is some way to obtain the errorlevel in Python without having to catch an exception?
Probably, but would it be faster or simpler?
Simpler, certainly. Faster, I understand that unnecessary use of exceptions in C++ is inefficient, though I am unsure whether that applies to Python.
What's wrong with using an exception?
Python uses exceptions for error handling, and I think it's a more elegant way than having to check the return value against some arbitrary value that's defined as error and may be -1, 0, Null etc. depending on what the function is.
I meant that Blue should try to find some function in Python that just returned 404 or whatever if the link was broken, without raising an exception.
http://docs.python.org/library/httplib.html#httplib.HTTPResponse.status
That's what the pywikipediabot uses.
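For example, a rough sketch of reading the status directly with httplib, no exception handling needed (host and path are just placeholders):

import httplib

conn = httplib.HTTPConnection("example.com")
conn.request("HEAD", "/some/page")
status = conn.getresponse().status  # e.g. 200, 404, 400
conn.close()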
Basically the Python URL module raises HTTP error codes, and the ones we're looking for are 404 (Not Found) and 400 (Bad Request).
But it picked up a redirect as well. Is that covered under 400?
Perhaps. I'll have to investigate further. I'll hold off on running the bot until we're absolutely sure it works, because it will go through every single mainspace article. And then we can use it for fun, CP and recipe. So we should be sure.
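For what it's worth, a redirect normally reports a 3xx status rather than 400, so the check would probably need to treat it as its own case. A small sketch of that classification, under that assumption:

def classify(status):
    # 3xx means the server redirected us, which is a separate case from a
    # genuinely dead link (400 Bad Request / 404 Not Found).
    if status in (301, 302, 303, 307):
        return "redirect"
    if status in (400, 404):
        return "broken"
    return "ok"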
Some kind of hidden suppression template may be helpful, in case of links that aren't broken but get picked up anyway.
As in suppress {{broken}} itself? Not sure what you mean.
I mean you should be able to replace a false positive with '{{unbroken}}' or something, and that would prevent it from being caught in the bot's next round.
For instance if you wanted to say "The claim was first made here but has since been removed" or similar without having to remove the template again when you next ran the bot.
I see, yes, that would be good to have. It's analogous to {{nostub}}.
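A minimal sketch of how the bot could honour such a marker (the {{unbroken}} name and the helper are hypothetical):

import re

def should_tag(wikitext, url):
    # Skip the URL if it is already followed by {{unbroken}}, so a known
    # false positive is not re-tagged on the next run.
    marker = re.escape(url) + r"\s*\{\{unbroken\}\}"
    return re.search(marker, wikitext) is None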