If you want to run a bot on the English Wikipedia, you must first get it approved. To do so, follow the instructions below to add a request. If you are not familiar with programming it may be a good idea to ask someone else to run a bot for you, rather than running your own.
Instructions for bot operators
Instructions for approvals group members
Archives | Old Format
Current requests for approval
GTBot
Operator: GeoffreyT2000 (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 23:34, Monday, June 20, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): AutoWikiBrowser
Source code available: AWB
Function overview: Create "(month)" redirects
Links to relevant discussions (where appropriate): Wikipedia:AutoWikiBrowser/Tasks#Redirects from unnecessary disambiguations
Edit period(s): periodic
Estimated number of pages affected: 615 on the first run; 1 afterward for each new page added to Category:Months in the 1900s
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): No
Function details: This bot will create a redirect tagged with {{R from unnecessary disambiguation}} from "Month Year (month)" to "Month Year" for each article "Month Year" such as January 1901 in Category:Months in the 1900s if the redirect does not already exist.
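For illustration, the logic amounts to something like the following Python sketch (the task itself runs in AutoWikiBrowser; the script, helper names and exact redirect wikitext below are assumptions, not the actual AWB settings):

```python
# Illustrative sketch only; the actual task is an AWB job, not this script.
import requests

API = "https://en.wikipedia.org/w/api.php"

def category_members(category):
    """Yield titles in a category, following API continuation."""
    params = {"action": "query", "list": "categorymembers", "cmtitle": category,
              "cmlimit": "max", "format": "json"}
    while True:
        data = requests.get(API, params=params).json()
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

for title in category_members("Category:Months in the 1900s"):
    redirect_title = "%s (month)" % title      # e.g. "January 1901 (month)"
    redirect_text = "#REDIRECT [[%s]]\n\n{{R from unnecessary disambiguation}}" % title
    # The real task would first check that redirect_title does not already exist,
    # then create it; both steps are omitted here.
    print(redirect_title, "->", title)
```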
Discussion
What's the point? I can at least make a case for unnecessary disambiguation redirects to years, because it's not clear whether 1123 should be an article about a number or a year, unlike 1123 (year). But that doesn't apply here, since no one is going to question the primary topic of January 1923. I don't see much of a supporting argument in the linked discussion either. — Earwig talk 00:53, 21 June 2016 (UTC)
- Looking at the links to {{R from unnecessary disambiguation}}, some appear to be old moves that were never deleted, for some reason or another (e.g. ... onyt agoraf y drws ... (Puw)). It may be worth compiling a list of redirects that overly disambiguate and don't have a disambiguation page (e.g. the previous example) and seeing what, if anything, could or should be done about them. →Σσς. (Sigma) 01:45, 21 June 2016 (UTC)
GreenC bot 2
Operator: Green Cardamom (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 15:49, Friday, June 17, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): Nim and AWK
Source code available: WaybackMedic on GitHub
Function overview: User:Green Cardamom/WaybackMedic 2
Links to relevant discussions (where appropriate): Wikipedia:Bots/Requests for approval/GreenC bot - first revision, approved and successfully completed.
Edit period(s): one time run
Estimated number of pages affected: ~500,000
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): Yes
Function details: The bot is nearly the same as the first bot (User:Green Cardamom/WaybackMedic), with these differences:
- In fix #2, instead of only making the change when other changes are also being made, it now always makes the change. For example, it will convert all web.archive.org http links to secure https even if that is the only change. This modification amounts to commenting out the skindeep() function, so it doesn't require new code.
- The first bot was limited in scope to articles previously edited by Cyberbot II. This will look at all articles on the English Wikipedia containing Wayback Machine links, somewhere around 500k -- a more exact count will be available after the July 1 database dump. The bot will determine which articles to look at by regex'ing a Wikipedia database dump prior to its running.
Most of the edits will be URL formatting fix #2. Fix #4 will impact somewhere around 5% of the links (based on stats from the first run of WaybackMedic). The rest of the fixes should be minimal, 1% or less.
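As a rough illustration, fix #2 on its own amounts to a substitution like the one below (a Python sketch; the bot itself is written in Nim and AWK and handles far more cases):

```python
import re

# Match only the scheme of the outer Wayback Machine URL; the original URL
# embedded later in the path does not match and is left untouched.
WAYBACK_HTTP = re.compile(r"http://(web\.archive\.org/)")

def fix2(wikitext):
    return WAYBACK_HTTP.sub(r"https://\1", wikitext)

print(fix2("http://web.archive.org/web/20110101000000/http://example.com/page"))
# -> https://web.archive.org/web/20110101000000/http://example.com/page
```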
Discussion
- I assume the difference in #2 is just how you're pulling a list of articles, not any coding change to the bot. Is this using the exact same code as the last bot except for commenting out the skindeep bit? Did the issues at the previous trial (bugs relating to alternative archives) pose no problem in the full run? If yes to both, this seems like something that could be speedily approved to run in 25,000 article batches with a 48-72 hour hold between them. If I'm understanding correctly, the only change is the removal of a simple function, and there seems to be no room for new bugs to have been introduced. ~ RobTalk 16:37, 17 June 2016 (UTC)
- Essentially yes. Before, if it found a URL needing fix #4 and fix #2 in the same link, it did both fixes on that link (e.g. changed the snapshot date (#4) and added https (#2)). If, however, it found only a fix #2 in a link, it ignored it as being too "skin deep", i.e. just a URL format change. So now the bot will fix those skin-deep cases. There is no change to the code, essentially, other than that it no longer ignores the "skin deep" cases (fix #2 only), and it will run against all articles with Wayback links, not just the subset of them edited by Cyberbot II. The edits themselves will be the same as before, so the code is not changed. There were a couple of minor issues that came up during the run that were fixed in the code and in the Wikipedia articles. I won't run the bot until after July 1, when the next database dump becomes available, since that is where the article list will be pulled from. -- GreenC 17:21, 17 June 2016 (UTC)
- @Green Cardamom: Sorry, I phrased that ambiguously. By #2, I meant the second bullet point above, not fix #2. Nothing in the actual code of this task changed to widen the scope from articles edited by a previous bot to all articles, right? It's just in the manner in which you're pulling articles from the database dump? ~ RobTalk 19:34, 17 June 2016 (UTC)
- Note Community feedback solicited on WP:VPR due to large run size. — xaosflux Talk 01:34, 18 June 2016 (UTC)
DYKReviewBot
Operator: Intelligentsium (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 12:20, Friday, June 17, 2016 (UTC)
Automatic, Supervised, or Manual: automatic
Programming language(s): Python
Source code available: Available
Function overview: To aid in new WP:DYK nominations by checking for basic criteria such as sufficient length, newness, and citations.
Links to relevant discussions (where appropriate): Wikipedia talk:Did you know#RFC: A bot to review objective criteria
Edit period(s): Fixed intervals (~once per hour)
Estimated number of pages affected: Subpages of Template:Did you know nominations and author talk pages
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): No
Function details: The DYK nominations page is perennially backlogged. Nominations typically take several days to a week to be reviewed. This bot will ease the backlog by checking basic objective criteria immediately after nomination so the author is made aware of those issues immediately.
Specific criteria which will be checked are:
- Readable prose
- Article newness or recent 5x expansion
- Citations in every paragraph
- No maintenance templates
- Link to Earwig's copyvio report
- Hook is <200 chars
- Whether the article is a BLP
If there are issues, the bot will leave a note on the nomination page and on the nominator's talk page.
This bot is intended to supplement, not substitute for human review.
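Two of the objective checks, sketched against the MediaWiki API in Python (function names and thresholds here are illustrative; the bot's actual source is linked below):

```python
import datetime
import requests

API = "https://en.wikipedia.org/w/api.php"

def creation_time(title):
    """Timestamp of the article's first revision."""
    params = {"action": "query", "prop": "revisions", "titles": title,
              "rvdir": "newer", "rvlimit": 1, "rvprop": "timestamp",
              "format": "json", "formatversion": 2}
    page = requests.get(API, params=params).json()["query"]["pages"][0]
    return datetime.datetime.strptime(page["revisions"][0]["timestamp"],
                                      "%Y-%m-%dT%H:%M:%SZ")

def new_enough(title, days=7):
    """DYK newness check (ignores the 5x-expansion alternative)."""
    age = datetime.datetime.utcnow() - creation_time(title)
    return age <= datetime.timedelta(days=days)

def hook_short_enough(hook, limit=200):
    """Hook length check; the real bot measures readable prose, not raw wikitext."""
    return len(hook) < limit
```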
Discussion
Administrator note: Flagged "confirmed" as known alt account of User:Intelligentsium. — xaosflux Talk 13:31, 17 June 2016 (UTC)
- How will the copyvio check work? Will above a certain percentage receive a "possible copyright violation" note? Since quotes from a source are typical in good articles (which can come to DYK), there's going to be a high false positive rate there. What are your thoughts on leaving a note even when all criteria it checks for are met so reviewers know which clear-cut objective criteria (readable prose, hook length, and newness, probably) they don't need to check? ~ RobTalk 13:43, 17 June 2016 (UTC)
- Hi. It will link to Earwig's page with a note about the percentage. The relevant message makes it clear there is low confidence in the automated copyvio detection, and reviewers should still manually ensure there is no violation if the bot reports no violation (or vice versa). I agree that a standard notice is a good idea - the bot will edit any DYK page that has not already been reviewed with these comments. Intelligentsium 13:54, 17 June 2016 (UTC)
- Alright, good responses. Please check directly with Earwig that this use complies with our license to use search results on his tool. There was some hubbub about APIs and licenses recently, and I know fully automated tools had some issues. Other than that, this is an obviously helpful bot. ~ RobTalk 14:56, 17 June 2016 (UTC)
- Pinging @The Earwig: to keep all discussion in one place. Is automatically linking to the results page for your copyvio tool in compliance with the Google TOS? Intelligentsium 15:10, 17 June 2016 (UTC)
- Yes, that's fine. — Earwig talk 18:14, 17 June 2016 (UTC)
- The link isn't the potential issue. It's the actual running of the tool to get a percentage, which you said you'd be placing in the review note. Certain search sites require that their API is only used by an actual person and that the search results are displayed in a search-like experience (i.e. not just summarizing a percentage). I assume Earwig saw that bit too when looking this over, though, so you should be good on that front. ~ RobTalk 19:37, 17 June 2016 (UTC)
FYI I'm going to run a short test in my userspace to ensure the code to save pages is working correctly. Intelligentsium 17:45, 17 June 2016 (UTC)
- Here is an example run, which you can see below. Any feedback is welcome.
- User:Intelligentsium/Marine mammal
- User:Intelligentsium/Gaëlle Ghesquière (this revealed a bug which has since been addressed)
- User:Intelligentsium/Michelle Tisseyre
- User:Intelligentsium/César Camacho Quiroz
- User:Intelligentsium/Catch Me If You Can (Girls' Generation song)
- User:Intelligentsium/Pennsylvania Shell ethane cracker plant
- The source code is also posted here. I'm not a professional programmer and much of this was written yesterday so please excuse any sloppiness. Intelligentsium 19:43, 17 June 2016 (UTC)
- For full disclosure there are a few known issues
- Unable to handle multi-article nominations. I'm not sure how best to implement that as sometimes single articles have commas, sometimes multinoms are made under only one article, and sometimes the link is a redirect.
- Maintenance template grepping is a hack because I was lazy - it looks for dated templates, as content templates usually are not dated (this does introduce false positives, for example {{use mdy dates}})
- The char count is not exactly the same as Shubinator's tool, as his tool parses the HTML while mine uses wikitext. Let me know if there is a significant (>5%) discrepancy
- Sometimes the paragraph division is off, possibly because a single return in the editor doesn't break the paragraph in display.
- I mostly ignore exceptions since there are many, many ways a nomination can be malformed
- Intelligentsium 19:57, 17 June 2016 (UTC)
- You wrote in the discussion that reviewers need to manually use Shubinator's tool and Earwig's tool to perform these standard checks. These issues could be pointed out easily by a bot for nominators to work on, rather than having to wait several days/weeks until a human reviewer gets around to raising them. What if pasting the output of Shubinator's tool and Earwig's tool was made standard in DYK submissions? Not to say that I have any issues—I fully support this bot—I'm just a bit surprised that you actually went to the trouble of this BRFA before what I saw as the most obvious solution.
- I also recommend mwparserfromhell to parse wikitext instead of those nasty regular expressions. You may also find that using ceterach on Python 3 makes handling Unicode much smoother. →Σσς. (Sigma) 03:39, 18 June 2016 (UTC)
- Seconded that mwparserfromhell is a wonderful library to use. I, too, once used regex to parse wikitext, but one of the many problems with doing so is that the expressions constantly have to be updated as editors find new and exciting ways to write malformed wikitext. Regex-based wikitext parsing is really technical debt, and once you switch over, it'll be so much easier. Enterprisey (talk!) (formerly APerson) 03:44, 18 June 2016 (UTC)
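For readers unfamiliar with the library, a minimal sketch of the suggested approach (illustrative only, not the bot's code); walking the parse tree also avoids the dated-template workaround mentioned above:

```python
import mwparserfromhell

MAINTENANCE = {"refimprove", "unreferenced", "orphan"}   # illustrative subset

def maintenance_templates(wikitext):
    """Return maintenance templates found by walking the parse tree,
    instead of grepping the raw wikitext with regular expressions."""
    found = []
    for tpl in mwparserfromhell.parse(wikitext).filter_templates():
        if str(tpl.name).strip().lower() in MAINTENANCE:
            found.append(str(tpl.name).strip())
    return found

print(maintenance_templates("{{Use mdy dates|date=June 2016}}{{Refimprove|date=June 2016}}"))
# -> ['Refimprove']   ({{Use mdy dates}} is dated but is not a maintenance tag)
```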
- Thanks, I'll look into mwparserfromhell. @Sigma: I'm not sure I understand your comment. Using Shubinator's and Earwig's tools is standard review practice, but because there are hundreds of submissions and many of the users who participate at DYK are new users, the reviewer ends up having to perform the check. Intelligentsium 04:04, 18 June 2016 (UTC)
- Here are some updated results
- User:Intelligentsium/Cortinarius rubellus
- User:Intelligentsium/Rebel Girl (song)
- User:Intelligentsium/Petite messe solennelle
- User:Intelligentsium/Ríe y Llora
- User:Intelligentsium/Tommy_Best
- User:Intelligentsium/Dean_Fausett
Intelligentsium 00:59, 19 June 2016 (UTC)
- Source code updated based on various feature requests; available here. Intelligentsium 15:58, 20 June 2016 (UTC)
Antigng-bot
Operator: Antigng (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 23:51, Monday, May 30, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): C
Source code available: Yes
Function overview: Add {{R_from_ambiguous_page}} or {{R from incomplete disambiguation}} to redirects that qualify for them.
Links to relevant discussions (where appropriate):
Edit period(s): Daily
Estimated number of pages affected: [1]; each link before the number "1" or "2" (which indicates there is no redirect template on the page) is subject to this task
Exclusion compliant (Yes/No): No
Already has a bot flag (Yes/No): No
Function details: The bot will go through [/w/api.php?action=query&list=allredirects]. For each redirect returned by the query, it will check if the redirect target contains "(disambiguation)" while the title doesn't have one and the redirect page doesn't have any redirect templates. If yes, it will add {{R_from_ambiguous_page}} or {{R from incomplete disambiguation}} to the page, depending on whether the page title has "()".
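A Python sketch of that decision (the bot itself is written in C; the function name is illustrative, and the check for pre-existing redirect templates is omitted):

```python
def redirect_template_for(redirect_title, target_title):
    """Pick the template described above, or None if the redirect is out of scope."""
    if "(disambiguation)" not in target_title:
        return None                                    # target is not a "(disambiguation)" title
    if "(disambiguation)" in redirect_title:
        return None                                    # redirect already carries the suffix
    if redirect_title.endswith(")") and "(" in redirect_title:
        return "{{R from incomplete disambiguation}}"  # title has its own "(...)" qualifier
    return "{{R from ambiguous page}}"

print(redirect_template_for("Mercury", "Mercury (disambiguation)"))
# -> {{R from ambiguous page}}
print(redirect_template_for("Thriller (album)", "Thriller (disambiguation)"))
# -> {{R from incomplete disambiguation}}
```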
Discussion
- This will be a bit more detailed to think about a trial. Why will you override exclusion requests? Do you plan on editing in every namespace? — xaosflux Talk 00:10, 31 May 2016 (UTC)
- Please manually make 10 of these edits with your own account, and list your diff's below. — xaosflux Talk 00:10, 31 May 2016 (UTC)
- Looking at mw:API:Client code, I see that there aren't any libraries listed for C. Are you using a particular library, or making requests manually? Would it be possible for you to link to the source code, say on GitHub or Bitbucket? Enterprisey (talk!) (formerly APerson) 04:22, 18 June 2016 (UTC)
- Never mind, I found it myself. Here it is, for the curious. Enterprisey (talk!) (formerly APerson) 04:39, 18 June 2016 (UTC)
Checking for the suffix "(disambiguation)" seems like it would miss some cases; why not check for the __DISAMBIG__ page property? Legoktm (talk) 04:25, 18 June 2016 (UTC)
- Because I need to distinguish {{r_from_ambiguous_page}} from {{r from incomplete disambiguation}}.--Antigng (talk) 11:43, 18 June 2016 (UTC)
Assuming zh:User:Antigng-bot/network and zh:User:Antigng-bot/redirect are real bot code, I'm sorry, but I cannot endorse this running here. Leaving matters of style and maintainability aside, you have raw HTTP/1.0 via sockets, a hardcoded server IP, and 1000 threads making concurrent API requests. You are clearly a capable enough programmer to understand why these are very dangerous things. Let me be clear: C is surely a non-traditional language for bots, but there's nothing wrong with that per se. I love writing low-level code; it's fun and an interesting learning experience, too. The concern here is beyond that. Raw HTTP/1.0 means your bot breaks when the API switches to HTTPS-only, which has already happened, so I don't even think your bot can work anyway. A hardcoded server IP means you can't take advantage of the WMF's load balancing, and your bot breaks when servers change. Massively concurrent API requesting abuses expensive resources, and goes way against the bot policy; use Wikimedia Labs replica tables or dumps if possible, else deal with the fact that the bot can't run as fast as you'd like. That's okay—this isn't a high-priority task. You can still take advantage of modern web standards while sticking with C. Instead of raw socket code, why not use libcurl? Hell, even if you're insistent on not using any external libraries, getaddrinfo fixes at least one of my concerns, though it'd be unwise to try doing SSL yourself. — Earwig talk 05:03, 18 June 2016 (UTC)
- Well, I'm not going to use the labs any more. I will set up a reverse proxy on my laptop and proxy all raw HTTP requests to https://en.wikipedia.org/. --Antigng (talk) 11:41, 18 June 2016 (UTC)
- Note also phab:T137707...:Jay8g [V•T•E] 03:56, 19 June 2016 (UTC)
APersonBot 9
Operator: Enterprisey (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 03:29, Monday, May 30, 2016 (UTC)
Automatic, Supervised, or Manual: supervised
Programming language(s): Python
Source code available: https://github.com/APerson241/APersonBot/blob/master/update-participants/update-participants.py
Function overview: Moves inactive project participants to a new "Inactive participants" section.
Links to relevant discussions (where appropriate): WP:BOTREQ#Idea: WikiProject stale participant member remover bot
Edit period(s): Weekly, perhaps
Estimated number of pages affected: 500
Exclusion compliant (Yes/No): No
Already has a bot flag (Yes/No): Yes
Function details: Checks each listed participant for activity (i.e. any edits made in the last 3 months), then moves the inactive participants into a section titled "Inactive participants". The order of the participants is preserved.
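A rough sketch of the activity check (the real script is linked above; the API parameters are standard MediaWiki ones, the helper names are illustrative):

```python
import datetime
import requests

API = "https://en.wikipedia.org/w/api.php"

def last_edit(username):
    """Timestamp of the user's most recent edit, or None if they have none."""
    params = {"action": "query", "list": "usercontribs", "ucuser": username,
              "uclimit": 1, "ucprop": "timestamp", "format": "json"}
    contribs = requests.get(API, params=params).json()["query"]["usercontribs"]
    if not contribs:
        return None
    return datetime.datetime.strptime(contribs[0]["timestamp"], "%Y-%m-%dT%H:%M:%SZ")

def is_inactive(username, months=3):
    last = last_edit(username)
    cutoff = datetime.timedelta(days=30 * months)
    return last is None or datetime.datetime.utcnow() - last > cutoff
```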
Discussion
- The linked discussion noted this may not be suitable for all projects, are you planning on having wikiprojects "opt in" to this? — xaosflux Talk 04:44, 30 May 2016 (UTC)
- I'm completely willing to make it opt-in, but I don't think it's a good idea. As The Transhumanist noted, I was looking at it as an out-of-date contact list. Contact lists are only as useful as they are accurate. In the scenario brought up in that comment (only 4 active editors remain in a 50-editor project) it's not likely that I'd be able to get consensus for opting in at the project talk page, and if, say, I wanted to find the 4 active editors to ask them about opting in, I would have done the bot's job for it already. Enterprisey (talk!) (formerly APerson) 15:07, 30 May 2016 (UTC)
- Pinging Jonesey95, who originally said it should be opt-in. Enterprisey (talk!) (formerly APerson) 15:10, 30 May 2016 (UTC)
- I'd like to see some buy-in from some actual projects before you would be able to just start editing every/any project page - please list this at WP:VPR. As far as trialing, are there any projects that will opt-in that the trial can address? — xaosflux Talk 23:03, 30 May 2016 (UTC)
- VPPR discussion started. Enterprisey (talk!) (formerly APerson) 03:26, 31 May 2016 (UTC)
- I'm not comfortable with the idea of automatically retiring people when they take a break. But not so bothered if it applies both ways, so when a person resumes editing they automatically get moved back to the active list. ϢereSpielChequers 23:30, 31 May 2016 (UTC)
- The bot would definitely move users under the inactive heading back to their original spots if it saw any activity. Enterprisey (talk!) (formerly APerson) 01:04, 1 June 2016 (UTC)
- When moving people between lists, is there a enwiki wide standard for ordering? (e.g. alpha, last edit, etc) that you will be following? — xaosflux Talk 03:27, 1 June 2016 (UTC)
- At the moment, I preserve as much of the original ordering as possible. For example, if editors A, B, C, and D are listed and B and D are inactive, then the list will look like this:
-
- A
- C
- ===Inactive participants===
- B
- D
-
- There is, of course, a problem if B becomes active again: how am I going to tell the bot that B has to go between A and C? I could tag each editor with their position on the original list when the bot makes its first edit, which would work in every case, even when the editors are ordered by, say, the date they joined the project. However, that introduces a lot of clutter. The bot could also try to figure out the ordering by inspection; if everyone seems to be in alphabetical order, then the bot should use alphabetical ordering. However, I think the best idea is to scan for the last revision before the bot edited to get an idea of the order everyone should be in (taking into account the situation where bot and human edits are interwoven) and use that for ordering. Enterprisey (talk!) (formerly APerson) 04:42, 1 June 2016 (UTC)
- I'm curious whether you've extracted an algorithm for this yet. You might be able to comment the line out instead of removing it entirely, and then uncomment it if they become active again. →Σσς. (Sigma) 07:52, 4 June 2016 (UTC)
- Great idea! I'll see if I can implement that. Enterprisey (talk!) (formerly APerson) 18:25, 5 June 2016 (UTC)
- {{BAGAssistanceNeeded}} Enterprisey (talk!) (formerly APerson) 21:45, 16 June 2016 (UTC)
- @Enterprisey: From the discussion above it looks like you were still working on some coding - has this completed? — xaosflux Talk 02:26, 17 June 2016 (UTC)
- Yes, it's mostly completed now. Enterprisey (talk!) (formerly APerson) 02:35, 17 June 2016 (UTC)
- {{OperatorAssistanceNeeded}} Question: not all projects are just a list of "names". Will you be preserving notes, signatures, etc.? E.g. look at these lists:
- Wikipedia:WikiProject_Firearms#Participants
- Wikipedia:WikiProject_Canadian_law#Participants
- Wikipedia:WikiProject_Atheism/Participants
- Wikipedia:WikiProject_Cryptozoology/Members
- Wikipedia:WikiProject_Medicine/Participants
I don't see a great way to ever set this bot loose on every wikiproject, and you don't want them to opt in; so will this be de facto limited to only projects that use a specific ordering and styling method you are expecting? — xaosflux Talk 04:39, 20 June 2016 (UTC)
- My bot code actually successfully parses all of those lists of participants; what I do is scan for standard username-looking things (strings of characters that aren't "User0" or whatever) and then move entire lines around (preserving notes and signatures in the process). I suppose I could ask at the WikiProject Council if any projects are interested. Then again, the entire point of this bot task is to update participant lists for inactivity, and by definition, inactive users aren't around to update their own participation in projects. Enterprisey (talk!) (formerly APerson) 04:45, 20 June 2016 (UTC)
- I'm curious how your bot would update the tables on Wikipedia:WikiProject_Medicine/Participants. Looking at wikitext_to_usernames(), I can't see how it would deal with all the table markup. →Σσς. (Sigma) 06:26, 20 June 2016 (UTC)
- I'm probably going to have to make it split by |- if it detects that participants are in a table. Enterprisey (talk!) (formerly APerson) 11:39, 20 June 2016 (UTC)
- @Enterprisey: To approve this in to trial, I'm going to need to see a list of projects you will trial with, you will need to notify the project talk of this bot request, and you should include projects with different layouts. Your bot page and edit summaries should detail the task, including how to opt out of it on a per-project basis. — xaosflux Talk 01:55, 21 June 2016 (UTC)
Josvebot 13
Operator: Josve05a (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 02:30, Monday, May 16, 2016 (UTC)
Automatic, Supervised, or Manual: automatic
Programming language(s): Java
Source code available: WPCleaner
Function overview: Will do almost the same work as Wikipedia:Bots/Requests for approval/Yobot 16, except for a few differences; see function details below. The bot will fix some of the WP:CHECKWIKI errors automatically.
Links to relevant discussions (where appropriate): Wikipedia:Bots/Requests for approval/Yobot 16
Edit period(s): Continuous
Estimated number of pages affected: ~1-500 pages per week
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): Yes
Function details: These are errors which can be fixed with WPCleaner's automatic bot edit mode. Yobot has already been approved to fix these kinds of errors, so there is precedent for allowing a bot to fix them. WPCleaner also marks the issues as fixed in the CheckWiki database.
Error | Description
---|---
1 | Template contains useless word "Template:"
2 | Tags with incorrect syntax
4 | HTML text style element &lt;a&gt;
6 | DEFAULTSORT with special letters[9]
9 | Categories more at one line
16 | Unicode control characters[10]
17 | Category double
20 | Symbol for dead
22 | Category with space
54 | Break in list
85 | Tags without content
88 | DEFAULTSORT with blank at first position
90 | DEFAULTSORT with lowercase letters
a.^ Josvebot is already approved to fix this error
Josvebot could also fix 91 and 524 without changing any of WPCleaner's settings, but it won't because of too many bad fixes. Josvebot is also already approved to fix error 37, but supervised.
Discussion
@Magioladitis: This looks like a good one for you to review :D — xaosflux Talk 11:37, 17 May 2016 (UTC)
Josve05a: me, Bgwhite and NicoV cooperate so that CHECKWIKI, AWB and WPCleaner will produce the same lists. This is not the case right now. I still believe AWB is better because it can deal with multiple errors at the same time and also do all the little stuff people say has to be done, but not as sole tasks. I would like to hear from Bgwhite too. -- Magioladitis (talk) 13:31, 17 May 2016 (UTC)
- If set properly, WPCleaner will fix multiple issues at the same time as well; plus all it takes is one click, and it will load all the errors and articles, while in AWB you have to generate each error manually one by one and then manually mark them as fixed. And since the errors are listed in the database, someone will fix these issues at some point, whether or not it is the only edit. (t) Josve05a (c) 13:57, 17 May 2016 (UTC)
- I use AWB to fix the errors. I then use WPCleaner to mark which ones were fixed. Best of both worlds, but it takes more time. AWB does fix a lot more errors than WPCleaner and is overall a better tool. Josvebot is already approved for several errors listed above.
- This should be viewed as a standard "fix checkwiki" bot request. Josvebot should be able to fix any CheckWiki error with either AWB or WPCleaner. Instead of approving to fix certain errors one at a time, just do them all. Both tools are proven and get the job done. Bgwhite (talk) 20:37, 17 May 2016 (UTC)
Josve05a I think it is more important that you keep reporting bugs and feature requests for WPCleaner. A bot that will solely use this tool is not a good idea at the moment. Maybe soon, when we have the list generation coordinated. I think you should be patient for a short while. -- Magioladitis (talk) 22:24, 1 June 2016 (UTC)
Bots in a trial period
BU RoBOT 20
Operator: BU Rob13 (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 16:04, Friday, June 10, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): AWB
Source code available: AWB
Function overview: Auto-assesses class of articles as requested by WikiProjects
Links to relevant discussions (where appropriate):
- Wikipedia:Village_pump_(proposals)/Archive_131#Auto-assessment
- Specific project discussions are linked at User:BU RoBOT/autoassess
Edit period(s): As requested by projects
Estimated number of pages affected: For most projects, a couple thousand at most, but it depends entirely on the project.
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): Yes
Function details: Auto-assesses the class within {{WikiProject Biography}} using class parameters in other WikiProject templates. Only auto-assesses to "standard" classes (stub, start, C, B, GA, FA, FL). Skips articles that have multiple "standard" classes. Will not auto-assess any other project templates. Similar task has been approved and successfully run in the past at Wikipedia:Bots/Requests for approval/BU RoBOT 12 and Wikipedia:Bots/Requests for approval/BU RoBOT 15. I will only run this task at the request of WikiProject members after they obtain consensus (or a lack of opposition after several days, since many projects aren't all that active). I originally planned to submit individual BRFAs for each project, but I've received four requests in a very short period of time, so doing so would flood the approval system.
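A sketch of the inheritance rule in Python (the task itself runs in AWB; the template-name matching below is simplified and illustrative):

```python
import mwparserfromhell

STANDARD_CLASSES = {"stub", "start", "c", "b", "ga", "fa", "fl"}

def inherited_class(talk_wikitext):
    """Return the class to copy into {{WikiProject Biography}}, or None if
    there is no usable class or the standard classes found conflict."""
    classes = set()
    for tpl in mwparserfromhell.parse(talk_wikitext).filter_templates():
        name = str(tpl.name).strip().lower()
        if not name.startswith("wikiproject") or name == "wikiproject biography":
            continue
        if tpl.has("class"):
            value = str(tpl.get("class").value).strip().lower()
            if value in STANDARD_CLASSES:
                classes.add(value)
    return classes.pop() if len(classes) == 1 else None  # skip on zero or multiple classes

print(inherited_class("{{WikiProject Biography}}{{WikiProject France|class=Start|importance=low}}"))
# -> start
```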
Discussion
- I wouldn't say this needs another request for approval as it is basically the same as BU RoBOT tasks 12 and 15, only with a different WikiProject (am I right)? Therefore, if those tasks worked correctly, I don't really think a separate BRFA is required each time. So, I recommend a speedy approval. Rcsprinter123 (drone) 16:18, 11 June 2016 (UTC)
- Having a general task for this does seem OK - provided it is endorsed by the wikiprojects and perhaps with a maximum limit? Maybe tagging runs up to 5000 pages? Any guidelines others would like to impose? — xaosflux Talk 16:36, 11 June 2016 (UTC)
- I'm completely ok with that. On a typical project, only around 1/3 of the unassessed pages are assessed by this task. For large projects, this is usually lower (WP:WikiProject Biography wound up being closer to 1/5). Very few projects have more than 15,000 unassessed pages, so a 5,000 edit limit per WikiProject is sensible and would either greatly reduce or possibly even eliminate the necessary BRFAs. If you want to formalize the guidelines I'm using myself, here they are:
- Before any assessing can occur, the relevant WikiProject must have a discussion on their talk page for at least five days.
- There must either be consensus for assessing at that discussion or no opposition over the five day period. (I'm a frequent closer of RfCs, TfDs, and CfDs, so I'm comfortable with double-checking consensus for this.)
- I will auto-assess only according to the rules at User:BU RoBOT/autoassess unless specifically requested by a WikiProject. If their request is simply adding or removing a class to auto-assess to, I think that's fine to do under this approval (i.e. don't assess to B-class, if the project in question has their own criteria they want to use), but I wouldn't add anything more complicated without seeking additional approval (i.e. only inherit classes from a specific set of WikiProjects or something like that).
- A WikiProject member must add the project to User:BU RoBOT/autoassess for me to start assessing. Clear consensus at the project without a specific action from a project member to opt-in isn't good enough. In other words, the project must determine amongst itself that they have consensus (although I'll double-check that before starting). I will be verifying that the person who lists a project didn't recently become a member, and if they did, I will verify they've worked in that topic area for a while.
- ~ RobTalk 17:22, 11 June 2016 (UTC)
- I'm completely ok with that. On a typical project, only around 1/3 of the unassessed pages are assessed by this task. For large projects, this is usually lower (WP:WikiProject Biography wound up being closer to 1/5). Very few projects have more than 15,000 unassessed pages, so a 5,000 edit limit per WikiProject is sensible and would either greatly reduce or possibly even eliminate the necessary BRFAs. If you want to formalize the guidelines I'm using myself, here they are:
- {{BAGAssistanceNeeded}} ~ RobTalk 22:52, 19 June 2016 (UTC)
Approved for trial (5000 edits or 30 days). OK to trial using your new sign-up process; the edit limit is per project. — xaosflux Talk 01:52, 21 June 2016 (UTC)
- @Xaosflux: Alright, so if I understand correctly, the trial is 30 days with a potentially unlimited number of edits, but capped at 5,000 per project. Is that correct? Should I mark this as trial complete at some point before the 30 days if there's a sufficiently large number of edits to judge the trial? ~ RobTalk 01:55, 21 June 2016 (UTC)
FastilyBot 10
Operator: Fastily (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 21:53, Wednesday, June 1, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): Java
Source code available: Will be linked to Bot's user page
Function overview: Reviving Wikipedia:Bots/Requests for approval/Fbot 5
Links to relevant discussions (where appropriate): Requested by Cloudbound
Edit period(s): Continuous - Weekly
Estimated number of pages affected: 1-2k
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): Yes
Function details: This is an uncontroversial maintenance/categorization task, intended to facilitate tracking of orphaned free files via {{Orphan image}}. Files tagged with {{Orphan image}} are categorized in Category:Wikipedia orphaned files, where other users can either de-orphan the files or move them to Commons. This task also performs a complementary function to Wikipedia:Bots/Requests for approval/FastilyBot 4. -FASTILY 22:46, 1 June 2016 (UTC)
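A sketch of the selection step only (the bot itself is written in Java; this uses standard API modules, and the ignore list discussed below is omitted):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {"action": "query", "generator": "categorymembers",
          "gcmtitle": "Category:All free media", "gcmtype": "file",
          "gcmlimit": "max", "prop": "fileusage", "fulimit": "max",
          "format": "json", "formatversion": 2}

while True:
    data = requests.get(API, params=params).json()
    for page in data.get("query", {}).get("pages", []):
        # No fileusage entries in any namespace -> candidate for {{Orphan image}}.
        if not page.get("fileusage"):
            print(page["title"])
    if "continue" not in data:
        break
    params.update(data["continue"])
```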
Discussion
- So the task is to look for unused free files which are not tagged with {{orphan image}} and add that template, right? Some questions:
- How do you define 'free'? Is the file automatically 'free' if it doesn't appear in Category:All non-free media? There are plenty of files which neither appear in Category:All non-free media nor in Category:All free media.
- How do you define 'unused'? Is a file unused as long as it isn't used in the main namespace, or must it be unused in all namespaces?
- Looks like a useful task. --Stefan2 (talk) 22:25, 1 June 2016 (UTC)
- Yes, that is correct.
- For simplicity, I'll only be using Category:All free media as a generator.
- Unused is defined as no file usage in the main namespace
- -FASTILY 22:37, 1 June 2016 (UTC)
- So e.g. userphotos for a userpage are unused? I'm not sure if that is desirable. --Stefan2 (talk) 23:09, 1 June 2016 (UTC)
- Fair point. To keep false positives low, I'll have the bot only flag files with zero fileusage links in any namespace -FASTILY 08:54, 2 June 2016 (UTC)
- Since Multichill created daily galleries of new files, maybe file use in Multichill's userspace shouldn't count? I'm not sure if there are other pages which should be excluded.
- I assume that you design task 4 and task 10 so that your bot isn't edit warring with itself. --Stefan2 (talk) 20:07, 2 June 2016 (UTC)
- I have started an ignore list for the task; feel free to add any other titles you can think of. The bot won't be edit-warring with itself -FASTILY 10:39, 6 June 2016 (UTC)
{{BAGAssistanceNeeded}} Since there are no other objections, could this please be approved for trial, thanks -FASTILY 23:35, 14 June 2016 (UTC)
Cewbot
Operator: Kanashimi (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 07:32, Thursday, June 9, 2016 (UTC)
Automatic, Supervised, or Manual: automatic
Programming language(s): JavaScript
Source code available: Yes
Function overview: When the link in an interlanguage link template already exists, convert the template to an internal link.
Links to relevant discussions (where appropriate):
Edit period(s): weekly
Estimated number of pages affected: 4K+ (When running on jawiki, the task reduced the targets from 4K to 1.6K.)
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): No
Function details: This task will check every page in Category:Interlanguage link template existing link. If the article linked by the interlanguage link template exists and the link is proper, the template is converted to an internal link. The task will generate an error report like this (Japanese). (I will translate the report into English.) I would also like to help with bot requests.
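A Python sketch of the conversion (the bot itself is written in JavaScript; the handling of {{Interlanguage link}}'s display and label parameters is simplified here):

```python
import mwparserfromhell

def convert_ill(wikitext, page_exists):
    """Replace {{ill}}-style templates whose local target now exists with a plain
    wikilink. `page_exists` is a callable checking existence on enwiki."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        if str(tpl.name).strip().lower() not in ("ill", "interlanguage link"):
            continue
        local_title = str(tpl.get("1").value).strip()   # first positional parameter
        if page_exists(local_title):
            code.replace(tpl, "[[%s]]" % local_title)
    return str(code)

print(convert_ill("He studied under {{ill|Jan Novak|cs|Jan Novák}}.", lambda t: True))
# -> He studied under [[Jan Novak]].
```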
Discussion
- Please make 10 edits manually with your own account and link to the diffs here, I want to make sure we understand the work that you will be doing. — xaosflux Talk 17:51, 9 June 2016 (UTC)
- Thanks for your attention. I have done some test edits: #, #, #, #, #, #, #, #, #, #, #; the source code is on GitHub. Please check them and tell me if there are any problems or questions. Thank you. --Kanashimi (talk) 01:57, 10 June 2016 (UTC)
Approved for trial (75 edits or 10 days). Please flag your edits as minor, and please link to this page in your edit summaries. — xaosflux Talk 02:16, 17 June 2016 (UTC)
Administrator note: Unblocked bot account; flagged confirmed for skipcaptcha. — xaosflux Talk 02:17, 17 June 2016 (UTC)
AGbot
Operator: Andrew Gray (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 22:49, Sunday, May 29, 2016 (UTC)
Automatic, Supervised, or Manual: Supervised for first runs, automatic if successful
Programming language(s): Python (for PWB), bash scripts (for content generation)
Source code available: Standard pywikibot, uploading locally assembled text files (code)
Function overview: Maintaining lists of possible citations for articles
Links to relevant discussions (where appropriate):
Edit period(s): Daily or weekly
Estimated number of pages affected: ~50
Exclusion compliant (Yes/No): no - will only be editing specified pages
Already has a bot flag (Yes/No): No
Function details: I am working on a local script which generates correctly-formatted citation templates based on known matches between external sources and Wikipedia articles, via Wikidata. The bot will upload index pages of these (a first-pass example is at User:Andrew Gray/odnb), showing the article and the possible citation(s). These can then be reviewed by editors for manual inclusion in articles when useful & appropriate, either as a new source or as a nicely-formatted replacement for an existing bare citation.
This can potentially cover a number of resources, but for the first stage I'm working on the Oxford Dictionary of National Biography (~42k enwiki articles matched to subjects via P1415, between 4-10k of which cite the ODNB in some way). Later stages will cover links to the older Dictionary of National Biography on Wikisource (currently ~9k links), and to the History of Parliament (currently ~7k links), both of which are frequently-used sources. In theory, the same system could be extended to a number of other high-quality resources if there is demand and there is suitable metadata on Wikidata.
This processing will all be done offline and the bot will simply have to upload the indexes. They will be stored in a suitable location (possibly userspace, possibly under Wikipedia:WikiProject Dictionary of National Biography), and the bot will not edit articles itself. These lists will be refreshed on a periodic basis - either daily or weekly. Andrew Gray (talk) 22:49, 29 May 2016 (UTC)
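A rough sketch of the offline generation step (the real pipeline is bash plus pywikibot; the citation template name and parameters shown are illustrative assumptions, not necessarily what the lists will use):

```python
import requests

def odnb_id(qid):
    """Fetch the ODNB ID (Wikidata property P1415) for an item."""
    data = requests.get("https://www.wikidata.org/w/api.php",
                        params={"action": "wbgetclaims", "entity": qid,
                                "property": "P1415", "format": "json"}).json()
    claims = data.get("claims", {}).get("P1415", [])
    return claims[0]["mainsnak"]["datavalue"]["value"] if claims else None

def index_line(article_title, odnb):
    # Hypothetical formatting of one line of the index page.
    return "* [[%s]] - {{cite ODNB|id=%s|title=%s}}" % (article_title, odnb, article_title)

# Example (Q7259 is Ada Lovelace on Wikidata; the lookup may return None if no P1415 claim exists):
print(index_line("Ada Lovelace", odnb_id("Q7259")))
```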
Discussion
Approved for trial (200 edits or 30 days). Userspace only. You can trial this; please start by making your lists in the bot's own userspace. — xaosflux Talk 22:59, 30 May 2016 (UTC)
rezabot 3
Operator: Yamaha5 (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 10:25, Friday, February 19, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): Python
Source code available: interwikidata.py
Function overview: I want to move interwikis to wikidata
Links to relevant discussions (where appropriate):
Edit period(s): daily
Estimated number of pages affected: unknown
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): No
Function details:
Discussion
Some years ago, with the old interwiki.py code, user:Rezabot had a bot flag at many local wikis (such as en.wikipedia) and was also a global bot. After Wikidata started, all interwiki bots were stopped. Now I want to run it with the new interwiki code (interwikidata.py). Yamaha5 (talk) 10:25, 19 February 2016 (UTC)
- I thought all interwikis had already been migrated to Wikidata; is there really a need for this? — Earwig talk 18:50, 20 February 2016 (UTC)
- Can you make a few edits with your user account, and show the diffs below to more fully explain what you are trying to do? — xaosflux Talk 16:23, 21 February 2016 (UTC)
- Couldn't the old interwiki migrate bot be reactivated, if that were the case, instead of having to go through this process? →Σσς. (Sigma) 02:16, 23 February 2016 (UTC)
- This request will soon expire from lack of participation, please review the questions above. — xaosflux Talk 02:31, 28 February 2016 (UTC)
Note: This bot appears to have edited since this BRFA was filed. Bots may not edit outside their own or their operator's userspace unless approved or approved for trial. AnomieBOT⚡ 05:19, 28 February 2016 (UTC)
- @Xaosflux, Σ, and The Earwig: There are many articles and pages which still have old interwiki links, like these or these or these, and the same in 250 other languages :). Also, newbies add old interwiki links to articles like these, which were cleaned two weeks ago!
- For the bot's edits, please check this
- Yamaha5 (talk) 20:03, 28 February 2016 (UTC)
- Please, someone take a look at this request! Yamaha5 (talk) 15:14, 4 March 2016 (UTC)
- {{BotTrialComplete}}
- Special:Contributions/rezabot
- Yamaha5 (talk) 15:15, 4 March 2016 (UTC)
- {{OperatorAssistanceNeeded}} See below: — xaosflux Talk 02:45, 6 March 2016 (UTC)
- This trial was never approved, and should not be running. — xaosflux Talk 02:45, 6 March 2016 (UTC)
- These edits appear to be doing harm, by removing what appear to be VALID links such as on Surface_weather_analysis. Please explain why these links SHOULD NOT be present, and "because they are interwiki links" is not an acceptable answer here. — xaosflux Talk 02:45, 6 March 2016 (UTC)
- {{BAGAssistanceNeeded}}
- 1 - Why are these trial edits not valid? What should I do?
- 2 - The interwiki links on Surface_weather_analysis were incorrect and should be removed because they had a conflict on Wikidata. Please check d:Q11157129 and d:Q189796; these items each have an article on enwiki, so frwiki, cswiki, ... shouldn't link to both of them.
- This code is standard code and has been tested on many wikis. Yamaha5 (talk) 18:16, 7 March 2016 (UTC)
- 1) You have to be approved for a trial, before making trial edits. — xaosflux Talk 04:19, 8 March 2016 (UTC)
- 2) Unfortunately, I don't read all these languages. For example on this edit: Special:Diff/708263000 you removed the links to many other languages - and these links are not coming in from Wikidata, making this article have less links. Are you saying these other links are not about this subject and that is why they do not belong? — xaosflux Talk 04:19, 8 March 2016 (UTC)
- This article was completely messed up and had incorrect interwikis. For example, these are the links which were removed by the bot:
- [[cs:Meteorologická mapa]] existed at > d:Q865144
- [[de:Wetterkarte]] existed at > d:Q865144
- [[es:Frente (meteorología)]] existed at > d:Q865144
- [[fr:Front (météorologie)]] existed at > d:Q189796
- [[ko:일기도]] existed at > d:Q865144
- [[nl:Weerkaart]] existed at > d:Q865144
- [[pl:Mapa synoptyczna]] existed at > d:Q865144
- [[zh:天氣圖]] existed at > d:Q865144
and Surface weather analysis exists at > d:Q11157129, so because of the interwiki conflict these links in that article should be removed, and the bot's edit was correct. These language links connect to Surface weather analysis (d:Q11157129), Weather front (d:Q189796) and Weather map (d:Q865144).
Yamaha5 (talk) 21:50, 9 March 2016 (UTC)
- Hm, so how does the bot actually work? It seems like a process that would require human review, perhaps by merging Wikidata items if there is overlap or figuring out if some articles are misclassified. I know I've manually dealt with this sort of thing in the past. — Earwig talk 17:58, 14 March 2016 (UTC)
- The bot checks whether there is any conflict in the page's interwikis; if there is, it leaves the page alone, except in the case where all of the items involved have an enwiki link.
- For this example, d:Q11157129, d:Q189796 and d:Q865144 all had enwiki links, so we can't merge them on Wikidata. The bot only works on this kind of conflict and leaves the rest.
- For such conflicts it will check whether all interwiki links exist on Wikidata; if they do, it will clean them locally, and if one of them doesn't exist, it will leave that page.
- For this example the bot checked the Wikidata items of cs, de, es, fr, ko, nl, pl and zh; all of them link to their own items, so the local page can be cleaned.
- Note: I can deactivate the conflict-solver part and only import interwikis from conflict-free pages to Wikidata and clean them locally Yamaha5 (talk) 07:03, 16 March 2016 (UTC)
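A rough Python sketch of that conflict check (the bot itself is a customised pywikibot interwikidata.py; the helper names here are illustrative):

```python
import requests

WD_API = "https://www.wikidata.org/w/api.php"

def item_with_enwiki(site, title):
    """Return (item id, enwiki sitelink or None) for a page on the given wiki."""
    data = requests.get(WD_API, params={"action": "wbgetentities", "sites": site,
                                        "titles": title, "props": "sitelinks",
                                        "format": "json"}).json()
    for qid, entity in data.get("entities", {}).items():
        if qid.startswith("Q"):
            enwiki = entity.get("sitelinks", {}).get("enwiki", {}).get("title")
            return qid, enwiki
    return None, None

def links_removable(local_links):
    """local_links: (site, title) pairs parsed from old-style [[xx:Title]] links.
    They may be stripped only if every target belongs to an item that already has
    its own enwiki sitelink, i.e. the items cannot simply be merged."""
    for site, title in local_links:
        qid, enwiki = item_with_enwiki(site, title)
        if qid is None or enwiki is None:
            return False        # unresolved case: leave the page alone
    return True

# Example from the Surface weather analysis case above:
print(links_removable([("cswiki", "Meteorologická mapa"), ("frwiki", "Front (météorologie)")]))
```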
Are you using the pywikibot-core version of interwikidata.py, or is it custom code? Legoktm (talk) 07:40, 13 April 2016 (UTC)
- @Legoktm: It is custom code based on pywikibot-core version Yamaha5 (talk) 08:02, 14 April 2016 (UTC)
@Ladsgroup: already has a bot doing similar things. I would like to hear from them. -- Magioladitis (talk) 06:39, 19 April 2016 (UTC)
- Hey, I wrote that interwikidata.py; the current version in master is useless. The modified version actually just deletes everything (causing this). I have another modified version that I will upload somewhere or try to get through code review. I ran it yesterday and it is being run on a weekly basis :) Ladsgroupoverleg 08:07, 19 April 2016 (UTC)
OK I would prefer if @Ladsgroup: does this task since they have written the code and they are directly related to Wikidata. -- Magioladitis (talk) 13:47, 20 April 2016 (UTC)
- @Magioladitis: you mean that if someone runs this bot, another user couldn't?! I am a bot developer (my other code now runs on many wikis and on Wikidata) and I had a global bot, so I can manage it. Why do you not trust other users? Yamaha5 (talk) 05:56, 25 April 2016 (UTC)
Yamaha5 the script you want to run is outdated, and Ladsgroup is the one who has written it and can fix it. If you can write code that can safely remove the links, I am OK either way. -- Magioladitis (talk) 06:10, 25 April 2016 (UTC)
- I said that I customized that code and it works fine on fa.wikipedia, but here I am not allowed to make test edits! Yamaha5 (talk) 07:11, 25 April 2016 (UTC)
- It doesn't, Reza. Let me work on this, and once I'm finished with testing and fixing all the bugs I will give you the script (or put it in pywikibot); afterwards it only depends on BAG to decide whether they want several bot operators for this task or not. I must mention, I'm running the cleaning on a daily basis for both Persian Wikipedia and English Wikipedia. :) Ladsgroupoverleg 12:51, 27 April 2016 (UTC)
Yamaha5 how many pages do you estimate will be changed? -- Magioladitis (talk) 06:50, 27 April 2016 (UTC) {{BAGAssistanceNeeded}}
- @Amir: this was edited on 6 April 2016 and these pages remained.
- @Magioladitis: for pages which contain [[fa: links it should be less than 20 per week; for other langs like fr:, de: and it: it should be less than 20,000 pages.
- Note: I can only run this code on pages which have a fa: interwiki Yamaha5 (talk) 11:33, 30 April 2016 (UTC)
Approved for extended trial (100 edits). Magioladitis (talk) 20:21, 20 May 2016 (UTC)
APersonBot 6
Operator: APerson (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 22:10, Saturday, March 5, 2016 (UTC)
Automatic, Supervised, or Manual: automatic
Programming language(s): Python
Source code available: https://github.com/APerson241/APersonBot/blob/master/wp-go-archiver/wp-go-archiver.py
Function overview: The bot archives WP:GO to a subpage and clears it out for a new week.
Links to relevant discussions (where appropriate):
Edit period(s): Weekly
Estimated number of pages affected: 1
Exclusion compliant (Yes/No): n/a
Already has a bot flag (Yes/No): Yes
Function details: The bot follows the directions at Template:Editnotices/Page/Wikipedia:Goings-on to archive WP:GO.
Discussion
Approved for trial (15 days). Very straight forward, report back after running if there were any issues or complaints. — xaosflux Talk 00:24, 6 March 2016 (UTC)
@APerson: It would be nice if the bot would check whether the page is already archived (on a certain week) or not, so it could avoid double archiving (like there). Also, if the bot follows Template:Editnotices/Page/Wikipedia:Goings-on, then it should archive at 00:00:01 and not more than 2½ hours later. Armbrust The Homunculus 13:28, 13 March 2016 (UTC)
- Most recent run has completed without errors; the bot was 3 minutes late due to some MediaWiki-enforced timeouts that I only saw later in the logs. APerson (talk!) 00:07, 20 March 2016 (UTC)
- Marking as Trial complete. --slakr\ talk / 03:33, 24 March 2016 (UTC)
- @APerson: I take it the two moves on the 13th issue has been sorted? --slakr\ talk / 02:52, 29 March 2016 (UTC)
Approved for extended trial (30 days). Juuust in case. I think 14 days (on something that gets archived only twice during that period) was probably a little short. This should hopefully give a better sample, though I don't foresee any major issues if everything's fixed. =) --slakr\ talk / 03:24, 29 March 2016 (UTC)
- I guess I should post an explanation on here about why the most recent run was a bit late: everything went well (the cron job successfully found the file, which is an improvement over last time) except I forgot to chmod +x the actual shell file (which we're using for the first time this week). Anyway, everything should be working perfectly next time. APerson (talk!) 03:31, 3 April 2016 (UTC)
- @APerson: For some reason the bot didn't archive the page today at all. Any idea, why? Armbrust The Homunculus 22:59, 17 April 2016 (UTC)
- Considering that the bot worked perfectly last week, I have no idea. I haven't looked at the logs yet, because I'm away from a computer that can SSH at the moment. I'll be back with an answer tomorrow. I suspect there was something funny going on with login sessions on tools, but I have no idea. APerson (talk!) 03:24, 18 April 2016 (UTC)
- Armbrust, I just confirmed that the fault was with the login session, not with the bot's code. I've logged it in again; it should be working fine for next week. Interestingly enough, task 5 also encountered some screwiness with login sessions, but there it was confirmed that login sessions were an entirely one-time issue. I hope that's also the case here.
APerson (talk!) 02:01, 19 April 2016 (UTC)
- @APerson: Unfortunately, the same thing happened again. Armbrust The Homunculus 07:59, 25 April 2016 (UTC)
- @APerson: And this week again. {{OperatorAssistanceNeeded}} Armbrust The Homunculus 13:36, 2 May 2016 (UTC)
- {{OperatorAssistanceNeeded}} Did you happen to resolve the issues related to the login/session stuff? --slakr\ talk / 02:46, 8 June 2016 (UTC)
- Slakr, I think they have been addressed. For the most part, whenever there's been a problem with login sessions, I can point to some sort of maintenance work on the servers. Needless to say, the bot's code doesn't contain any login-related bugs. Enterprisey (talk!) (formerly APerson) 02:52, 8 June 2016 (UTC)
- @APerson: The bot made the same mistake the last two times when it "re-started" the page. It added the FPs from the previous week back to the page. (1 & 2.) Regards, Armbrust The Homunculus 09:20, 12 June 2016 (UTC)
- Figured out why. In the past, the format was always DD MMM, but now that you're using 4 letters for the month, the bot isn't happy. I just fixed it. Enterprisey (talk!) (formerly APerson) 15:08, 12 June 2016 (UTC)
A user has requested the attention of a member of the Bot Approvals Group. Once assistance has been rendered, please deactivate this tag. Seems like most of the important bugs have been fixed; any comments? Enterprisey (talk!) (formerly APerson) 03:58, 20 June 2016 (UTC)
Bots that have completed the trial period
Monkbot 11
Operator: Trappist the monk (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 14:28, Monday, May 16, 2016 (UTC)
Automatic, Supervised, or Manual: automatic
Programming language(s): AWB/C#
Source code available: User:Monkbot/task 11: CS1 multiple authors/editors fixes
Function overview: fix cs1|2 author/editor parameters in articles listed in Category:CS1 maint: Multiple names: authors list and Category:CS1 maint: Multiple names: editors list
Links to relevant discussions (where appropriate): no recent discussions
Edit period(s): primarily one-time with additional runs as necessary
Estimated number of pages affected: at this writing there are 107,538 + 8,268 pages in the two categories
Exclusion compliant (Yes/No): yes
Already has a bot flag (Yes/No): yes
Function details: User:Monkbot/task 11: CS1 multiple authors/editors fixes
Discussion
Comment to Trappist: I support this work. Have you considered testing for and skipping the pathological case in which |firstn= is present with no corresponding |lastn=, i.e. articles in Category:CS1 errors: missing author or editor? If not, I suggest trying to do so. "Correcting" author lists in articles with this case present will probably result in malformed citations with no error messages. – Jonesey95 (talk) 20:50, 16 May 2016 (UTC)
- Point. Now, any template that has |firstn= after empty parameters have been removed is ignored. I actually haven't seen any of these in the wild (yet), but no doubt, perhaps as one of the artifacts of Citation bot, there are cases like |author=name, name, name paired with |first2=first name.
- —Trappist the monk (talk) 22:08, 16 May 2016 (UTC)
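For illustration only, here is a minimal sketch of the kind of skip check being discussed, written in PHP (Monkbot itself runs on AWB/C#); the function name and regexes are assumptions, not the bot's actual code:

```php
<?php
// Illustrative only: skip any cs1|2 template where some |firstn= is present
// without a matching |lastn= (the pathological case described above).
// This is NOT Monkbot's actual code; the regexes are assumptions.
function has_orphan_first(string $template): bool {
    // collect the n in every |firstn= and |lastn= (n may be empty for |first=/|last=)
    preg_match_all('/\|\s*first(\d*)\s*=/i', $template, $firsts);
    preg_match_all('/\|\s*last(\d*)\s*=/i', $template, $lasts);
    foreach ($firsts[1] as $n) {
        if (!in_array($n, $lasts[1], true)) {
            return true;   // a |firstn= with no corresponding |lastn= -> skip this template
        }
    }
    return false;
}

// Example: |first2= has no |last2=, so the template would be skipped.
var_dump(has_orphan_first('{{cite book |last=Doe |first=J. |first2=Mary |title=X}}')); // bool(true)
```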
Yay? Nay? As of this morning, I've made some 2800 edits manually with the bot's script (see Special:Contributions/Trappist the monk).
—Trappist the monk (talk) 10:33, 26 May 2016 (UTC)
Trial complete. The first 125 edits were made using articles listed in Category:CS1 maint: Multiple names: authors list; the second 125 edits were made using articles listed in Category:CS1 maint: Multiple names: editors list.
These anomalies are noted:
- H. A. Willis: |author1=Curle Smith, H. Nora looks like two authors with a comma separator. Since there is only one comma, this author name is not the reason that the article is a member of Category:CS1 maint: Multiple names: authors list. I reverted this edit, refined the bot code, and ran the bot again over the page, where it properly did not make an edit.
- Steve Hagen: |author=Brussat, Frederic and Mary Ann omits Mary Ann's last name, though the bot cannot know that. This is a case of garbage-in-garbage-out. cs1|2 do not support such naming conventions; complete author names are required.
- Ron Holland: gigo; three names marked up as two authors.
- Buddy Holly: I neglected to remove a debug statement; the edit was reverted and the bot retried.
- To fix the Buddy Holly bug, it was necessary to disable the selective skipping in the code; I neglected to re-enable the selective skipping, so edits that would not normally be made were made to:
- Hospital Food
- House of Fraser
- Housefly
- Houston Astros
- Houston College Classic
- Howard Johnson (baseball)
- Howard Lake (Washington)
- HTC HD2
- Huascarán National Park
- Hudson County, New Jersey
- Hudson Yards, Manhattan
- Hudson's Oldfield mouse
- 11th Fighter-Interceptor Squadron
- 13th School Group
- 14th/32nd Battalion (Australia)
- 14th Battalion (Australia)
- 14th Operations Group
- Lester S. Willson: gigo; |editor=William H. Powell, Lt Col, U.S. Army should not include rank and affiliation.
- Lexis diagram: gigo; |editor=Demographic Research, vol. 4, art. 3, pp 97-124 is not the name of an editor.
—Trappist the monk (talk) 13:45, 1 June 2016 (UTC)
- I have modified the bot so that it skips templates with author/editor parameter values containing digits or the word 'army'. Items 2, 3, 6 & 7 above have been corrected manually.
- —Trappist the monk (talk) 14:16, 1 June 2016 (UTC)
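A minimal sketch of the skip rule just described, again in PHP purely for illustration; the regex is an assumption rather than the bot's actual implementation:

```php
<?php
// Illustrative sketch of the skip rule described above: leave a template alone
// when an author/editor value contains digits or the word "army".
// Not the bot's actual AWB/C# code; the regex is an assumption.
function should_skip_name(string $value): bool {
    return (bool) preg_match('/\d|\barmy\b/i', $value);
}

var_dump(should_skip_name('William H. Powell, Lt Col, U.S. Army'));            // true (contains "Army")
var_dump(should_skip_name('Demographic Research, vol. 4, art. 3, pp 97-124')); // true (contains digits)
var_dump(should_skip_name('Brussat, Frederic'));                               // false
```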
Cyberbot II 5a
Operator: Cyberpower678 (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)
Time filed: 01:46, Tuesday, March 15, 2016 (UTC)
Automatic, Supervised, or Manual: Automatic
Programming language(s): PHP
Source code available: here
Function overview: Addendum to the 5th task. Cyberbot will now review links and check to see if they are dead. Based on the configuration on the config page, Cyberbot will look at a link, retrieve a live status from the given source, and update a DB value (default 4) for that link.
Links to relevant discussions (where appropriate): none
Edit period(s): continuous
Estimated number of pages affected: The bot analyzes about 5 million articles; the initial run will probably affect about half of them.
Exclusion compliant (Yes/No): Yes
Already has a bot flag (Yes/No): Yes
Function details: When the bot checks a link, it checks that link against the bot's DB and assigns it a value from 0 to 4: 0 means the site is dead, 1–3 mean the site is alive, and 4 is the default, unknown state. On every pass the bot makes over a URL, if the URL is found to be dead at that moment, the value is decreased by 1; if it is found to be alive, the value is reset to 3. Once the value reaches 0, the bot no longer checks whether the link is alive, as a site found to be dead at least 3 times is most likely going to remain dead, and thus the bot conserves resources.—cyberpowerChat:Online 01:46, 15 March 2016 (UTC)
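A minimal sketch of this counter logic, in PHP for illustration (Cyberbot's real code is more involved); how the default value 4 is handled on the very first failed check is not spelled out above, so the sketch simply decrements it like any other value:

```php
<?php
// Minimal sketch of the live-status counter described above (not Cyberbot's actual code).
// 4 = unknown/default, 1-3 = alive, 0 = dead.
function update_live_status(int $status, bool $foundDeadThisPass): int {
    if ($status === 0) {
        return 0;                   // already confirmed dead: no further checks, conserve resources
    }
    if (!$foundDeadThisPass) {
        return 3;                   // found alive: reset the counter
    }
    return max($status - 1, 0);     // found dead: one step closer to "confirmed dead"
}
```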
Discussion
- Checking if a link is really dead or not is a million-dollar question because of soft 404s, which are common. There is a technique for solving this problem described here and code here (quote):
- Basically, you fetch the URL in question. If you get a hard 404, it’s easy: the page is dead. But if it returns 200 OK with a page, then we don’t know if it’s a good page or a soft 404. So we fetch a known bad URL (the parent directory of the original URL plus some random chars). If that returns a hard 404 then we know the host returns hard 404s on errors, and since the original page fetched okay, we know it must be good. But if the known dead URL returns a 200 OK as well, we know it’s a host which gives out soft 404s. So then we need to test the contents of the two pages. If the content of the original URL is (almost) identical to the content of the known bad page, the original must be a dead page too. Otherwise, if the content of the original URL is different, it must be a good page.
- -- GreenC 04:40, 27 March 2016 (UTC)
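A rough PHP sketch of the quoted technique, for illustration only; the helper names are assumptions and the similarity test is a crude stand-in for a real one:

```php
<?php
// Rough sketch of the quoted soft-404 technique; not any bot's actual code.
function fetch(string $url): array {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_MAXREDIRS      => 5,
        CURLOPT_TIMEOUT        => 30,
    ]);
    $body = (string) curl_exec($ch);
    $code = (int) curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return [$code, $body];
}

function is_dead(string $url): bool {
    [$code, $body] = fetch($url);
    if ($code === 404) {
        return true;                                  // hard 404: definitely dead
    }
    // Fetch a deliberately bogus sibling URL (parent directory + random chars).
    $bogus = dirname($url) . '/' . bin2hex(random_bytes(8));
    [$bogusCode, $bogusBody] = fetch($bogus);
    if ($bogusCode === 404) {
        return false;                                 // host returns hard 404s, so the 200 above was real
    }
    // Host gives soft 404s: compare the two pages; near-identical content => soft 404.
    similar_text($body, $bogusBody, $percent);
    return $percent > 95.0;
}
```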
- Hi @Green Cardamom:. That is a good point. We've discussed this and decided, for now, to not check for soft 404s. It's never going to be 100% reliable. So for now, we're checking for: hard 404s (and other bad response codes) and redirects to domain roots only. It's less than optimal, but at least we can be sure we don't end up tagging non-dead links as dead. It turns out it's quite easy for a search engine or big web scrapers to detect soft 404s and various other kinds of dead links (ones replaced by link farms etc.). For this reason, we're seeking Internet Archive's help on this problem. They've been very helpful so far and promised to look into this and share their code/open an API for doing this. -- NKohli (WMF) (talk) 03:16, 28 March 2016 (UTC)
- That would be super to see when available, as I could use it as well. Some other basic ways of detecting 404 redirects are to look for these strings in the new path (mixed case): 404 (eg 404.htm, or /404/ etc), "not*found" (variations such as Not_Found etc), and /error/ (a rough sketch of this kind of filter is below). I've built up a database of around 1000 probable soft 404 redirects and can see some repeating patterns across sites. It's very basic filtering, but catches some more beyond the root domain. -- GreenC 04:10, 28 March 2016 (UTC)
- Awesome, thanks! I'll add those filters to the checker. -- NKohli (WMF) (talk) 04:23, 28 March 2016 (UTC)
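A rough sketch of the kind of path filtering suggested above, in PHP for illustration; the exact patterns are assumptions, not the checker's real rules:

```php
<?php
// Illustrative filter for "probable soft-404" redirect targets, based on the
// path strings suggested above (404, not*found variants, /error/).
function looks_like_soft404_path(string $redirectTarget): bool {
    $path = (string) (parse_url($redirectTarget, PHP_URL_PATH) ?: '');
    return (bool) preg_match('~(?:\b404\b|not[-_ ]?found|/error/)~i', $path);
}

var_dump(looks_like_soft404_path('http://example.com/404.htm'));        // true
var_dump(looks_like_soft404_path('http://example.com/Not_Found'));      // true
var_dump(looks_like_soft404_path('http://example.com/articles/12345')); // false
```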
- Are the checks spaced out a bit? Something could be down for a few days and then come back up for a while. Also, can we clarify the goal here; is this to add archival links to unmarked links, or to tag unmarked links as dead which have no archival links, or to untag marked-as-dead links? — Earwig talk 05:57, 29 March 2016 (UTC)
- Cyberbot can do all three, but the onwiki configuration only allows for the first two. Since Cyberbot is processing a large wiki, the checks are naturally spaced out.—cyberpowerChat:Online 14:44, 29 March 2016 (UTC)
- {{BAGAssistanceNeeded}} Can we move forward with this?—cyberpowerChat:Online 14:03, 5 April 2016 (UTC)
- "naturally spaced out" I would want some sort of minimum time here in the system... ·addshore· talk to me! 06:58, 8 April 2016 (UTC)
- I can program it to wait at least a day or 3 before running the check again. That would give the link 3 or 9 days, in case it was temporarily down.—cyberpowerChat:Online 15:31, 8 April 2016 (UTC)
- Let's try 3 days of spacing. Is it easy to trial this component as part of the bot's normal runtime? Can you have it start maintaining its database now and after a week or two we can come back and check what un-tagged links it would have added archival links for or marked as dead? — Earwig talk 23:28, 9 April 2016 (UTC)
- Unfortunately the bot isn't designed that way. If the VERIFY_DEAD setting is off, it won't check anything, nor will it tag anything. If it's on it will do both of those things. I can create a special worker to run under a different bot account so we can monitor the edits more easily.—cyberpowerChat:Limited Access 23:36, 9 April 2016 (UTC)
- How often does the bot pass over a URL? (Ignoring any 3-day limits.) In other words, are you traversing through all articles in some order? Following transclusions of some template? — Earwig talk 01:32, 10 April 2016 (UTC)
- Ideally, given the large size of this wiki, there would be unique workers each handling a list of articles beginning with a specific letter. Due to technical complications, there is only one worker that traverses all of Wikipedia, and one that handles only articles with dead links. So it would likely take much longer than 3 days between passes over each URL, until the technical complication is resolved. What I can do is start up the checking process and compile a list of URLs that have a dead status of 2 or 1, which means the URL failed the first and/or second passes.—cyberpowerChat:Limited Access 02:10, 10 April 2016 (UTC)
- That's similar to what I meant by "Can you have it start maintaining its database now...", though as you suggest it might make more sense to check what's been identified as dead at least once so we don't need to wait forever. Okay, let's try it.
Approved for trial (14 days, 0 edits). — Earwig talk 19:50, 10 April 2016 (UTC)
Trial complete.—cyberpowerChat:Online 19:27, 24 April 2016 (UTC)
- {{BAGAssistanceNeeded}} The bot has proven to be reasonably reliable, and the issues mentioned below have been addressed and the fixes installed.—cyberpowerChat:Online 01:51, 4 May 2016 (UTC)
DB Results
In an effort to more easily show what is going on in Cyberbot's memory, I have compiled a list of URLs with a live status of 2 or 1, which indicates they have failed their first or second pass, respectively.
I've looked through the first chunk of these results. It looks like there are several false positives. The 2 most common types appear to be:
- Redirects that add or remove the 'www' hostname prefix. This is a bug in the soft-404 detection code (a rough normalization sketch is below). I'll create a Phabricator ticket for it.
- Timeouts. Several pages (and especially PDFs) seem to take longer than 3 seconds to load. We should consider increasing the timeout from 3 seconds to 5 or 10 seconds. We should also just exclude PDFs entirely. I gave up on http://www.la84foundation.org/6oic/OfficialReports/1924/1924.pdf after waiting 3 minutes for it to load.
There are also some weird cases I haven't figured out yet:
- http://au.eonline.com/news/386489/2013-grammy-awards-winners-the-complete-list sometimes returns a 405 Method Not Allowed error and sometimes returns 200 OK when accessed via curl. In a browser, however, it seems to always return 200 OK.
- http://gym.longinestiming.com/File/000002030000FFFFFFFFFFFFFFFFFF01 always returns a 404 Not Found error when accessed via curl, but always returns 200 OK from a browser.
I confirmed that these are not related to User Agent. Maybe there is some header or special cookie handling that we need to implement on the curl side. Kaldari (talk) 00:28, 14 April 2016 (UTC)
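For the first false-positive type above, a rough sketch of a hostname comparison that ignores a leading "www."; the helper name and logic are assumptions, not the checker's actual code:

```php
<?php
// Illustrative check: treat a redirect as harmless when the only difference
// between the original and final URL is an added/removed "www." prefix.
function same_host_ignoring_www(string $originalUrl, string $finalUrl): bool {
    $a = strtolower((string) parse_url($originalUrl, PHP_URL_HOST));
    $b = strtolower((string) parse_url($finalUrl, PHP_URL_HOST));
    $strip = fn(string $h): string => preg_replace('/^www\./', '', $h);
    return $strip($a) === $strip($b);
}

var_dump(same_host_ignoring_www('http://example.com/page', 'http://www.example.com/page'));   // true
var_dump(same_host_ignoring_www('http://example.com/page', 'http://parked-domain.example/')); // false
```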
- According to Cyberpower, the bot is actually using a 30 second timeout (and only loading headers). I'll retest with that. Kaldari (talk) 00:45, 14 April 2016 (UTC)
- Timeouts should be handled in a sane and fail-safe way. If something times out, any number of things could be going on, including bot-side, host-side, and anything in between. Making a final "time to replace this with an archive link" decision is premature if you're not retrying these at least a couple of times over the course of several days. Also, you might try to check content-length headers when it comes to binaries like PDFs. If you get back a content-length that's over 1MB or a content-type that matches the one you're asking for (obviously apart from things like text/html, application/json), chances are the file's there and valid; it's highly unlikely that it's a 404 masquerading as a 200. Similarly, if an image request returns something absurdly tiny (like a likely transparent pixel sorta thing), it might also be suspicious (a rough header-check sketch is below). --slakr\ talk / 04:14, 16 April 2016 (UTC)
- Actually, it looks like that URL yields two back-to-back 301 redirects. Following 5 redirects is sufficient for probably 99.99% of links, I would guess. For example, if you're using curl, it's most likely CURLOPT_FOLLOWLOCATION + CURLOPT_MAXREDIRS, or on the command line,
curl -L --max-redirs 5
. --slakr\ talk / 06:03, 16 April 2016 (UTC)
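A rough sketch combining the two suggestions above (headers-only request, at most 5 redirects, and trusting a plausible Content-Type/Content-Length for binaries); the thresholds are illustrative assumptions, not any bot's actual settings:

```php
<?php
// Sketch: headers-only request, follow at most 5 redirects, and treat a
// plausible Content-Type / Content-Length as a sign the file is really there.
function probably_exists(string $url): bool {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true,   // headers only, don't download the (possibly huge) body
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_MAXREDIRS      => 5,
        CURLOPT_TIMEOUT        => 30,
    ]);
    curl_exec($ch);
    $code   = (int)    curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $type   = (string) curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    $length = (float)  curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
    curl_close($ch);

    if ($code >= 400) {
        return false;                     // hard error
    }
    // A non-HTML content type (e.g. application/pdf) with a non-trivial length
    // is very unlikely to be a soft 404 masquerading as a 200.
    if (stripos($type, 'text/html') === false && $length > 1024 * 1024) {
        return true;
    }
    return $code === 200;
}
```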
- I'm not sure I follow with the timeouts. If it is a temporary thing, the second pass will likely not time out, and the status resets. When the bot checks a URL, it needs to receive a TRUE (dead) response 3 times consecutively, where each check is spaced at least 3 days apart, for it to be officially classified as dead and the bot to act on it.—cyberpowerChat:Offline 04:24, 16 April 2016 (UTC)
- @Cyberpower678 and Slakr: The timeouts were a result of me testing URLs with checkDeadlink() which was the wrong function to test with, and having a very slow internet connection (since I'm in Central America right now). There should be no timeout issue with the actual bot as it's using a 30 second timeout and only downloading the headers. It looks like the real issue with http://www.la84foundation.org/6oic/OfficialReports/1924/1924.pdf is the user agent string, which will be fixed by [11]. As soon as you pass it a spoofed user agent, it returns a 200. I still have no idea what's happening with http://www.eonline.com/au/news/386489/2013-grammy-awards-winners-the-complete-list, though. I'm not sure how it's returning a different status code for curl than for web browsers (although it isn't 100% consistent). Kaldari (talk) 15:56, 18 April 2016 (UTC)
- There could be bot detection mechanisms at work. Google bot detection and mitigation. Some techniques can fool remote sites into thinking you are not a bot: a legitimate-looking agent string helps, as does not making too many repeat requests of the same site, or making them too fast. -- GreenC 14:48, 23 April 2016 (UTC)
- Cyberbot only scans each page every 3 days. That should be spaced apart far enough.—cyberpowerChat:Limited Access 15:28, 23 April 2016 (UTC)
These exist but are in the "dead" list:
- http://thomas.gov/cgi-bin/bdquery/z?d112:SN00968:@@@P
- http://thomas.loc.gov/cgi-bin/bdquery/z?d107:SJ00046:@@@P
- http://thomas.loc.gov/cgi-bin/query/B?r112:@FIELD%28FLD003+s%29+@FIELD%28DDATE+20111217%29
- http://timesofindia.indiatimes.com/entertainment/hindi/movie-reviews/Vicky-Donor/movie-review/12729176.cms
- http://mathrubhuminews.in/ee/ReadMore/19766/indian-made-gun-makes-waves-in-expo/E
- http://mfaeda.duke.edu/people
- https://www.yahoo.com/music/bp/chart-watch-extra-top-christmas-album-234054391.html
- http://news.bbc.co.uk/2/hi/uk_news/england/coventry_warwickshire/6236900.stm
The first thomas hit was random; when I clicked the other thomas ones, I was looking for @@s. The ones following, from different domains, were semi-pseudo-random (I just started going down the list clicking random new domains), at a rate of 2 out of 5 falsely marked "dead". That's a high false-positive rate, and this list is certainly not exhaustive.
- This one has a throttle on "suspected robots" when I'm proxying off a datacenter. Perhaps exceptions should be made for similar patterns of text.
--slakr\ talk / 05:21, 6 May 2016 (UTC)
- Several updates were deployed during and after the trial. I just ran every link through phpunit, including the throttled one, and they all came back as alive. I re-ran the throttled one persistently, and kept getting a live response. So the flagged links are not going to be considered dead by the bot.—cyberpowerChat:Offline 06:23, 6 May 2016 (UTC)
Approved for extended trial (14 days). Userspace only; basically same as before. --slakr\ talk / 06:27, 6 May 2016 (UTC)
- @Cyberpower678: Sorry, I look at this now and see it was outputting to Wikipedia:Bots/Requests for approval/Cyberbot II 5a/DB Results and not userspace; that's totally fine; I meant you can keep writing wherever it was writing before, too. It doesn't just have to be userspace; the main thing is not to have it actually editing articles. --slakr\ talk / 01:52, 7 May 2016 (UTC)
Trial complete.—cyberpowerChat:Online 15:38, 26 May 2016 (UTC)
- It would seem it has an issue with allafrica.com. Tracked in Phabricator.—cyberpowerChat:Online 15:56, 26 May 2016 (UTC)
- Nevermind. It's a paywall, and checkIfDead is not reliable with paywalls. A feature is in the works, unrelated to the checkIfDead class, so this domain can be ignored during this trial.—cyberpowerChat:Limited Access 16:19, 26 May 2016 (UTC)
DB Results 2
So we have now marked all the false positives. Some links are also paywalls; paywall handling is also a feature in the works right now. So the Community Tech team is currently analyzing why the false positives are false positives, while I am testing and debugging the new paywall add-on.
- Paywall detection has now been implemented. The bot relies on the {{subscription required}} tag on already-cited sources. When the tag is detected on a source, that source's domain gets flagged and all subsequent URLs with that domain get skipped, even if they're not tagged. This only results in an internal operation and has no externally visible result, other than links on a flagged domain not being checked. Users can still tag those links as dead, however, and the bot will respond to that.
A user has requested the attention of a member of the Bot Approvals Group. Once assistance has been rendered, please deactivate this tag. If I can draw the attention of a BAGger to T136728, which tracks the current results: I'd say the results are pretty good. Can BAG comment here too?—cyberpowerChat:Limited Access 18:44, 18 June 2016 (UTC)
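A rough sketch of the domain-flagging behaviour just described, in PHP for illustration; the function names and example URLs are made up, and this is not Cyberbot's actual implementation:

```php
<?php
// Illustrative sketch: once a source tagged {{subscription required}} is seen,
// its domain is flagged and later URLs on that domain are skipped by the check.
$paywalledDomains = [];

function flag_paywalled(string $url, array &$paywalledDomains): void {
    $host = strtolower((string) parse_url($url, PHP_URL_HOST));
    if ($host !== '') {
        $paywalledDomains[$host] = true;   // internal bookkeeping only; nothing is edited on-wiki
    }
}

function should_check(string $url, array $paywalledDomains): bool {
    $host = strtolower((string) parse_url($url, PHP_URL_HOST));
    return !isset($paywalledDomains[$host]);
}

// A citation tagged {{subscription required}} flags its domain (hypothetical URL)...
flag_paywalled('http://allafrica.com/stories/201605300123.html', $paywalledDomains);
// ...so an untagged link on the same domain is skipped by the dead-link check.
var_dump(should_check('http://allafrica.com/stories/201601010001.html', $paywalledDomains)); // bool(false)
```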
Approved requests
Bots that have been approved for operations after a successful BRFA will be listed here for informational purposes. No other approval action is required for these bots. Recently approved requests can be found here, while old requests can be found in the archives.
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 22) Approved 01:57, 21 June 2016 (UTC) (bot has flag)
- APersonBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 8) Approved 17:51, 17 June 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 21) Approved 13:35, 17 June 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 18) Approved 02:12, 17 June 2016 (UTC) (bot has flag)
- FastilyBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 11) Approved 02:07, 17 June 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 19) Approved 16:46, 11 June 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 17) Approved 17:45, 9 June 2016 (UTC) (bot has flag)
- SSTbot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 1) Approved 01:07, 7 June 2016 (UTC) (bot has flag)
- EsquivalienceBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Approved 03:22, 1 June 2016 (UTC) (bot has flag)
- Bot1058 (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 3) Approved 14:56, 29 May 2016 (UTC) (bot has flag)
- FastilyBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 9) Approved 22:42, 24 May 2016 (UTC) (bot has flag)
- GreenC bot (BRFA · contribs · actions log · block log · flag log · user rights) Approved 20:27, 24 May 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 15) Approved 21:05, 22 May 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 16) Approved 15:03, 22 May 2016 (UTC) (bot has flag)
- UTRSBot (BRFA · contribs · actions log · block log · flag log · user rights) Approved 03:37, 20 May 2016 (UTC) (bot has flag)
- FastilyBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 5) Approved 11:35, 17 May 2016 (UTC) (bot has flag)
- JJMC89 bot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 5) Approved 04:02, 17 May 2016 (UTC) (bot has flag)
- Josvebot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 12) Approved 02:38, 11 May 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 13) Approved 14:41, 7 May 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 14) Approved 00:41, 6 May 2016 (UTC) (bot has flag)
- FastilyBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 8) Approved 14:48, 5 May 2016 (UTC) (bot has flag)
- AnkitAWB (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Approved 13:56, 4 May 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 12) Approved 11:44, 1 May 2016 (UTC) (bot has flag)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 10) Approved 00:34, 1 May 2016 (UTC) (bot has flag)
- DarafshBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 5) Approved 08:46, 29 April 2016 (UTC) (bot has flag)
- JJMC89 bot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 4) Approved 16:47, 27 April 2016 (UTC) (bot has flag)
- KharBot (BRFA · contribs · actions log · block log · flag log · user rights) Approved 23:13, 24 April 2016 (UTC) (bot has flag)
- APersonBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 7) Approved 06:48, 19 April 2016 (UTC) (bot has flag)
- AnomieBOT III (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Approved 03:58, 13 April 2016 (UTC) (bot has flag)
- SporkBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 7) Approved 16:14, 11 April 2016 (UTC) (bot has flag)
Denied requests
Bots that have been denied for operations will be listed here for informational purposes for at least 7 days before being archived. No other action is required for these bots. Older requests can be found in the Archive.
- nakon-bot01 (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 22:11, 25 May 2016 (UTC)
- WikiMonitor (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 06:32, 21 May 2016 (UTC)
- CheckBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 4) Bot denied 06:44, 27 April 2016 (UTC)
- wargo32.exe (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 03:30, 16 April 2016 (UTC)
- Hot Riley Bot (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 23:31, 10 April 2016 (UTC)
- TamizhBOT (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 07:24, 10 March 2016 (UTC)
- sanskritnlpbot (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 22:40, 3 March 2016 (UTC)
- KoehlBot (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 15:50, 17 January 2016 (UTC)
- Bottastic 6 (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 15:40, 12 December 2015 (UTC)
- Helperbot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 3) Bot denied 03:48, 4 December 2015 (UTC)
- Redirectbot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Bot denied 01:21, 2 November 2015 (UTC)
- AWB - mass spelling fix (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 21:11, 1 November 2015 (UTC)
- Redirectbot (BRFA · contribs · actions log · block log · flag log · user rights) Bot denied 23:03, 22 October 2015 (UTC)
- Tulsibot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Bot denied 11:41, 15 October 2015 (UTC)
Expired/withdrawn requests
These requests have either expired, as information required from the operator was not provided, or been withdrawn. These tasks are not authorized to run, but such lack of authorization does not necessarily follow from a finding as to merit. A bot that, having been approved for testing, was not tested by an editor, or one for which the results of testing were not posted, for example, would appear here. Bot requests should not be placed here if there is an active discussion ongoing above. Operators whose requests have expired may reactivate their requests at any time. The following list shows recent requests (if any) that have expired, listed here for informational purposes for at least 7 days before being archived. Older requests can be found in the respective archives: Expired, Withdrawn.
- NihiltresBot (BRFA · contribs · actions log · block log · flag log · user rights) Withdrawn by operator 03:23, 1 June 2016 (UTC)
- CheckBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Withdrawn by operator 00:56, 25 May 2016 (UTC)
- Matthewrbot (BRFA · contribs · actions log · block log · flag log · user rights) Expired 12:31, 7 May 2016 (UTC)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 11) Withdrawn by operator 00:34, 1 May 2016 (UTC)
- Nyubot (BRFA · contribs · actions log · block log · flag log · user rights) Withdrawn by operator 16:49, 29 April 2016 (UTC)
- CheckBot (BRFA · contribs · actions log · block log · flag log · user rights) Withdrawn by operator 00:05, 27 March 2016 (UTC)
- JJMC89 bot (BRFA · contribs · actions log · block log · flag log · user rights) Withdrawn by operator 23:19, 13 February 2016 (UTC)
- TyAbot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Withdrawn by operator 21:25, 22 January 2016 (UTC)
- JackieBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 1) Expired 16:06, 17 January 2016 (UTC)
- Community Tech bot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Withdrawn by operator 22:27, 6 January 2016 (UTC)
- ASammourBot (BRFA · contribs · actions log · block log · flag log · user rights) Withdrawn by operator 19:27, 7 December 2015 (UTC)
- Luke081515Bot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2) Expired 03:37, 2 December 2015 (UTC)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 6) Withdrawn by operator 04:50, 1 December 2015 (UTC)
- RelentlessBot (BRFA · contribs · actions log · block log · flag log · user rights) Withdrawn by operator 08:03, 25 October 2015 (UTC)
- BU RoBOT (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 5) Withdrawn by operator 01:27, 7 October 2015 (UTC)