
NYMBLE: BLOCKING MISBEHAVING USERS IN ANONYMIZING NETWORKS

ABSTRACT:
Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client's IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular websites. Website administrators routinely rely on IP-address blocking to disable access for misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can blacklist misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers' definitions of misbehavior: servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.

EXISTING SYSTEM
In existing credential systems, revoking a user requires updating all existing users' credentials, which is impractical. Verifier-local revocation (VLR) fixes this shortcoming by requiring the server (the verifier) to perform only local updates during revocation. Unfortunately, VLR requires heavy computation at the server, linear in the size of the blacklist.

PROPOSED SYSTEM
We present a secure system called Nymble, which provides all the following properties: anonymous authentication, backward unlinkability, subjective blacklisting, fast authentication speeds, rate-limited anonymous connections, and revocation auditability (users can verify whether they have been blacklisted); it also addresses the Sybil attack to make its deployment practical. In Nymble, users acquire an ordered collection of nymbles, a special type of pseudonym, to connect to websites. Without additional information, these nymbles are computationally hard to link, and hence using the stream of nymbles simulates anonymous access to services. Websites, however, can blacklist users by obtaining a seed for a particular nymble, allowing them to link future nymbles from the same user; those used before the complaint remain unlinkable. Servers can therefore blacklist anonymous users without knowledge of their IP addresses while allowing behaving users to connect anonymously. Our system ensures that users are aware of their blacklist status before they present a nymble, and disconnect immediately if they are blacklisted. Although our work applies to anonymizing networks in general, we consider Tor for purposes of exposition. In fact, any number of anonymizing networks can rely on the same Nymble system, blacklisting anonymous users regardless of their anonymizing network(s) of choice.
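The seed-based linking described above can be illustrated with a short sketch. Nymbles are built from two one-way functions: a seed-evolution function f and a nymble-derivation function g. The sketch below models both with SHA-256; this is a simplified illustration of the idea, not the paper's full construction.

```python
import hashlib

def f(seed: bytes) -> bytes:
    """Evolve a seed forward one time period (one-way)."""
    return hashlib.sha256(b"f" + seed).digest()

def g(seed: bytes) -> bytes:
    """Derive the nymble shown to the server from a seed (one-way)."""
    return hashlib.sha256(b"g" + seed).digest()

def nymbles_for_window(seed0: bytes, periods: int) -> list:
    """Generate the user's ordered collection of nymbles for one
    linkability window, one nymble per time period."""
    nymbles, seed = [], seed0
    for _ in range(periods):
        nymbles.append(g(seed))
        seed = f(seed)
    return nymbles

# A server that obtains seed_k = f^k(seed0) via a complaint can recompute
# every nymble from period k onward, but because f is one-way it cannot
# link the nymbles the user presented before the complaint.
```

Because f and g are one-way, revealing a later seed links only future nymbles, which is exactly the backward-unlinkability property claimed above.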

Advantages of Proposed System:
Blacklisting anonymous users: We provide a means by which servers can blacklist users of an anonymizing network while maintaining their privacy.
Practical performance: Our protocol makes use of inexpensive symmetric cryptographic operations to significantly outperform the alternatives.
Open-source implementation: With the goal of contributing a workable system, we have built an open-source implementation of Nymble, which is publicly available. We provide performance statistics to show that our system is indeed practical.

MODULES DESCRIPTION

1. Nymble Manager


The Nymble Manager (NM) pairs a user's pseudonym with the target server and issues the user a set of nymble tickets for that server. When a server complains about a misbehaving user, the NM extracts a linking token (seed) from the complained ticket, allowing the server to link and block that user's future connections while earlier connections remain unlinkable. Servers can therefore blacklist anonymous users without knowledge of their IP addresses while allowing behaving users to connect anonymously.

2. Pseudonym Manager
The user must first contact the Pseudonym Manager (PM) and demonstrate control over a resource; for IP-address blocking, the user must connect to the PM directly (i.e., not through a known anonymizing network), ensuring that the same pseudonym is always issued for the same resource.
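A minimal sketch of the deterministic pseudonym issuance described above, assuming a keyed MAC over the IP address (the key value is a hypothetical placeholder; the deployed PM's construction is more involved):

```python
import hmac
import hashlib

# Hypothetical secret held only by the Pseudonym Manager.
PM_SECRET = b"pm-secret-key"

def pseudonym(ip: str) -> bytes:
    """Deterministically map an IP address to a pseudonym.
    The same IP always yields the same pseudonym, so a user cannot
    obtain multiple pseudonyms for one address; without the PM's key,
    the pseudonym reveals nothing about the IP."""
    return hmac.new(PM_SECRET, ip.encode(), hashlib.sha256).digest()
```

Determinism is the point of this step: it is what makes the IP address act as a scarce resource.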

3. Blacklisting a user
Users who make use of anonymizing networks expect their connections to be anonymous. If a server obtains a seed for a user, however, it can link that user's subsequent connections. It is of utmost importance, then, that users be notified of their blacklist status before they present a nymble ticket to a server. In our system, the user can download the server's blacklist and verify her status; if blacklisted, the user disconnects immediately. Nymble's use of IP addresses mirrors the IP-address blocking employed by Internet services. There are, however, some inherent limitations to using IP addresses as the scarce resource: if a user can obtain multiple addresses, she can circumvent both nymble-based and regular IP-address blocking. Subnet-based blocking alleviates this problem, and while it is possible to modify our system to support subnet-based blocking, new privacy challenges emerge; a more thorough description is left for future work.
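The user-side status check described above can be sketched as follows. This is a simplified illustration in which the blacklist is modeled as a plain set of nymbles; the real system additionally lets the user verify the blacklist's integrity and freshness cryptographically.

```python
def is_blacklisted(my_nymbles, server_blacklist) -> bool:
    """Before presenting a ticket, the user downloads the server's
    blacklist and checks whether any of her upcoming nymbles appear
    on it; if so, she disconnects without presenting anything."""
    bl = set(server_blacklist)
    return any(n in bl for n in my_nymbles)
```

If the check returns True, the user simply never presents a ticket, so the server learns nothing new about her.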

4. Nymble-authenticated connection
Blacklistability assures that any honest server can indeed block misbehaving users: if an honest server complains about a user who misbehaved in the current linkability window, the complaint will succeed and the user will be unable to nymble-connect (i.e., establish a Nymble-authenticated connection) to the server in subsequent time periods (following the time of complaint) of that linkability window.
Rate-limiting assures any honest server that no user can successfully nymble-connect to it more than once within any single time period.
Non-frameability guarantees that any honest user who is legitimate according to an honest server can nymble-connect to that server. This prevents an attacker from framing a legitimate honest user, e.g., by getting the user blacklisted for someone else's misbehavior. This property assumes each user has a single unique identity; when IP addresses are used as the identity, it is possible for a user to frame an honest user who later obtains the same IP address, so non-frameability holds only against attackers with different identities (IP addresses). A user is legitimate according to a server if she has not been blacklisted by the server and has not exceeded the rate limit for establishing Nymble-connections; honest servers must be able to differentiate between legitimate and illegitimate users.
Anonymity protects the anonymity of honest users, regardless of their legitimacy according to the (possibly corrupt) server; the server cannot learn anything beyond whether the user behind a nymble-connection attempt is legitimate or illegitimate.

HARDWARE REQUIREMENTS:
PROCESSOR    : PENTIUM IV 2.6 GHz
RAM          : 512 MB DDR RAM
MONITOR      : 15" COLOR
HARD DISK    : 20 GB
FLOPPY DRIVE : 1.44 MB
CD DRIVE     : LG 52X
KEYBOARD     : STANDARD 102 KEYS
MOUSE        : 3 BUTTONS

SOFTWARE REQUIREMENTS:
Front End        : Java, RMI, JFC (Swing)
Web Server       : apache-tomcat-6.0.18
Back End         : MS-Access
Tools Used       : Eclipse 3.3
Operating System : Windows XP/7

REFERENCE:
Patrick P. Tsang, Apu Kapadia, Cory Cornelius, and Sean W. Smith, "Nymble: Blocking Misbehaving Users in Anonymizing Networks," IEEE Transactions on Dependable and Secure Computing, vol. 8, no. 2, March-April 2011.

Nymble: Anonymous IP-Address Blocking


Anonymizing networks such as Tor provide privacy to users by hiding their IP addresses from servers. For example, Tor uses a volunteer network of nodes that redirect users' communications, thereby making it difficult to infer the users' IP addresses. Unfortunately, some users have abused such networks to deface websites such as Wikipedia. Since servers are unable to block anonymous users, their normal response is to simply block the entire anonymizing network, denying anonymous access to honest and dishonest users alike. Nymble is a credential system that can be used in conjunction with anonymizing networks such as Tor to selectively block anonymous users while maintaining their privacy. Nymble offers the following properties:

Anonymous blacklisting: A server can block a misbehaving user without knowing the identity or IP address of that user.
Privacy: Honest and misbehaving users both remain anonymous.
Backward anonymity: A blacklisted user's previous activity remains anonymous and unlinkable; only future connections are refused.
Blacklist-status awareness: A user can check whether he/she has been blocked before accessing services at the server.
Subjective judging: Since misbehaving users are blocked without compromising their privacy, servers can apply their own definition of "misbehavior".

We hope that Nymble will make anonymizing networks such as Tor more acceptable to servers that are concerned about abuse from a handful of misbehaving users. Indeed, a few bad apples should not spoil the fun for the rest of us!

Nymble Overview
The purpose of the Nymble project is to allow for responsible, anonymous access online. It provides a mechanism for server administrators to block misbehaving users while allowing honest users to stay anonymous; in fact, even the blocked users remain anonymous. The name "Nymble" comes from a play on the words "pseudonym" and "nimble". Instead of giving users a simple pseudonym, the Nymble system assigns users "nymbles"; that is, pseudonyms with better anonymity properties.

The Problem: Abuse of Anonymizing Networks


Tor is an anonymizing network: it hides a client's identity (specifically, the computer's IP address) from the servers it accesses. Tor keeps a client's IP address anonymous by bouncing its data packets through a random path of relays. Each relay knows only the relay that sent it data and the next relay in the path. As long as the entry and exit nodes do not collude, the client's connections remain anonymous. Tor provides anonymity, but some people abuse this anonymity. Since website administrators depend on blocking the IP addresses of misbehaving users, they are unable to block misbehaving users who connect through Tor; their IP addresses are hidden, after all. Frustrated by repeated offenses through the Tor network, the usual response for websites such as Slashdot and Wikipedia is to block the entire Tor network. This is hardly an optimal solution, as honest users are denied anonymous access to these websites through Tor (or any anonymizing network, for that matter). For an extensive list of the many legitimate uses of Tor, see Who uses Tor?

The Solution: Using Nymble for Blacklisting Anonymous Users


By providing a mechanism for server administrators to block anonymous misbehaving users, we hope to make the use of anonymizing networks such as Tor more acceptable for server administrators everywhere. All users remain anonymous: misbehaving users can be blocked without deanonymization, and their activity prior to being blocked remains unlinkable (anonymous).

How Nymble Works


Nymble is based on two administratively separate "manager" servers: the Pseudonym Manager (PM) and the Nymble Manager (NM). The PM pairs a user's IP address with a pseudonym deterministically generated from that address. The NM pairs a user's pseudonym with the target server. As long as the two managers do not collude, the user's connections remain anonymous to the PM, pseudonymous to the NM (note that the user does not communicate directly with the NM, but connects to it through Tor), and anonymous to the servers that the user connects to.

Pseudonym Manager
The user (in this case, Alice) must first demonstrate control over a resource, in this case Alice's IP address. To do this, Alice must connect directly to the PM before receiving a pseudonym. The PM has knowledge of existing Tor routers, and can thus ensure that Alice is communicating with it directly. Note that the PM has no knowledge of the user's destination, similar to an entry node in Tor. The PM's sole responsibility is to map IP addresses to pseudonyms; the reason for this is explained next.

Nymble Manager
Alice then connects to the NM through Tor, presenting her pseudonym and her target server. The NM does not know the user's IP address, but the pseudonym provided by the PM guarantees that some unique IP address maps to it. Alice receives a set of nymble tickets as her credential for the target server. These nymble tickets are unlinkable, and therefore Alice can present them (once each) to gain anonymous access at the target server.

The nymble ticket provides cryptographic protection as well as a trap door that can be accessed using a linking token.
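A simplified sketch of that cryptographic protection: each ticket binds a nymble to one server and one time period under a MAC. In the deployed design each server shares a distinct key with the NM; here a single illustrative NM key stands in for that machinery, and all names are assumptions made for the sketch.

```python
import hmac
import hashlib

# Hypothetical NM key; the real design uses per-server shared keys.
NM_KEY = b"nm-secret"

def _mac(nymble: bytes, server: str, period: int) -> bytes:
    msg = nymble + server.encode() + period.to_bytes(4, "big")
    return hmac.new(NM_KEY, msg, hashlib.sha256).digest()

def issue_ticket(nymble: bytes, server: str, period: int) -> dict:
    """NM side: bind a nymble to a target server and a time period,
    so a ticket cannot be replayed at another server or period."""
    return {"server": server, "period": period, "nymble": nymble,
            "mac": _mac(nymble, server, period)}

def verify_ticket(t: dict) -> bool:
    """Server side: check that the ticket was issued by the NM and
    has not been altered."""
    expected = _mac(t["nymble"], t["server"], t["period"])
    return hmac.compare_digest(expected, t["mac"])
```

Binding the period into the MAC is what makes a ticket single-use per time period, which in turn enforces the rate limit discussed later.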

Blacklisting a User
Servers can present a user's nymble ticket to the NM as part of a complaint. The NM extracts a "linking token" from the nymble ticket, which allows the server to link future connections by the blacklisted user. The NM also issues blacklists to servers, which users can examine before performing any actions at the server. By checking a server's blacklist, blacklisted users are assured that their privacy is not compromised. We now explain the process of blacklisting in a little more detail, starting with how nymble tickets are bound to certain "time periods" and "linkability windows."

Time in the nymble protocol is divided into linkability windows of some duration (the default is 1 day). A linkability window is then further divided into smaller time periods (the default is 5 minutes). We illustrate the concepts in the diagram below; the linkability window is represented by the large, transparent rectangle, while the time periods are labeled t0, t1, t2, etc.
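With those defaults, mapping a clock reading to its linkability window and time period is simple integer arithmetic, as this sketch shows (constants reflect the defaults stated above; the real protocol would also agree on a common epoch):

```python
WINDOW = 24 * 60 * 60   # linkability window: 1 day (default)
PERIOD = 5 * 60         # time period: 5 minutes (default)

def window_and_period(unix_time: int):
    """Map a Unix timestamp to (linkability-window index, time-period
    index within that window). With the defaults there are
    86400 / 300 = 288 periods per window, labeled t0 .. t287."""
    window = unix_time // WINDOW
    period = (unix_time % WINDOW) // PERIOD
    return window, period
```

All parties derive the same (window, period) pair from the current time, so tickets and blacklists line up without extra coordination.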

A user's connections within a time period are tied to a single nymble ticket. If and when a user misbehaves, the server may not realize it for some time and may not report it until a later time period. However, after receiving a linking token, the server can block all of the user's future connections until the next linkability window. This is done for two reasons:

Dynamism: IP addresses can be reassigned to different, well-behaved users, making it undesirable to blacklist IP addresses permanently.
Forgiveness: Bad behavior is forgiven after a certain amount of time.
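Given the seed inside a linking token, the server's linking step can be sketched as follows. Here f and g stand for the protocol's one-way seed-evolution and nymble-derivation functions, modeled with SHA-256 for illustration:

```python
import hashlib

def f(seed: bytes) -> bytes:
    """One-way seed evolution, one step per time period."""
    return hashlib.sha256(b"f" + seed).digest()

def g(seed: bytes) -> bytes:
    """One-way nymble derivation from a seed."""
    return hashlib.sha256(b"g" + seed).digest()

def link_future_connections(seed: bytes, periods_left: int):
    """From the seed in a linking token, precompute the blacklisted
    user's nymbles for the remaining time periods of the current
    linkability window; any presented nymble found in this set is
    refused. Earlier nymbles cannot be computed, since f is one-way."""
    linked = set()
    for _ in range(periods_left):
        linked.add(g(seed))
        seed = f(seed)
    return linked
```

Because the set only covers the rest of the window, the block expires at the window boundary, which implements the forgiveness property above.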

Nymble is a system that allows websites to selectively blacklist users of anonymizing networks such as Tor without knowing the users' IP addresses. Users not on the blacklist enjoy anonymity, while blacklisted users are refused future connections for a period of time and their previous connections remain unlinkable. Since Nymble allows websites to blacklist anonymous users of their choice, and since users are notified of their blacklist status, Nymble gives websites the power to apply their own definition of "misbehavior". Our hope is that Nymble's properties will make the usage of anonymizing networks such as Tor more acceptable.

Nymble Security FAQ

What are the privacy implications of using Nymble?


Nymble is a research project in its infancy; do not rely on it for strong anonymity guarantees. That said, Nymble's main goal is to protect users' privacy with respect to the servers they connect to:

Clients' IP addresses remain anonymous to servers, whether or not the clients have been blacklisted.

Clients must trust Nymble to provide this anonymity against servers.

But why should I trust Nymble?


Nymble has been designed to limit the amount of information that individual Nymble entities can infer, by splitting the trusted functions:

The Pseudonym Manager (PM) knows the client's IP address, but not which servers the client intends to access. (The client should be aware that the number of Nymble-enabled servers may be quite small at first, so the PM does know that the client intends to connect to one of these servers.)
The Nymble Manager (NM) knows the client's pseudonym as assigned by the PM, but not the client's IP address. It knows which servers the client intends to access, because it must issue the client a credential for those sites.

Furthermore, these entities maintain minimal state (a few secret keys and some server-specific information) and "forget" this state at the end of the day. Nymble is therefore resistant to deanonymization attacks resulting from equipment seizure, for example.

How do I know the PM and NM won't collude and expose my privacy?


Nymble is currently in "test mode", and in fact the PM and NM are hosted on the same machine. In the future, we plan to host the PM and NM in separate domains, and hope to build confidence that the PM and NM cannot collude maliciously. For now, help us make a difference by beta testing Nymble. Once the kinks are ironed out, we will move to the next phase of separating the entities. (Details for beta testing will be posted soon)

How does Nymble's privacy compare with Tor?


Nymble introduces additional entities that clients of Tor must trust. Clients must trust the PM and NM to not collude with each other, or with servers. Assuming that the PM and NM are not malicious or vulnerable to attack, clients can connect to servers through Tor and enjoy the same level of anonymity against servers as provided by Tor. Nymble does apply a rate limit on anonymous connections. In its current form, Nymble allows only one anonymous connection to a particular server every five minutes. Users are warned that their connections will be pseudonymous if they choose to connect to the same server multiple times within the same five-minute interval.

Why does Nymble apply a rate limit on anonymous connections?


Nymble allows servers to blacklist misbehaving users so that they cannot return and cause further damage. If there were no rate limit on anonymous connections, users could connect 500 times (for example) in 5 minutes, deface 500 pages and disappear for good. The damage is already done, and blacklisting the user is probably less meaningful. We believe that most "well-behaved" users would find a 5-minute rate limit reasonable to perform anonymous edits on a Wikipedia-like site, for example, and that servers would have enough time to blacklist misbehaving users before they cause too much damage.

Wikipedia talk:Blocking policy/Tor nodes


From Wikipedia, the free encyclopedia


Tor nodes

I currently have a list of Tor exit nodes, amassing about 24 pages in Microsoft Word at the moment. I am prepared to parse through this list and use a script to hardblock these IPs such that the TOR nodes will be rendered useless on the English Wikipedia. Should I proceed, and if so, how long should the blocks be? Rylng () 02:49, 8 January 2008 (UTC)

How complicated would it be to check back, and see if the IPs are still Tor exit nodes? <eleland/talkedits> 02:50, 8 January 2008 (UTC)

I don't know. Presumably, once the blocks are in fact done, a review of my block log of the IPs in question will show what the IPs are, and (if the MediaWiki page does not change), the "TOR check" link should work. For example, I randomly checked an IP on the list, and found that it is currently not an exit node, and I will be removing it from the list. Rylng () 02:58, 8 January 2008 (UTC)

I've done that by hand before. I'm glad someone is using a script to do it. TOR exit nodes are open proxies, and can be fully blocked (both anon and logged-in) indefinitely. Raul654 (talk) 02:51, 8 January 2008 (UTC)

Raul. Short blocks for static IPs, a year in length. Nothing INDEF because IPs will change, static and dynamic. And no more than a month or so for dynamics, they will change surely. This makes the most technical sense IMHO. M-ercury at 02:52, January 8, 2008

I usually block open proxies for 5 years at the most. Less in the case of proxies that were not intended to be open. Mr.Z-man 02:59, 8 January 2008 (UTC)

Didn't someone previously report that most Tor exit node IPs ceased to be exit nodes after 6 months or so, hence very long-term blocking was counterproductive? Unless someone has a comprehensive program to review exit node blocks over time, it makes sense to set a not-too-distant expiration time. Dragons flight (talk) 03:14, 8 January 2008 (UTC)

According to w:nl:Gebruiker:RonaldB/Open proxy fighting (see User:RonaldBot), the average lifetime of a Tor node is around one week. I have seen other studies with similar results. -- zzuuzz (talk) 03:23, 8 January 2008 (UTC)

The vast majority of Tor nodes are no longer Tor nodes after just a few days. Some are even Tor nodes for a month or two. The proportion which remain Tor nodes for more than a year is absolutely minimal. If you intend running a blocking script, please keep the blocks short and also continue to run a matching unblocking script. -- zzuuzz (talk) 03:06, 8 January 2008 (UTC)

Agree here, "Once a tor, always a tor" is not correct. Anyone can download a tor script, and run it on almost any platform. Get bored, and uninstall it. IPs will also shift, especially true for dynamic IPs. Let us be wise when we do this. Short blocks, and do rechecking, if you must block. M-ercury at 03:10, January 8, 2008

Pre-emptively blocking is a pretty bad idea. --jpgordon 03:08, 8 January 2008 (UTC)

Agree. M-ercury at 03:10, January 8, 2008

Who said it was pre-emptive? Tor has been used by all sorts of bad editors. Raul654 (talk) 03:14, 8 January 2008 (UTC)

Pre-emptive would be loading all 2500 IPs from the master list and blocking them en masse. We can not ignore the above comments regarding how long a node is really a node. Let us think technically about this and block accordingly if we block. Respectfully, M-ercury at 03:18, January 8, 2008

1300 IPs. Rylng () 03:32, 8 January 2008 (UTC)

Thank you :) It is variable. Regards, M-ercury at 03:41, January 8, 2008

No, pre-emptive would be blocking TOR before it was ever used for vandalism. That time has long since passed. It is a very well established network for vandals. The fact that the exit nodes change often simply means that we should block the master list often - preferably using a script. As for unblocking, if someone wants to automate that too, fine, but I'm not going to lose any sleep over the possibility that someone might run a TOR exit node, get his IP blocked, and then want to edit later. Raul654 (talk) 03:22, 8 January 2008 (UTC)

You can automate the block, and block the master list often. Set the block for a week, and re-run the master every week. That is fine. But what about the possibility that someone might run a TOR exit node, get his IP blocked, then the dynamic IP is reassigned? Multiply that by one third to one half (sometimes three quarters) of the master list. This will not work. Regards, M-ercury at 03:24, January 8, 2008

Is there any way to mark the dynamic IPs and soft block them? If it were run weekly and dynamic IPs were soft blocked, the potential for collateral damage is limited. --B (talk) 03:51, 8 January 2008 (UTC)

Usual would be a 5 year hardblock. If you wind up updating it frequently, that would be even better. Dynamic IPs must be dealt with in some way. Prodego talk 03:55, 8 January 2008 (UTC)

Any idea how many times a dynamic is reassigned in a five year period? This is far too long for dynamic. As for updating frequently, could you clarify? Regards, M-ercury at 04:00, January 8, 2008

If someone's willing to go through the list I have and find the dynamic IPs, I'll gladly remove them from the list and make a second one. Rylng () 04:03, 8 January 2008 (UTC)

All IPs are dynamic, it's just a question of timing. All Tor nodes should also be considered dynamic, since most of the static IP Tor nodes will no longer be Tor nodes by the time most of the dynamic IPs have changed hands. I'd like to ask you to come back in a week and tell us how many of the IPs in your list are still exit nodes. I'll bet it is down 10% today alone (for an illustration of the problem please take a look at today's WP:OP page[1] - probably all of these IPs were running Tor within the last few weeks, now less than half are). Leaving aside the question of whether it is even a good idea, it is ineffective to run a one-off script to block all current IPs running exit nodes. Whatever process being used needs to be regularly synchronised to the Tor directory to have any effect, and to prevent enormous collateral. But then I don't really think that's such a good idea either. -- zzuuzz (talk) 05:11, 8 January 2008 (UTC)

Could y'all take this discussion over to a policy page, perhaps Wikipedia:Blocking/TOR nodes where you can work out a plan, or edit war, or whatever? This question gets asked so often, isn't it about time we figured out a standard answer and saved it? Jehochman 04:16, 8 January 2008 (UTC)

Jehochman, your statement appeared out of line and read of bad faith (edit warring?). Please be more careful and contribute constructively. Thanks, M-ercury at 04:18, January 8, 2008

I want to know if someone goes over all these blocked tor nodes and open proxies to see if they are indeed still such. 1 != 2 04:56, 8 January 2008 (UTC)

Well, none of the list is blocked, yet. Rylng () 05:16, 8 January 2008 (UTC)

If we do go ahead and block them all, perhaps we should allow account creation to minimize unintended consequences. Bovlb (talk) 05:21, 8 January 2008 (UTC)

ABSOLUTELY NOT. That would defeat the purpose of account-creation blocking IPs from known sockpuppetteers to deplete their supply of sockpuppets. Raul654 (talk) 05:28, 8 January 2008 (UTC)

Feh. Hardblock, 3 to 6 months minimum. I have periodically run through the list, blocking all that were not already blocked (by hand, unfortunately) and after a year of doing so have been contacted by maybe 5 IPs asking to be unblocked. After verifying, I always do. By the way, this came up out of a checkuser that was run for me. The number of vandals and sleepers was quite large, all using tor. If we are serious about stopping vandalism (as opposed to getting barnstars for reverting it) we will block all tor exit nodes we find. The harm to the encyclopedia will be much less from a few inconveniently blocked IPs than from the vandalism and sleeper socks. Thatcher 05:59, 8 January 2008 (UTC)

Hardblock up to six months or so. Under no circumstances softblock. Better would be to use a bot that checks the Tor status once per day and does the blocking / unblocking automatically. Kusma (talk) 08:47, 8 January 2008 (UTC)

A few things need to be kept in mind when mass/auto-blocking Tor:

The Tor directory is not authoritative on which address traffic will exit from - traffic is allowed to exit from a completely separate address than the one advertised in the directory.
Some Tor node operators explicitly set their exit policies to disallow exiting traffic to WP by blocking access to Wikimedia's IP range, so that local users who share the same IP locally can still edit WP without collateral damage.
And of course, the problem of dynamic IPs. Some Tor nodes always operate from a consistent address and can safely be blocked for a long period of time; some, however, are dynamic and jump all over the place, and each IP will probably get reassigned to another customer at any time.

If a block-bot were to operate, it should preferably be robust enough to handle these cases (it is possible, however). krimpet 09:13, 8 January 2008 (UTC)

If a blockbot were to do all that and periodically unblock IPs which are no longer Tor exit nodes, would that allay some concerns? east.718 at 11:35, January 8, 2008

If a similar process can check for unblocking, why not block indefinitely? No matter what length we choose, we have to check for unblocking anyway because some only last a few days. If we don't block indefinitely, then we have to check for re-blocking as well (in the case of static IPs). Wknight94 (talk) 12:51, 8 January 2008 (UTC)

Indefinite is a block-and-forget mentality. Too much risk for collateral damage. M-ercury at 13:05, January 8, 2008

I think that would be fine. Ideally, a block bot would load a master list and block for a week, then reload and block every week and so on. Please mind Krimpet's words as far as the TOR Node Authority and/or directory reporting things other than what is actual. Regards, M-ercury at 13:05, January 8, 2008

I will say this again since Mercury, not appreciating my sense of humor, dismissed my request. Please reduce this discussion to a written process and save it somewhere so we can reference this information in the future. For a long time we have been getting conflicting instructions on how to handle TOR nodes. Repeating the same arguments over and over again is unproductive. Jehochman Talk 13:09, 8 January 2008 (UTC)

I favor blocking for a long time, and unblocking when proven that they are no longer TOR nodes. If we can write a script to block and place them in a category, we can also write another script to periodically check all IPs in the TOR node category and unblock those that are no longer TOR. The idea of repeatedly blocking for a week is silly. Jehochman Talk 13:13, 8 January 2008 (UTC)

It is difficult to assume a sense of humor, and good faith, when you attack other ideas as silly. Please rephrase. Sense of humor is sometimes not conveyed properly here with a text-only environment. With regards to your proposal, would you like to take the lead, then we can cross-post to ANI, VPP, VPT, T:CENT, and Wiken-L. Regards, M-ercury at 13:16, January 8, 2008

If you're willing to run your script regularly, surely it would make most sense to block indefinitely when identified as a Tor exit node, with a good clear message as discussed below, and then unblock when a future run of the script identifies that the IP address is no longer running Tor? --Stormie (talk) 02:54, 9 January 2008 (UTC)

I do not see any good reason to pre-emptively block tor nodes. Block IP addresses that cause problems. Block users that cause problems. Avoid paranoia. --Blue Tie (talk) 13:23, 8 January 2008 (UTC)

I'd suggest using the list as a checklist for checkusers to investigate and block as abuse is found - David Gerard (talk) 13:31, 8 January 2008 (UTC)

Respectfully, preemptive checkuser on the TOR list is not a good idea. We don't do things this way. Remember the policy states "While this may affect legitimate users, they are not the intended targets and may freely use proxies until those are blocked". M-ercury at 13:35, January 8, 2008

Of course. However, while pursuing a vandal last night with the help of a checkuser we found a bunch of tor nodes, some of which had been blocked as such months ago but were still active. We uncovered some suspicious-looking possible sleeper accounts, at least one admin, and some vandals and trolls. (I intend to keep the names of the suspicious accounts private until and unless they show up on ANI or something.) It is certainly not the case that all, or even most, tor nodes are short-lived. Thatcher 16:28, 8 January 2008 (UTC)

I would definitely support pre-emptive blocking (for a period of three months or so) of any Tor nodes. Fnagaton 16:11, 8 January 2008 (UTC)

Does anyone keep any data on how many vandals use Tor exit nodes? (e.g. if tomorrow we were forbidden to block Tor exit nodes, how large would the problem be?) Do only checkusers have this kind of info? EdJohnston (talk) 22:18, 8 January 2008 (UTC)

Propose policy change


With the data located at this place and the above technical data, and comments, I propose we limit TOR node blocking to one week. The rationale being, if the average span of a node is one week, but we keep blocking nodes, and the master directory with indef, the affected footprint will grow larger and larger. Thanks, M-ercury at 13:29, January 8, 2008

In my experience, this is a bad idea. I see them stay active as TOR nodes for ages. When I checkuser TOR nodes, I typically see abusers on them going back months - David Gerard (talk) 13:31, 8 January 2008 (UTC)

Is there any good data you can redact and show us to counter the proposed change to policy? M-ercury at 13:48, January 8, 2008

I also think this is not a good idea (though admins are welcome to do it, some do already). We should definitely reduce the number of indef-blocks, unless the IP has a long history of being Tor (most of the long-term exit nodes are indef-blocked, normally by checkusers). I tend to block for a year to make sure, though 6 months is a reasonable option which I'm leaning towards based on experience, and a month would also be quite reasonable. Initial blocks shouldn't really be any longer than a year. It's really the insta-indef blocks and multi-year blocks that are the problem. -- zzuuzz (talk) 13:47, 8 January 2008 (UTC)

I think I could go with a year, as a max. M-ercury at 13:48, January 8, 2008

Once again, nobody but you actually supports this. Raul654 (talk) 15:19, 8 January 2008 (UTC)

Are you saying that I'm all alone? Raul, look around, there is enough support for a non-indef solution to warrant a discussion and a plan. Please be helpful in this discussion. Let us not personalize this. Regards, M-ercury at 18:01, January 8, 2008

Since the list of TOR nodes is published, why can't they be blocked, but monitored by a bot to see if they are still being used as such? A bot can report when it is no longer a node. I ran a TOR node once, so that people in China and other places can use the internet; if I found out that got me booted from WP I would probably shut it down, but then how long would I wait? If we just block and forget then we end up with a long list of blocked IPs that are not proxies. 1 != 2 17:05, 8 January 2008 (UTC)

I thought the plan was to block indef but still check all of them on a regular basis and unblock the ones that are no longer exit nodes. If we block for a week and a Tor node is operating for a year, all we're going to do is block the same IP 52 times. If we block indef and it's only an exit node for a week, we unblock after a week; if it's an exit node for a year, we unblock after a year. If a script checks all the current blocks when it runs, there is no reason to use such short blocks. Mr.Z-man 20:58, 8 January 2008 (UTC)

TOR nodes could be allowed, but anon-only, with account creation blocked for a set period and not indefinitely - with people having to request an account via the unblock-en-l mailing list if they wish to edit? Fair suggestion? Thanks, --Solumeiras 21:54, 9 January 2008 (UTC)

Template:Torblock suggestion

This IP address has been blocked due to the open proxies policy because it is a Tor exit node which allows editing Wikipedia. If this is your IP address and you wish to edit Wikipedia from it, please either stop operating Tor as an exit node or follow these instructions to prevent people from using this exit node to edit Wikipedia. Then request unblocking by going to #wikipedia-en-unblock (connect). When it is verified that this can no longer be used to edit Wikipedia it will be unblocked. If you are not using Tor, simply request unblocking and you will be unblocked when it is confirmed that this IP is not a Tor node.

(Do we have a page with instructions for exit node operators to disable editing Wikipedia?) Random832 14:32, 8 January 2008 (UTC)

Most exit node operators have probably never heard of Wikipedia. This template basically duplicates information which is already in {{tor}}, specifically the link to Wikipedia:Open_proxies and m:WikiProject on open proxies/Help:blocked, though I accept the linked help instructions need to be improved. -- zzuuzz (talk) 14:40, 8 January 2008 (UTC)

In addition to below: When pre-emptively blocking via the bot on nl:w, I don't place templates on the talk page. Instead I provide a clickable link in the remark/comment line, that refers to a page telling the user why this IP has been blocked, i.e. this one for TOR. - Rgds RonaldB-nl (talk) 14:42, 8 January 2008 (UTC)

So you know, a template placed in the comment line will appear fully expanded when the user attempts to edit. There are a number of templates on en.wiki specifically for this purpose. Random832 14:50, 8 January 2008 (UTC)

Them not having heard of Wikipedia isn't a problem; they'll only see the template if they attempt to edit Wikipedia. And those who don't care about editing Wikipedia (and thus won't do this) can stay blocked. And the reason for duplicating this information is that if this is on their talk page and (ideally) the block summary, they're more likely to see it. Random832 14:49, 8 January 2008 (UTC)

What I was saying is that the number of exit node operators who try to edit is probably minuscule. The template {{tor}} already serves the purpose you mentioned. I use it as the block reason (it's already in the dropdown), and on the talk page. -- zzuuzz (talk) 15:04, 8 January 2008 (UTC)

Blocking policy applied at nl:w


I keep a list of, among others, TOR exit nodes in a database also containing date info, such as date acquired and date last confirmed. That db is updated daily (if I'm not on holidays). Analysis of the entries has taught me the following: there are long-living TOR exit nodes, short(er)-living and very short-living ones. Apparently some people are just trying it out, so the IP appears for a while in the reporting, while others consistently run an exit node. Since I learned that, I apply the following blocking policy on nl:w (and on invitation the same on he:w):

- Hard block if the IP appears for more than 2 days in the list. That allows people to experiment with TOR without being blocked immediately.
- Unblock if the blocked IP does not appear as an exit node anymore for a period of two months.
- If an IP that has been unblocked earlier appears again as an exit node, then it is blocked again (btw SQL recently fixed).

Due to the statistical nature of TOR, that seems adequate, as can be seen from this. The automated batch process is run 1-2 times per week. The result is that at any moment in time approximately 3000 IPs are blocked. See the current list of blocked TOR exit nodes and view the history to get a feel for the dynamics. Btw: I'm observing that the TOR network is rapidly growing. It has more than doubled in size in a year, from some 1200 to more than 2500 nodes (all, i.e. onion and exit nodes together). A similar blocking policy is applied for so-called exit nodes, i.e. the end-node of a cascade of ordinary open proxies. Those exit nodes are generally used by multiple open proxies as can be found on the internet and hence behave more or less similarly to TOR. These can only be found by scanning, i.e. finding confirmation whether a published IP is indeed an open proxy or not. That's what I'm doing 24/7. A third category that is pre-emptively blocked are the IPs used by anonymizers. Although proxy.org publishes some IPs, my scanner finds more. For the time being, IPs found this way are indefinitely blocked, as they appear to be predominantly IPs used by hosting providers. Rgds RonaldB-nl (talk) 14:33, 8 January 2008 (UTC)

That seems utterly sensible. I don't run bots myself but hopefully one of the concerned admins who posted above will bring your system online here. Thatcher 16:24, 8 January 2008 (UTC)

As a member of WP:BAG I request that you file a WP:BRFA but don't file a WP:RFA. Since I know that this will bring every wacko that uses Tor out against you, I am going to see if I can just get a B-crat to do this. command 16:57, 8 January 2008 (UTC)

See here for an earlier approved bot status. It is currently taking care of Wikipedia:Open proxy detection. RonaldB-nl (talk) 17:48, 8 January 2008 (UTC)

I'd like another BRFA for automatic blocking/unblocking. command 18:05, 8 January 2008 (UTC)

Why bother? The community is going to stomp on it just like every other adminbot. east.718 at 22:57, January 8, 2008
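RonaldB's nl:w procedure reduces to a small decision rule applied to each IP after the daily list scan. The sketch below is a hypothetical reconstruction in Python: the two-day grace period, two-month absence window, and re-block-on-reappearance rule come from his description above, while the record fields and function names are invented for illustration.

```python
from datetime import date, timedelta

# Hypothetical reconstruction of the nl:w blocking rules described above.
# The record layout and names are assumptions, not RonaldB's actual code.

GRACE = timedelta(days=2)      # let people experiment with Tor briefly
ABSENCE = timedelta(days=60)   # ~two months off the list before unblocking

def decide(rec, today):
    """Return 'block', 'unblock', or 'leave' for one IP record.

    rec has 'first_seen' and 'last_confirmed' dates plus 'blocked'
    and 'ever_blocked' flags; the daily list scan keeps them updated.
    """
    on_list_today = rec["last_confirmed"] == today
    if not rec["blocked"]:
        # Persistent exit node, or one that was unblocked and came back.
        if on_list_today and (rec["ever_blocked"]
                              or today - rec["first_seen"] > GRACE):
            return "block"
    elif today - rec["last_confirmed"] > ABSENCE:
        # Not confirmed as an exit node for about two months: unblock.
        return "unblock"
    return "leave"
```

The point of the rule is to avoid the block-and-forget problem raised earlier in the thread: every run can both add new blocks and retire stale ones.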

I suppose I should make a comment here but I find it all very depressing. There are some pages on Wikipedia I can only load using Tor, and I'm not in mainland China or anything. I suspect the Tor fighters all have very nice, expensive internet connections. I'm not expecting the groupthink to move, but you would have thought from reading WP that Tor had no benefits whatsoever. I presume from one of the comments above that if I connect via Tor I'll be accused of being some loser's sockpuppet. Sometimes I really wish a lousy internet connection on some of you people. Secretlondon (talk) 01:21, 9 January 2008 (UTC)

Also please note that we are not preventing TOR users from reading Wikipedia. All that we are doing is preventing them from editing, not reading. command 02:24, 9 January 2008 (UTC)

As a bureaucrat, the blocks on the nodes will not affect you whatsoever, but I understand the issue that you raise. Ryulong 02:09, 9 January 2008 (UTC)

I don't: what pages exist that can only be edited using Tor for some users? Why? Kusma (talk) 09:43, 9 January 2008 (UTC)

I would assume in Secretlondon's case, some of the pages that take longer to load would be an issue for users with poor internet connections. However, I understood that the blocks may harm people who require Tor to access Wikipedia (it is the primary reason that they all haven't been blocked in the past). Ryulong 11:06, 9 January 2008 (UTC)

How is the issue of editing pages through Tor connected to poor internet connections? Kusma (talk) 13:11, 10 January 2008 (UTC)

Tor can help with DNS issues, because the hostname is resolved by the Tor server (via SOCKS 4a) instead of the user's normal ISP. It's not a very effective solution, but it can help a lot with flaky ISP DNS servers. -- zzuuzz (talk) 13:54, 10 January 2008 (UTC)

I agree with something like the NL solution, although I'd go with unblocking after they've been a non-exit node for less time than two months, personally. Also, I would suggest that account-holders be allowed to edit throughout the block; account creation I am ambivalent about. Also also, there is the issue of Tor nodes which disable editing of Wikipedia, which are safe for us. I should note that I consider an NL-like solution to actually be a valid use of indefinite blocks (since the time interval required is indefinite) that doesn't equate to "permanent". I could help with writing a bot for this if needed, although I doubt that's the real bottleneck here. Whatever solution we come up with and agree on, I also agree with the sentiment that we can't let good editors from China and other places get hurt by it. If we can at least have a link to Wikipedia:Advice to users using Tor to bypass the Great Firewall when notifying users about being blocked due to Tor, that might help them a little. As User:Gmaxwell rightly points out, we aren't overrun with vandals coming through Tor, and User:rspeer is likewise correct that we should try to accommodate these disadvantaged users. The advice on the above-linked projectspace page seems to provide Chinese users with a workable, if complex, way of getting in and doing some good editing. So, it seems like the main problems with vandals are anonymous (IP) editing and the capacity to mass-create accounts? Let's make those the restrictions on Tor proxies, then, and any vandal accounts that slip through regardless can be dealt with in the usual manner. Failing that, the Wikipedia:IP block exemption proposal would achieve basically the same effect, yes? All that would be necessary to add to the "Great Firewall" advice is that the person getting your account set up for you needs to request that hardblock exemption flag also, I guess. I'm starting to ramble a bit now, so I'll shut up. :-) --tiny plastic Grey Knight 13:09, 25 January 2008 (UTC)

Options

There have been a few suggestions as far as timed blocking, and a few suggestions for a bot. I encourage the bot maker to go ahead and do the appropriate approval request. That would be great. As far as timed blocks, there appear to be more suggestions for that as opposed to indef blocking without the bot. Let's throw some times out there and see if we can come to a consensus. Any ideas? M-ercury at 23:14, January 8, 2008

Flat out question

In the next 24 hours, I plan on running the script I have on the list of Tor exit nodes. My plan of attack is as follows:

Hard block (anon only off)
Block length: Five years
Block reason: {{tor}}

Honestly, if the IPs are no longer Tor nodes, then we can expect a handful of {{unblock|This is not a Tor node. Please unblock this IP.}} showing up. This seems to be the simplest option, instead of programming a bot to check for exit nodes or blocking this list every week or every month. Should I go through with it? Should the blocks be longer? Should the template be different? Ryulong 02:21, 9 January 2008 (UTC)

Where is your data source for the nodes? M-ercury at 02:24, January 9, 2008

I have Proxy.org's list of exit nodes. Ryulong 02:29, 9 January 2008 (UTC)

Why don't you get them directly from the up-to-date directory? M-ercury at 02:43, January 9, 2008

Meaning? Ryulong 02:54, 9 January 2008 (UTC)

The TOR authoritative directory server has the most up-to-date Tor exit node information for the Tor clients to use. Why not use the data they are publishing? If we are blocking based on a list, then it needs to be from TOR, and not from a third party. M-ercury at 03:03, January 9, 2008

I do not know where this data is. The list I have is the only list I have found that was easy to copy and paste. Ryulong 03:20, 9 January 2008 (UTC)

It is easy to find; you should start at this place for how Tor works, and it should lead you to finding the authoritative directory. I am questioning the reliability of your data. There is no consensus here for your method however, and I urge you not to do this thing, at least until a consensus is generated. M-ercury at 03:32, January 9, 2008

This one [2], and this one [3] both give 1092 IPs. --Stephen 04:57, 9 January 2008 (UTC)

The last time this was discussed (with a lot more participants, I note), at WT:NOP, there was about a dead-even split between whether hardblocks or softblocks should be applied to TOR nodes, let alone whether preemptive blocking should occur. I do not agree to this change without resolution of that question; there seemed at that point no consensus that hardblocks should be applied to nodes not being currently and actively used for vandalism at all. The best resolution, perhaps, would be to see ipblock-exempt as a permission that admins or crats could grant to regular users, but until that happens, I do not support preemptive blocking of all TOR nodes. This needs a bot approval and a real RfA if it is to go forward, with full explanation of what it is intended to do. This will not go through the back door. Seraphimblade Talk to me 07:16, 9 January 2008 (UTC)

What bot would be necessary? There is already a script that had been written for these circumstances (similar, but not exactly the same) that I have used in the past when I found lists of open proxies, and that has been used in the past by other users. There is no preemptive blocking, as we know that there have been exit nodes that were abused. Is there a reason not to block Tor because it's preemptive, or because there are individuals who use it in good faith? Ryulong

Also, this was not originally going to be a "back door" thing as you accuse it of being. I had posted this on WP:AN until it was moved here and the conversation continued without my having been contacted. Ryulong 07:33, 9 January 2008 (UTC)

No. I would also urge you not to do this. I would suggest that you spend a little time learning something about the Tor directory - it's one of the most dynamic and short-lived lists of open proxies on the Internet. Just take a look at the data here, and notice the uptime. These are a completely different thing from webhosted proxies. The idea that we should block a shed-load of dynamic IPs for five years and wait for unblock appeals is not what we do here. I also echo some of Seraphimblade's concerns - although I disagree completely about using softblocks, I do agree that there is no consensus for a total ban of any use of Tor to edit. I would go further and say that such an idea has been soundly rejected. Any blocking bot should also get community approval as it would be a fundamental shift in policy. There's no big 24 hour rush here; we can afford to consider the best way forward, not the simplest to script. -- zzuuzz (talk) 10:15, 9 January 2008 (UTC)

This is like jailing the Japanese in the US during World War II. Not that they did anything wrong... but hey, they look funny and they MIGHT do something bad. And then I see, above, that someone wants to do it through a process that avoids community review and consensus, even though Wikipedia supposedly runs by consensus. This whole thing is just a really bad idea in principle. --Blue Tie (talk) 10:32, 9 January 2008 (UTC)

That is in my opinion a really poor analogy. Tor nodes have been abused in the past. Open proxies have been abused in the past. If abuse can be prevented, why not do it? The only reason that these blocks may be harmful is to the Chinese who require Tor to edit. I was seeking consensus here (or rather, in my initial posting at WP:AN). Right now it's split on "how long?" and "how should it be checked?" I've submitted the list I had to another administrator, and another administrator will be supplying me with a list of static exit nodes to work with. There is also nothing to suggest that the list of IPs I have are all dynamic IPs. All the list contains are exit nodes, prepared to go through a script that was written explicitly for this purpose (blocking several IPs for being open proxies). I can see now that there is a slim chance that anyone would be given any go-ahead. I just have everything prepared, and in the end, the task would more than likely be delegated to several users (at the default setting in the script the blocks would take more than two hours to complete). I have run this script in the past on lists of open proxies, and there have only been a few issues that I later corrected after correspondence with the users affected. Ryulong 11:04, 9 January 2008 (UTC)

Poor analogy? Let's see. Tor nodes have been abused in the past. Pearl Harbor was bombed in the past. Open proxies have been abused in the past. American freedom has been abused in the past by Japanese spies. If abuse can be prevented why not do it? If Japanese evil can be prevented by locking them all up, why not do it? Yep, hard to see any way that the thinking is similar. --Blue Tie (talk) 11:14, 9 January 2008 (UTC)

I would like to point out, Ryulong, that you have indefinitely blocked two Tor nodes in the last two days: 219.112.19.106 (talk contribs block log) and 217.80.223.164 (talk contribs block log). Please now check them again. Neither of these IPs is running Tor any more. With the ability to block comes the responsibility not to block unnecessarily. -- zzuuzz (talk) 16:51, 9 January 2008 (UTC)

Both of those IPs were Tor nodes at the time and were being abused (their edits show that). I get it now, and I don't give a damn about my plans anymore. If you want to still do this, fine. I'm going to delete the list from my personal files. If Wikipedia doesn't want to prevent abuse, then so be it. Ryulong 22:21, 9 January 2008 (UTC)

Since Tor uses both hosted servers on static IPs and personal routers on dynamic IPs, a bot that runs regularly (every 24h or week) would be better than a script run on a "whenever I feel like it" basis. A bot could issue indef blocks to all Tor nodes it finds, but check all current blocks when it runs so that IPs that are no longer Tor nodes can be unblocked. Mr.Z-man 21:14, 9 January 2008 (UTC)

As I've described here, the approach I've adopted takes care of dynamic IPs (as well as trial exit nodes). Before I implemented this refinement, we got one complaint on OTRS-nl regarding an IP which had been used as an exit node, but since then as an onion node only (which I could verify). Since then, zero. Be careful with defining what the most authoritative list is. If the one I'm using is down/stuck for a couple of days, which I've noticed two or three times in a year's time, all other lists I've found on the internet are frozen as well. If that happens I start my scanner to acquire new exit nodes, but I must admit that this is a cumbersome and time-consuming process. - RonaldB-nl (talk) 01:05, 10 January 2008 (UTC)

I think your approach is the most sensible. -- lucasbfr talk 13:04, 10 January 2008 (UTC)

First do no harm
My suggestions:

1. Any initiative to start blocking many IPs based on a list of Tor nodes should start with a pilot project blocking perhaps just 50 to 100 IPs until we know more about what we're doing.
2. Tor nodes don't vandalise, vandals vandalise. There are many legitimate reasons to be using Tor. In investigating certain really bad sites for WP:SPAM, I've used Tor for an extra layer of anonymity.
3. The person running a script, a bot, or doing this by hand must commit to checking the list every two days and unblocking all inactive nodes.
4. The checking, blocking and subsequent rechecking and unblocking all needs to be logged on some centralized pages so the community can review how the experiment is running.
5. Editors with accounts should still be able to edit through these nodes pending broader community consensus on editing through Tor nodes.
6. Once we have some experience, then we can decide whether to expand this initiative to all Tor nodes.

--A. B. (talk) 03:49, 10 January 2008 (UTC)

New to the discussion but I agree with point 5 above. Users with an account shouldn't be blocked from editing through a Tor node, just like with blocked IPs. However, account creation should be blocked. Think outside the box 13:47, 10 January 2008 (UTC)

Sounds fine to me. But I'm new to the discussion too. --Puchiko (Talk-email) 19:20, 11 January 2008 (UTC)

Per #2, remember that blocking doesn't prohibit you from reading pages, just editing. Per #5, blocking anon-only would not work for a few reasons, but I've yet to see much opposition to creating an ip-block-exempt usergroup (to allow users to edit from a blocked IP address), so I can't imagine that getting consensus for that would be hard. (Harder would be convincing a sysadmin to do the change after the drama from the rollback group.) Mr.Z-man 21:13, 11 January 2008 (UTC)

"Want to move the world, first move yourself." If Wikipedia wants there to be human rights (which isn't clear from its core principles), let Wikipedia establish it on their own site before worrying about aiding in breaking the law in foreign governments, even if such governments are corrupt. Zenwhat (talk) 23:04, 12 January 2008 (UTC)

I agree that we need to be cautious in implementing something like this. Any process that is implemented should be sure not to block Tor nodes that have disabled editing Wikipedia. We've heard before from people who edit Wikipedia (because they're interested in giving the world access to knowledge) and run a Tor node (because they're interested in giving the world access to knowledge); we should not punish these people for their consistency, especially if they're responsible enough to make sure that Tor and WP don't mix. On a larger scale, I think some Wikipedians need to rethink their knee-jerk position on Tor. There's Wikipedia vandalism, and there's large-scale government censorship, and I know which one I'd rather see prevented. rspeer / ds 10:11, 13 January 2008 (UTC)

"Large-scale government censorship" = red herring fallacy. Fnagaton 11:11, 13 January 2008 (UTC)

I don't see the fallacy. Governments like China's censor the Internet, particularly Wikipedia. Tor works around censorship of the Internet. If we want to prevent Tor from working around Wikipedia blocks -- which is a much smaller issue than the problem Tor solves -- we should find a way to do it that does not involve blocking all IPs containing Tor exit nodes. Presumably we edit Wikipedia because we want others throughout the world to read it, so we shouldn't fight the tool that allows them to; we should work with it. rspeer / ds 02:05, 15 January 2008 (UTC)

[Chinese-language comment; original characters lost in transcription] -- Preceding unsigned comment added by 219.112.19.106 (talk) 09:08, 8 January 2008 (UTC)

(free translation: May I ask, what is your problem with/hatred against Chinese people?)

Please write in English. This is not about whether people in China should edit here. It is about blocking an easy access point to an infinity of sockpuppet accounts. That Internet users in China can only use Tor to read and not to write is collateral damage that seems unavoidable. Kusma (talk) 09:16, 8 January 2008 (UTC)

Of course it's avoidable. Softblock Tor, and get faster about blocking disruptive users (whether they're socks or not; then it won't matter, act up a few times and you're gone, not to come back). I think rspeer said it above: when given the choice between "enabling repressive governments" and "enabling Wikipedia vandals and sockpuppeteers", bring on the footwear hordes any day. Seraphimblade Talk to me 20:46, 13 January 2008 (UTC)

Uh, no. As someone who has spent more time than anyone else manually recursing through checkuser to block people who register dozens or hundreds of sockpuppets (including about a dozen hours this week tracking down three particularly problematic sockpuppeteers - [4][5][6]) -- the last thing that should happen is that they are given access to a fresh source of account registration. If I see these TOR nodes come up, I *will* be blocking them indefinitely. Raul654 (talk) 20:56, 13 January 2008 (UTC)

Then you will be going against consensus and subjecting yourself to dispute resolution, to include arbitration if RFC does not work. M-ercury at 20:58, January 13, 2008

No, I will not be, because (a) policy still supports blocking them indefinitely, and (b) because despite all your attempts otherwise (such as trying to change the subject from Ryulong's original question of how to go about blocking them into a referendum on whether or not to do it) the fact remains that you have not achieved consensus, or anything near it, to change policy. Raul654 (talk) 21:01, 13 January 2008 (UTC)

Consensus was achieved here to not block Tor proxies indef, and the blocking policy has been updated to that effect. This discussion was also crossposted to Wiken-L, Foundation-l, AN, VPP, VPT and a couple of other places. Despite your disagreement, we do have consensus here to that effect. Sorry. Regards, M-ercury at 21:05, January 13, 2008

This is Wikipedia, not fantasy land. Claiming consensus does not make it so. Raul654 (talk) 21:07, 13 January 2008 (UTC)

No, fantasy land would be saying something like this when clearly there were supporters in the discussion. M-ercury at 21:10, January 13, 2008

Supporters for what, exactly? This discussion is so confused - mostly due to you, personally - that it is difficult to see what you are talking about. You wanted to limit blocks to a week - that idea went down in flames (both David Gerard and zzuuzz explicitly spoke out against it). You wanted to limit pre-emption, and Thatcher, myself, etc. rejected it. So what exactly are you claiming consensus for, and who supported it? Raul654 (talk) 21:29, 13 January 2008 (UTC)

There is considerable consensus for not indefinitely blocking IP addresses. There has been for a long time. - zzuuzz (talk) 21:32, 13 January 2008 (UTC)

Claiming a lack of consensus does not make it so either. Random832 21:10, 13 January 2008 (UTC)

If an ip-block-exempt usergroup is ever implemented (not sure what we're still waiting on, local policy perhaps?) or Wikipedia:WikiProject on closed proxies is expanded to aid legitimate users who must use a proxy, this would almost be a non-issue. Mr.Z-man 00:37, 14 January 2008 (UTC)

Raul, when someone says "I'm about to make a bot to perform an action on an unnecessarily large scale that goes against the goals of many Wikipedians", turning the discussion into a referendum on whether to do it is the right thing to do. rspeer / ds 02:12, 15 January 2008 (UTC)

I've set up and added a proxy at WP:WikiProject on closed proxies. I'm kind of surprised I'm the only one... Celarnor Talk to me 16:47, 3 April 2008 (UTC)

Stop the madness! / Technical solutions?


We recently had a rollback debate that over 500 users participated in. Afterward, one of the sysadmins' comments was: how is it possible that you can find 500 people to comment on rollback and not a single person to code a compromise solution? That's the crux of the issue here as well. We're focusing on blocking policy and how to stop all Tor uses. Instead, why can't we focus on possible solutions to the greater issue? WS: As a follow-up, Raul654 asks: Roger Dingledine, inventor of TOR, has said that if Wikipedia implemented a trust metric, this would effectively solve the problem of proxies. Have you considered adding such a feature?

JW: It is not up to me, but that avenue of approach seems viable. I think we should only soft-block Tor anyway. Jimmy also made a comment on his blog, vague but simple: here. I can't write code, but I know that there are tens of developers who are able to. I also know that there are other developers outside of Wikipedia who could help. If we spent half the energy toward a solution that we put toward this blocking policy, we would have an answer. I'm not trying to say that we should defer to Jimbo, and going even further, I'm making no comment on what blocking policy should be with regards to Tor nodes. Personally, I don't use Tor, so I simply don't care. What I am saying is this: why can't we find a better way, a way that's more in line with our values and the community's values? -MZMcBride (talk) 21:40, 13 January 2008 (UTC)

I'd code something, but it would be a system that soft-blocks any IP that appears on a Tor list and only unblocks once the IP is off the Tor list. Fnagaton 22:27, 13 January 2008 (UTC)

Unfortunately softblocks are useless because vandals create accounts. Bugzilla:9862 has been proposed as a solution. -- zzuuzz (talk) 22:30, 13 January 2008 (UTC)

What about the trust metric? What is that and how could it be implemented? Or, if the trust metric system isn't the right direction, what about a bot to softblock IPs? (These blocks should be impersonal if done.) -MZMcBride (talk) 00:01, 14 January 2008 (UTC)

FWIW, I've got a list up, updated hourly, of non-blocked Tor nodes: User:SQL/Unblocked TOR. SQL Query me! 03:58, 14 January 2008 (UTC)

Btw, those Tor nodes don't necessarily *need* blocking. They're all exit nodes (to the best of my ability to figure that out), but not all allow wiki editing. I meant the list for statistical use only, not as a block checklist... SQL Query me! 04:11, 14 January 2008 (UTC)

policy proposed

talk page -- Preceding unsigned comment added by Mercury (talk contribs)

A possible trust system

I have been semi-following the discussion here, and while I understand the Tor issue, I also understand the problem with hard-blocking the Tor exit nodes (the Chinese, registered users' IP addresses being not too private, etc.). User:MZMcBride suggested above a trust system as a possible solution. I thought about it, and concluded that it might be a good idea. Here is a possible trust meter system that I thought of:

A user's (anons included) trust is measured as a real number between 0 and infinity. Anons in the same subnet share the same trust meter. Newly registered users start with the trust level of the IP address they register with. Anons start with a trust level of 1. Administrators have a trust level of infinity (or NaN).

A user's trust level changes as follows: for every day of constructive editing a user's trust level increases by 1. A constructive day is a day in which the user edited Wikipedia and did not receive a warning from a more trusted user (possible problem: null edits; possible solution: null edit warnings). If a user is blocked (for any duration), his trust level is cut in half. If the blocked user is a registered user, his IP address's trust level is also cut in half (in a similar style to auto-blocks).

In this system, an article being semi-protected would mean that users are required to have a trust level of more than 3 (or some other number) in order to edit the article. This (partially) solves the sleeper accounts problem, as an account is required to not only be registered for 3 or more days before it can edit semi-protected articles, the account also needs to be used in these days in a positive manner.

Another, optional, page-protection method can be "auto-protection", in which articles which are actively edited are auto-protected, requiring users to have a trust level of more than 1 before they can edit the article. Actively edited articles can be defined as articles having more than 4 editors working on them during the past 2 days. Auto-protection makes sense as active articles are more likely to be targeted by sock-puppets and random vandalism. Also, collateral damage would be kept to a minimum, as an article having 5 or more editors actively editing it is unlikely to need edits from a not-so-trusted editor.

From a technical perspective (as far as I can tell, as I don't know how Wikipedia is implemented, and I don't have any real experience developing large-scale websites) this system can be implemented by scanning the editors of the past 24 hours every 12 hours, and adding 0.5 trust points to a user if he has edited sometime between [currenttime-24h] and [currenttime-12h] and has not been warned in the past 24 hours (again, only AFAIK). This system should be most effective when the trust level is mostly hidden from the users (access granted only to administrators, or bureaucrats, or checkusers). Any comments? Thoughts? Suggestions? Rami R 19:42, 15 January 2008 (UTC)

I don't know that a trust system would work with TOR users, as nearly every edit (even from the same IP) has the potential to be by a different user. SQL Query me! 20:46, 15 January 2008 (UTC)

Trivial to game. Just edit nicely for a while. Since the metric is so simple you could even have a bot do the work. And if the user's proxy is blocked, then how will they build trust? Use their non-proxy IP? Okay, but it defeats any anonymity purpose of using the proxy. --Gmaxwell (talk) 21:09, 15 January 2008 (UTC)

Also, a "warning" is near impossible to determine technically unless we were to totally redo the warning system by building it into the software. Most template warnings have hidden text which indicates what they are, but a handwritten warning would not, and there is no technical difference between a warning edit and any other edit. And like Gmaxwell said, all you would have to do is fix a typo a day for 4 days and you could edit a semi-protected article; this would basically be like the current autoconfirm system, but with an edit count restriction of 4 instead of 0. Also, the auto-protection thing is backward IMO. The articles that should be auto-protected, if any, are the obscure articles where vandalism might stick around for days or weeks. Active articles have people watching for vandalism and other abuse and attract more new users to editing. Mr.Z-man 02:45, 16 January 2008 (UTC)
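For concreteness, the update rules in the trust-meter proposal above can be sketched in a few lines of Python. The class and method names are invented for illustration; nothing like this exists in MediaWiki, and the thresholds are the ones suggested in the proposal.

```python
# Sketch of the proposed trust metric (illustrative only).
# Rules from the proposal above:
#   - anons start at 1.0; new accounts inherit their IP's trust
#   - +1 per "constructive" day (edited, no warning received)
#   - any block, regardless of duration, halves the trust level
#   - semi-protected pages require trust greater than 3

class TrustMeter:
    def __init__(self, initial=1.0):
        self.level = initial

    def constructive_day(self):
        """Credit a day of editing with no warnings received."""
        self.level += 1.0

    def blocked(self):
        """Any block, regardless of duration, halves trust."""
        self.level /= 2.0

    def can_edit_semiprotected(self, threshold=3.0):
        return self.level > threshold


anon = TrustMeter()              # starts at 1.0
for _ in range(4):
    anon.constructive_day()      # four good days -> 5.0
assert anon.can_edit_semiprotected()

anon.blocked()                   # one block -> 2.5
assert not anon.can_edit_semiprotected()
```

The last few lines also make Gmaxwell's objection concrete: four trivially "constructive" days are enough to clear the semi-protection threshold.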

Please do not reinvent the wheel


First, make sure you see my comments at Wikipedia_talk:Blocking_exemption_policy#Evidence_that_the_world_won.27t_end.2C_and_a_counter_argument. There I point out that TOR is (mostly) not currently blocked. Any proposal to exempt or allow Tor, if coupled with systematic software-driven soft-blocking, would almost certainly reduce vandalism from Tor. That Tor is not effectively blocked and yet we live is an important perspective. I'm seeing all these ideas for allowing proxies: trust metrics, approval systems, etc. But there is already a good (and cryptographically strong) solution which should meet our goals (block bad guys, avoid one person having a zillion accounts but only one IP) without excessively compromising the pseudonymity of proxy-using editors: see User:Lunkwill/nym. --Gmaxwell (talk) 21:09, 15 January 2008 (UTC)

Soft-blocking is useless for TOR nodes, as has been shown time and again. I'm astonished to see people still suggesting soft-blocking proxies. Policy is to hard block, period. Jayjg (talk) 03:40, 17 January 2008 (UTC)

I'm not very familiar with the topic, so I'd be glad to see evidence of soft-blocking being useless for most TOR nodes. Everyone here knows the policy, and it's why they want to change it (if it had clear consensus we wouldn't be discussing it here). This is Wikipedia, you don't just say "This is policy. Period". You evaluate the policy, and see if it makes sense, and if it helps the ultimate goal (build a free encyclopedia). If it doesn't, you change it or ignore it, depending on the situation. Puchiko (Talk-email) 21:30, 1 March 2008 (UTC)

A solution
Why don't we reduce the life of blocks and create a Special:Recentchanges/TOR page for patrolling those edits? --Emijrp (talk) 22:08, 1 February 2008 (UTC)

We aim to limit the block length to the length of time the proxy is open, and there's a reason the blocks are not shorter. We can monitor the Tor recent changes at Wikipedia:Open proxy detection. People using Tor will usually change IP address every few minutes (or faster), which means by the time they have been blocked (or unblocked) they have a new IP address. Blocking a single IP address used for vandalism or sockpuppetry simply doesn't work - it's completely punitive and not preventative. The prevention happens by limiting the number of IP addresses the vandals can use, whether that be through blocking them all with a bot, or just blocking the ones we happen to notice. To reduce the vandalism for Tor you don't need to block them faster after they have vandalised by watching the recent changes; instead you need to block more of them from editing in the first place. Unfortunately, the more you block, the more legitimate editors are affected. -- zzuuzz (talk) 22:39, 1 February 2008 (UTC)

Blocking indiscriminately

By proposing a total TOR block you are potentially aiding countries in denying their citizens uncensored internet access; this seems like a backwards step. Perhaps rather than auto-blocking the node, any TOR proxy requires them to register and then monitors their edits, perhaps flagging them up if they start editing lots of documents quickly or re-editing a lot with a similar checksum. Silent52 (talk) 03:22, 27 February 2008 (UTC)

I think it's a terrible idea to indiscriminately block what is effectively the last way for Wikipedians to evade censorship and ensure privacy. Even now, if an established editor uses TOR as a proxy and for whatever reason they get a checkuser run on them, other editors see that as a red flag. For Wikipedians who value both their privacy and their anonymity on the internet, this is a very bad thing. We should be going in completely the other direction, trying to find ways to help editors use proxies. Celarnor (Talk to me) 06:53, 3 April 2008 (UTC)

I think we should not do anything indiscriminately when we have chosen our admins for their discretion. (1 == 2) Until 13:46, 3 April 2008 (UTC)

Is this done/over?

With Wikipedia:IP block exemption now policy (and presuming that it's been implemented by developers), is the discussion on this page now concluded? Or is there something further to discuss? (For example, there doesn't seem to be a consensus about standardising length of such blocks.) - jc37 18:24, 18 April 2008 (UTC)

There are two related issues on this page: preventing vandalism from Tor while allowing constructive edits, and avoiding collateral from Tor blocks when the IP is no longer Tor. IP block exemption will not help with the latter. If I can boldly summarise the consensus: Tor blocks should not extend beyond the life of the Tor node. In many cases it is only a few days, because IPs can be dynamically reassigned and Tor is a program which can be (and often is) closed down after testing it. In a few cases they can be open for years. Because the lifetime of Tor nodes is variable there cannot be standardisation in block lengths, but initial blocks under a year will suffice in the overwhelming majority of cases. Additionally, the issue of whether to systematically and/or automatically block every Tor IP has no consensus, and may need to be revisited after the implementation of IP block exemption or quite possibly m:Global blocking. -- zzuuzz (talk) 10:18, 19 April 2008 (UTC)

Looks like a good enough summary to me. So we should probably archive the page as "done". (And possibly note the "result" somewhere.) - jc37 18:28, 19 April 2008 (UTC)

Can be done with a simple shell request


This can be done using MediaWiki's support for DNS blacklists. It is silly to block all of them at once when we can just use a DNS blacklist for the purpose, which will automatically update. This is a two-line shell request. Werdna talk 03:49, 22 April 2008 (UTC)

This is not over.

I completely disagree with everything. I have run a Tor node a few times, once for a week and the other times for a few days. I would dare to step out on a limb and suggest this to be perhaps typical of casual users or those simply wanting to give it a test run. I was surprised to find myself blocked from Wikipedia because of this. After reading the rationale, I am almost disgusted. I understand the need to protect WP against vandalism, but this is practically a witch hunt against everyone who is running Tor. With terminology used in this discussion such as "plan of attack", you are clearly taking an offensive that yields considerable collateral damage by blocking all Tor exit nodes.

Let me introduce a new aspect here. I wasn't even allowing HTTP to leave my node! Since your inferior technology cannot even determine what the exit policies of the various nodes are, your indiscriminate blocking of them all is a violation of free speech, protection of individuals suffering from oppression, and basic human rights. Furthermore, I find the very fact that you are doing obtrusive scanning (or subscribing to others who do) flat out appalling. All of this is playing into the hands of oppressive governments, whose bidding you are doing. I detest Wikipedia supporting totalitarian regimes. Where is Wikipedia going with this outright hatred, the ongoing censorship practice and questionable editing policy? You are shooting yourself in the leg. My distrust has certainly skyrocketed. --84.249.164.59 (talk) 03:42, 17 March 2009 (UTC)

Collateral damage from Tor blocks

Like most people in Singapore, I use SingNet as my ISP, and SingNet's proxy system means that my outward IP changes (almost) every time I edit. At some point a few weeks back, some admin hard blocked one (or more) of SingNet's proxies somewhere in the 220.255.7.xxx range, meaning that every SingNet user is, pretty much randomly, intermittently blocked from editing whenever their edits go via that proxy. My specific complaints:

1 -- Obviously, hard blocking the SingNet proxy was in error, and the block should be rescinded and the admin at fault slapped with a trout.

2 -- Worse, though, is that the block message (MediaWiki:Torblock-blocked) is uninformative, useless and partly incorrect. Here it is in its entirety:

You have been blocked, because it has been detected that you are using the Tor anonymising service, or you or somebody sharing your IP address is running a Tor exit node. Editing pages from tor exit nodes is disabled due to abuse. In order to edit through tor and from IP addresses running tor exit nodes, you need to request an account and then request block exemption. Please note that this will only be granted in exceptional circumstances. Please note that some Internet service providers may use a single IP address for a large number of users. If one of those users runs a Tor exit node, then all of them will be blocked.

Note that this a) does not give the blocked IP or any way to deduce it, b) does not give the blocking admin or any reason he left, c) gives incorrect instructions (I already have an account and am using it to edit), and d) does not provide any sensible way to work around the block (IP block exemptions for all SingNet users in Singapore is not sensible). Suggestion: Adopt the text from MediaWiki:Blockedtext. I'm not sure, though, how many of the arguments passed into Blockedtext (eg. IP address, blocking admin) are available from MediaWiki:Torblock-blocked.

For the specific case of SingNet, there is an immediate workaround: manually set the browser to use proxy.singnet.com.sg:8080, which then routes all requests through 220.255.4.30. But it took me days to figure that out... Jpatokal (talk) 11:25, 6 May 2009 (UTC)

See Wikipedia talk:SGpedians' notice board#On SingNet and getting hit with Tor block notices lately? for reports of other Singaporean editors getting slammed by this.
Jpatokal (talk) 03:31, 7 May 2009 (UTC)

This was an error of an administrator that blocked a (likely former) Tor node (auto blocks for Tor nodes are based on a regularly updated IP list provided by Tor itself), while it's only a transparent proxy at this moment. Problems like these are a side effect of the usage of proxies by ISPs. As the Tor template clearly explains: "Please note that some Internet service providers may use a single IP address for a large number of users. If one of those users runs a Tor exit node, then all of them will be blocked." A common problem with ISPs that are running out of IP addresses, but unfortunately, not something that Wikipedia can solve. These ISPs are using the internet in a way it was not designed for, and as such, its users should expect problems. Said ISPs are welcome to enable XFF headers, and their users can request wikimedia to trust those XFFs, which makes it easier for wikimedia to identify individual users behind the proxy. Something else those ISPs can do is to develop IPv6 infrastructure for themselves and Wikipedia. People may not like it, but this is a technical issue that is difficult to solve, which, combined with an error of an administrator (who assumed a proxy was a Tor node, because it had been before), resulted in a problem for many SingNet users. It's life, in such large projects as Wikipedia. User errors cannot be avoided and the world (of technology) is not perfect. Advocate with your ISPs to be provided with non-proxied IP addresses and at least one error won't affect a large set of users. The damage through Tor nodes has been so severe that it cannot be tolerated as it has been before. We have blockexempt for people who want to truly contribute. It is clearly stated in the template. Perhaps a more friendly message might be an idea, however: "If you have no idea what Tor is, you may request a 'blockexemption' here..."

So to summarize:

1. Don't use ISPs who proxy.
2. If you have to, request they enable XFF, and inform wikimedia about these XFF headers.
3. Fight for IPv6 in the long run.
4. Make the block message easier to understand for people without technical skills.
5. Accept that administrators can make mistakes on such complicated cases.
TheDJ (talk contribs) 12:11, 7 May 2009 (UTC)

Also, always NOTE the IP that has been blocked; user reports without the IP information are somewhat pointless, because we cannot take further action upon them. TheDJ (talk contribs) 12:26, 7 May 2009 (UTC)

That's exactly what I'm bitching about -- how can users getting hit with that message report their IPs, since the block message does not give any way of finding out the IP!? Fixing this is the single biggest step you can take to solving this problem.

This is something we could look into. TheDJ (talk contribs) 16:08, 7 May 2009 (UTC)

I've now added the IP address to the torblock message.[7] -- zzuuzz (talk) 16:42, 7 May 2009 (UTC)

Yay! Thank you. Jpatokal (talk) 01:47, 8 May 2009 (UTC)

No, block exemptions are not a sensible way of solving this; as the message itself says, they're granted "only in exceptional circumstances". I've been editing Wikipedia since 2003, and when I asked for a block exemption, it was refused by another admin who failed to understand what was going on despite my repeated attempts to explain.

At least now you have it. I'll look into it further however. TheDJ (talk contribs) 16:08, 7 May 2009 (UTC)

And finally, SingNet is already listed at XFF as a trusted supporting ISP... Jpatokal (talk) 14:48, 7 May 2009 (UTC)

It seems singtel/net has changed the IP range they are using. The XFF list has "165.21.154.0/24" whereas the IPs reported are in the range 220.255.7.xxx. TheDJ (talk contribs) 16:08, 7 May 2009 (UTC)

There's been, I suspect, just the one Tor exit node on SingNet which exits in the 220.255.7.x range. These IPs will have been automatically blocked on a rotating basis for about the last month. It seems SingNet has the whole /16, but the Tor proxy's only been on the /24. -- zzuuzz (talk) 16:25, 7 May 2009 (UTC)

Yup, you are right I think. The current list of Tor exit nodes contains several 220.255.7.xxx IPs it seems. I still haven't been able to confirm the usage of XFF headers however. TheDJ (talk contribs) 16:44, 7 May 2009 (UTC)

Domas has now enabled XFF for 220.255.7.*. Though the singtel range is larger, he is not entirely sure where the range of XFF-enabled IPs begins and ends, so he is playing it safe. But the access logs of the webservers did confirm that at least this range uses XFF headers. TheDJ (talk contribs) 16:53, 7 May 2009 (UTC)

Please add proxy.singtel.com.sg (220.255.4.26) to the XFF list; it's presently autoblocked and stopping all the users using it. Jpatokal (talk) 07:12, 20 May 2009 (UTC)

In the future, you can mail such requests for "extension" of this block of XFF addresses directly to xff AT wikimedia DOT org. I have now done so myself. TheDJ (talk contribs) 09:28, 20 May 2009 (UTC)
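To illustrate the XFF mechanism discussed in this thread: when a trusted ISP proxy appends honest X-Forwarded-For headers, the server can attribute edits to the individual subscriber rather than the shared proxy IP, while ignoring the (forgeable) header from untrusted peers. A minimal sketch, with hypothetical trusted ranges; real values come from the wikimedia XFF list, not from this code:

```python
import ipaddress

# Hypothetical trusted proxy range (e.g. an ISP like SingNet whose
# proxies are known to append honest XFF headers). Illustrative only.
TRUSTED_PROXIES = [ipaddress.ip_network("220.255.7.0/24")]

def effective_client_ip(remote_addr, xff_header):
    """Return the address to attribute the edit to.

    If the TCP peer is a trusted proxy, use the last address in
    X-Forwarded-For (the one the trusted proxy itself saw); otherwise
    ignore the header entirely, since any client can forge it.
    """
    peer = ipaddress.ip_address(remote_addr)
    if xff_header and any(peer in net for net in TRUSTED_PROXIES):
        return xff_header.split(",")[-1].strip()
    return remote_addr

# A trusted proxy forwarding for a real subscriber:
print(effective_client_ip("220.255.7.10", "165.21.154.7"))  # 165.21.154.7
# An untrusted host trying to forge the header:
print(effective_client_ip("203.0.113.5", "10.0.0.1"))       # 203.0.113.5
```

This is why the thread stresses both halves: the ISP must send the header, and wikimedia must explicitly list the proxy range as trusted.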

Suggestion
I think that TOR nodes that are not exits should be blocked as well. By running a TOR node, even a non-exit one, you are actually helping vandals by providing them an anonymous encrypted path to a TOR exit node. By blocking ALL TOR nodes, even non-exit ones, we discourage usage of TOR. A block time of 1 week is suitable, and then a recheck. If it's still a TOR node, extend the block by 1 week and recheck. So the block time of 1 week then acts as a punishment for node operators for running a TOR node and helping vandals. It's like selling weapons to criminals. Sebastiannielsen (talk) 02:07, 28 November 2010 (UTC)

We are not bothered at all if someone operates a Tor node to help people with their anonymity elsewhere on the Internet. We are not interested in discouraging Tor, only preventing abusive edits to Wikipedia. If we're only concerned about preventing vandalism, all we need to do is block the exit nodes which can exit to Wikipedia. -- zzuuzz (talk) 08:44, 28 November 2010 (UTC)

User:Lunkwill/nym

This page is too crowded: Wikipedia:Village pump (proposals)#Privacy protecting authentication via nym. Moving discussion of the proposal to: User talk:Lunkwill/nym. Also creating a voting section at the bottom of this page. Lunkwill 00:40, 2 December 2005 (UTC)


Executive summary

I've built a software package called nym which could be used to allow users of anonymizing networks and shared-IP networks like schools to edit Wikipedia. Privacy would be preserved, yet admins would still be able to block IPs of vandals.

The software includes patches to MediaWiki, which I have submitted to MediaWiki's patch-review system. Wikipedia needs to decide whether to accept my proposal and patches. The patches are small and easy to implement; mainly it's a question of whether we're willing to try it out.

Overview

Recently there was a lot of discussion on the tor email list about ways in which tor users could contribute to wikipedia articles. For a while now, all the tor exit nodes have been blocked from editing due to vandals using tor to disguise their actual IP addresses. (But note that there are also benefits for other users of shared-IP services such as school proxies; see below) It was proposed that cryptographic techniques could be used to ensure that vandals can be blocked, while still allowing helpful users to edit. I implemented such a system and called it nym. It enforces Wikipedia's current mechanism of filtering incoming users by IP address, but allows users to still enjoy privacy via tor.
How it works

To use nym, user Alice would do the following:

1. Turn off Tor (so that her real IP is visible) and visit a web page with her Javascript-enabled browser. The browser does some math, then contacts a token server to obtain a data token in "exchange" for her IP address (tor exit nodes and currently-blocked IPs would be refused tokens). Such a token can only be obtained once per IP address (or optionally, once per address per time period).

2. Clicking a few buttons and doing a little cut-and-paste, she trades the token for an SSL client certificate which she loads into her browser. The certificate certifies that she received it in exchange for a real IP address, but doesn't reveal what that address is.

3. She turns tor on again, and connects to a service such as Wikipedia. The service uses her certificate ID as a pseudonym instead of the IP address of the tor exit node she used to connect to Wikipedia. This ID shows up anywhere the IP address of a non-nym user would have shown up.

4. If she misbehaves, admins block her certificate just as they would have blocked her real IP. Now she faces the same challenge as other vandals, since she must obtain a new IP address and redo the whole issuing process in order to circumvent the block.

There is a live test system for nym, including a MediaWiki installation. You can try this process yourself from this page: nym client.
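The token server's issuing policy in step 1 can be sketched as follows. This is a toy illustration only: the real nym uses blinded tokens and SSL client certificates so the credential cannot be linked back to the IP, whereas the hash below is merely a placeholder for an opaque credential, and the class and method names are invented.

```python
import hashlib

class TokenServer:
    """Toy sketch of nym's token-issuing policy (illustrative only)."""

    def __init__(self, tor_exits, blocked_ips):
        self.tor_exits = set(tor_exits)
        self.blocked = set(blocked_ips)
        self.issued = set()  # enforce one token per IP address

    def request_token(self, client_ip):
        if client_ip in self.tor_exits:
            raise PermissionError("Tor exit nodes are refused tokens")
        if client_ip in self.blocked:
            raise PermissionError("currently-blocked IPs are refused")
        if client_ip in self.issued:
            raise PermissionError("one token per IP address")
        self.issued.add(client_ip)
        # NOTE: in the real protocol a blind signature makes the token
        # unlinkable to the IP; this hash is NOT unlinkable and only
        # stands in for an opaque credential.
        return hashlib.sha256(client_ip.encode()).hexdigest()[:16]


server = TokenServer(tor_exits={"198.51.100.7"}, blocked_ips=set())
token = server.request_token("203.0.113.42")
print(token)  # an opaque 16-hex-digit credential
```

Admins then block the certificate ID derived from such a token, exactly as step 4 describes, without ever learning the issuing IP.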
Other benefits

Say Alice goes to a school which uses a single proxy, and has never heard of tor. Her classmate Bob is a vandal and gets the school blocked regularly. Alice can go home, obtain an SSL certificate from her home computer's IP, then load the certificate on her computer at school using her keychain drive. If it's a shared terminal, she simply removes the certificate at the end of her session.
Potential problems

Most of the ways nym can be abused are no different from the challenges Wikipedia already handles on a daily basis. A vandal with access to many IP addresses can use nym to obtain many certificates, all of which would need to be blocked. But this is already the case with Wikipedia. Nym has the disadvantage that it hides the originating IP, making it more difficult to identify the source of vandalism, although a vandal using a set of IPs to obtain certificates in succession would still end up with certificates with adjacent serial numbers (which could then be blocked as a group). On the other hand, at least for the Javascript client I've described, obtaining each certificate takes several minutes, whereas traditional Wikipedia vandals can make new edits as quickly as they can switch IP addresses.
The issue to decide

Ultimately, Wikipedia must decide whether to support nym. The technical requirements are quite reasonable; mainly, we must decide whether the potential hassles are worth the ability to offer privacy to our editors.

Disadvantages:

- New systems always have bugs, costing techie and user time.
- Determined vandals will have another avenue for attack.
- Much lessened anonymity - instead of mixing with all Tor users, you only mix with Tor users who have Wikipedia accounts and use nym.
- Allows vandals to "store up" IP addresses over a long period of time and then use them all at once.
- Doesn't work well for users with dynamic IP addresses - if someone ever got a token using the IP address you're using, you can't get one.
- Users who run tor exit nodes themselves can't get a token.

Advantages:

- Even if nym fails horribly (say someone finds a bug which allows certificates to be forged in seconds, or vandals are the only people who end up using nym), switching off nym support entirely will be quite trivial, leaving us right back in the situation we're in now. And if we later decide to support nym, it will be easy to start over and make all the nym users obtain new certificates.
- Some tor users will be grateful.
- Wikipedia provides a perfect low-risk testbed for privacy/pseudonymity systems like nym. Our experiences will be of value to security researchers.
- Regardless of tor, users behind single-IP proxies can be distinguished, allowing just the vandals to be blocked.
- As nym demonstrates its privacy protecting claims over time, users facing oppressive laws may be willing to contribute information they would otherwise be afraid to reveal. (Particularly of interest to Wikinews).

Source code and documentation for nym, along with the preprint of an academic paper describing nym, can be found at the nym site.

Making it Happen
Project Director: Apu Kapadia
Graduate Students: Peter C. Johnson, Patrick P. Tsang
Research Developers: Cory Cornelius, Daniel Peebles
Undergraduates: Tiger Huang
Advisors: Sean W. Smith

Acknowledgements
This project was supported by Grant No. 2006-CS-001-00001 awarded by the U.S. Department of Homeland Security and by Grant No. 2005-DD-BX-1091 awarded by the Bureau of Justice Assistance. The Bureau of Justice Assistance is a component of the Office of Justice Programs, which also includes the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, and the Office for Victims of Crime. Points of view or opinions in this document are those of the authors and do not represent the official position or policies of the United States Department of Justice. This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/).

Introduction

Random number generators (RNGs) play an important role in computer-based simulations such as Monte Carlo methods, where random variates are needed to model the inherent randomness of components [10]. Another major use of RNGs is in cryptography, for example session and message keys for symmetric block ciphers like iterated DES [1]. There are two broad classes of RNGs: hardware RNGs and algorithmic RNGs. The former is a physical device relying on external sources, like the decay time of a radioactive material or the electronic noise of resistors, to generate random numbers, while the latter is a self-contained, finite, and deterministic computer program which stretches a ``seed'' into a seemingly random sequence. Random numbers generated by algorithmic RNGs are called pseudo-random since the nondeterminism of randomness is only simulated. But it is also doubtful whether hardware RNGs produce truly random numbers. As Einstein put it, ``God does not play dice'': one can argue that by meticulously modeling the interactions among the electrons in a resistor, the noise could be just as deterministic as an algorithmic RNG.

The choice between hardware RNGs and algorithmic RNGs depends on application needs. The reproducibility of outcomes is an advantage of algorithmic RNGs for certain simulations. The statistical efficiency of simulation runs is measured by the variance of the output random variables. Smaller variance is favorable since it results in smaller confidence intervals. To reduce the variance, one technique called ``Common Random Numbers'' requires that simulation runs of different configurations under study use the same stream of random numbers [10]. In this project we focus only on algorithmic RNGs, so throughout this report we use RNG to denote algorithmic random number generators.

We have implemented seven uniform RNGs (PMMLCG, UNIX, ICG, Tausworthe, TT800, MT19937, and BBS) and several statistical tests, both empirical and theoretical, to investigate their uniformity and randomness. The rest of this report is organized as follows. In Section 2 we discuss three kinds of randomness assessments. Sections 3 and 4 describe the RNGs and the statistical tests we have implemented, respectively. Section 5 presents the experimental results.
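As an illustration, the simplest generator in that list, PMMLCG, is a Lehmer generator of the form x_{n+1} = a * x_n mod m. The sketch below uses the widely cited Park-Miller "minimal standard" constants a = 16807 and m = 2^31 - 1; the report's own implementation details and parameters may differ.

```python
# Minimal-standard Lehmer generator (PMMLCG) with the commonly cited
# Park-Miller parameters. The report's implementation may differ.

M = 2**31 - 1   # Mersenne prime 2147483647
A = 16807       # a primitive root modulo M

def pmmlcg(seed):
    """Yield an endless stream of integers in [1, M-1]."""
    x = seed
    while True:
        x = (A * x) % M
        yield x

def uniform01(seed, n):
    """First n pseudo-random variates in (0, 1)."""
    g = pmmlcg(seed)
    return [next(g) / M for _ in range(n)]

g = pmmlcg(1)
print(next(g))   # 16807
```

A standard sanity check for this generator: starting from seed 1, the 10,000th value produced is 1043618065, which also illustrates the reproducibility that makes algorithmic RNGs suitable for the Common Random Numbers technique mentioned above.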

Apu Kapadia, Ph.D.


Assistant Professor of Computer Science and Informatics

School of Informatics and Computing, Indiana University Bloomington

Office

Informatics West 211
(812) 856-1465

Research Interests

Security and privacy in peer-to-peer and social networks, network security, human factors and usable security, privacy policies, security and privacy in pervasive and mobile computing, accountable anonymity, anonymizing networks, applied cryptography

Contact Information

kapadia at indiana dot edu

Indiana University
School of Informatics and Computing
901 E. 10th Street, Suite 211
Bloomington, IN 47408
Phone: (812) 856-1465
Fax: (812) 856-1995
http://securiosities.wordpress.com/

Teaching

Spring 2012: I-433/533: Systems & Protocol Security & Information Assurance
Fall 2011: CSCI-B 649/INFO-I 590: Advanced Topics in Privacy
Spring 2011: I-308: Information Representation
Fall 2010: CSCI-B 649/INFO-I 590: Advanced Topics in Privacy
Spring 2010: INFO-I400/H400/I590: Advanced Security and Privacy
Fall 2009: INFO-I 590: Advanced Topics in Privacy
Fall 2007: CS 38: Security and Privacy (Dartmouth College)
Program Committees

Consider submitting a paper here:
WPES '12, Raleigh, NC, USA, October 15, 2012. Submission deadline July 16, 2012.
NDSS '13, San Diego, CA, USA, February 24-27, 2013. Submission deadline August 6th, 2012.

Consider attending these venues:
SOUPS '12, Washington, DC, USA, July 11-13, 2012.

Advising

Postdoc: Sameer Patil

PhD Students: Zheng Dong (co-advised by Prof. Jean Camp), Roberto Hoyle, Shirin Nilizadeh, Greg Norcie (co-advised by Prof. Jean Camp), Zahid Rahman, Robert Templeman

Former students: Naveed Alam (MSSI), Roman Schlegel (Visiting Scholar, PhD student at City University of Hong Kong)

I was privileged to know


Patrick Tsang, obituary
