Mirror Mirror on the Wall
The following is an overview of Tor descriptors. If you're already familiar with what they are and where to get them then you may want to skip to the end.
- What is a descriptor?
- Where do descriptors come from?
- Where can I get the current descriptors?
- Where can I get past descriptors?
- Can I get descriptors from the Tor process?
- Can I create descriptors?
- Validating the descriptor's content
- Saving and loading descriptors
- Putting it together...
- Are there any other parsing libraries?
What is a descriptor?
Tor is made up of two parts: the application and a distributed network of a few thousand volunteer relays. Information about these relays is public, and made up of documents called descriptors.
There are several different kinds of descriptors, the most common ones being...
Descriptor Type | Description |
---|---|
Server Descriptor | Information that relays publish about themselves. Tor clients once downloaded this information, but now they use microdescriptors instead. |
ExtraInfo Descriptor | Relay information that Tor clients do not need in order to function. This is self-published, like server descriptors, but not downloaded by default. |
Microdescriptor | Minimalistic document that just includes the information necessary for Tor clients to work. |
Network Status Document | Though Tor relays are decentralized, the directories that track the overall network are not. These central points are called directory authorities, and every hour they publish a document called a consensus (aka, network status document). The consensus in turn is made up of router status entries. |
Router Status Entry | Relay information provided by the directory authorities including flags, heuristics used for relay selection, etc. |
Hidden Service Descriptor | Information pertaining to a Hidden Service. These can only be queried through the tor process. |
Where do descriptors come from?
Descriptors fall into two camps:
Server, extra-info, and hidden service descriptors are self-published documents. Relays and hidden services publish these about themselves, and so naturally can indicate anything they'd like in them (true or not).
These are self-contained documents, bundling within themselves a signature Stem can optionally check.
Network status documents (aka votes, the consensus, and router status entries they contain) are created by the directory authorities. For a great overview on how this works see Jordan Wright's article on how the consensus is made.
Microdescriptors are merely a distilled copy of a server descriptor, and so belong to the first camp.
Where can I get the current descriptors?
To work, Tor needs up-to-date relay information. As such, getting the current descriptors is easy: just download them like Tor does.
Every tor relay provides an ORPort, and many provide a DirPort as well. Stem's stem.descriptor.remote module can download descriptors from either. Listing relays, for instance, is as easy as...
import stem.descriptor.remote

try:
  for desc in stem.descriptor.remote.get_consensus():
    print("found relay %s (%s)" % (desc.nickname, desc.fingerprint))
except Exception as exc:
  print("Unable to retrieve the consensus: %s" % exc)
Please remember that Tor is a shared resource! If you're going to contribute much load please consider running a relay to offset your use.
ORPorts communicate through the tor protocol. To download from one, specify it as the endpoint...
import stem.descriptor.remote

# Unlike the above example, this one downloads specifically through the
# ORPort of moria1 (long time tor directory authority).

try:
  consensus = stem.descriptor.remote.get_consensus(
    endpoints = (stem.ORPort('128.31.0.34', 9101),)
  )

  for desc in consensus:
    print("found relay %s (%s)" % (desc.nickname, desc.fingerprint))
except Exception as exc:
  print("Unable to retrieve the consensus: %s" % exc)
DirPorts, by contrast, are simpler and specifically designed to serve descriptor information, but not all relays offer one. If no endpoint is specified, we default to downloading from the DirPorts of tor's directory authorities.
If you would like to see what raw descriptors look like, try curling a relay's DirPort. Section 6.2 of tor's directory specification lists the URLs you can try.
% curl 128.31.0.34:9131/tor/server/all
router Unnamed 83.227.81.207 9001 0 9030
identity-ed25519
-----BEGIN ED25519 CERT-----
AQQABj3aAV7JzKHjSJjocve8jvnMwmy/Pv2HsSKoymeepddNBU5iAQAgBABw1VVB
965QDxs+wicWj4vNXMKIkKCN4gQhvzqG2UxsgmkaQlsKiEMrIxrzwlazP6od9+hi
WZKl3tshd0ekgUB6AAKwlvsrxl9wfy0G/Bf8PVsBftvNCWPwLR4pI3nibQU=
-----END ED25519 CERT-----
master-key-ed25519 cNVVQfeuUA8bPsInFo+LzVzCiJCgjeIEIb86htlMbII
...
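Each line of a raw descriptor is simply a keyword followed by its values. To make the format concrete, here is a minimal, purely illustrative sketch that picks apart the first line of the output above (the parse_router_line helper is made up for this example and is not part of Stem):

```python
# "router <nickname> <address> <ORPort> <SOCKSPort> <DirPort>", as documented
# in tor's directory specification
RAW_DESCRIPTOR = """\
router Unnamed 83.227.81.207 9001 0 9030
platform Tor 0.2.4.23 on Linux
"""

def parse_router_line(raw):
  for line in raw.splitlines():
    keyword, _, remainder = line.partition(' ')

    if keyword == 'router':
      nickname, address, or_port, socks_port, dir_port = remainder.split(' ')

      return {
        'nickname': nickname,
        'address': address,
        'or_port': int(or_port),
        'socks_port': int(socks_port),
        'dir_port': int(dir_port),
      }

  raise ValueError("'router' line is missing")

print(parse_router_line(RAW_DESCRIPTOR))
```

In practice Stem does all of this for you, exposing the fields as attributes like desc.nickname and desc.or_port.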
Where can I get past descriptors?
Descriptor archives are available from CollecTor. If you need Tor's topology at a prior point in time this is the place to go!
With CollecTor you can either read descriptors directly...
import datetime
import stem.descriptor.collector

yesterday = datetime.datetime.utcnow() - datetime.timedelta(days = 1)

# provide yesterday's exits

exits = {}

for desc in stem.descriptor.collector.get_server_descriptors(start = yesterday):
  if desc.exit_policy.is_exiting_allowed():
    exits[desc.fingerprint] = desc

print('%i relays published an exiting policy today...\n' % len(exits))

for fingerprint, desc in exits.items():
  print('  %s (%s)' % (desc.nickname, fingerprint))
... or download the descriptors to disk and read them later.
import datetime
import stem.descriptor
import stem.descriptor.collector

yesterday = datetime.datetime.utcnow() - datetime.timedelta(days = 1)
cache_dir = '~/descriptor_cache/server_desc_today'

collector = stem.descriptor.collector.CollecTor()

for f in collector.files('server-descriptor', start = yesterday):
  f.download(cache_dir)

# then later...

for f in collector.files('server-descriptor', start = yesterday):
  for desc in f.read(cache_dir):
    if desc.exit_policy.is_exiting_allowed():
      print('  %s (%s)' % (desc.nickname, desc.fingerprint))
Can I get descriptors from the Tor process?
If you already have Tor running on your system then it is already downloading descriptors on your behalf. Reusing these is a great way to keep from burdening the rest of the Tor network.
Tor only gets the descriptors that it needs by default, so if you're scripting against Tor you may want to set some of the following in your torrc. Keep in mind that these add a small burden to the network, so don't set them in a widely distributed application. And, of course, please consider running Tor as a relay so you give back to the network!
# Descriptors have a range of time during which they're valid. To get the
# most recent descriptor information, regardless of whether Tor needs it or
# not, set the following.
FetchDirInfoEarly 1
FetchDirInfoExtraEarly 1
# Tor doesn't need all descriptors to function. In particular...
#
# * Tor no longer downloads server descriptors by default, opting
# for microdescriptors instead.
#
# * If you aren't actively using Tor as a client then Tor will
# eventually stop downloading descriptor information altogether
# to relieve load on the network.
#
# To download descriptors regardless of whether they're needed by the
# Tor process or not set...
FetchUselessDescriptors 1
# Tor doesn't need extrainfo descriptors to work. If you want Tor to download
# them anyway then set...
DownloadExtraInfo 1
Now that Tor is happy chugging along, up-to-date descriptors are available through Tor's control socket...
from stem.control import Controller

with Controller.from_port(port = 9051) as controller:
  controller.authenticate()

  for desc in controller.get_network_statuses():
    print("found relay %s (%s)" % (desc.nickname, desc.fingerprint))
... or by reading directly from Tor's data directory...
from stem.descriptor import parse_file

for desc in parse_file('/home/atagar/.tor/cached-consensus'):
  print('found relay %s (%s)' % (desc.nickname, desc.fingerprint))
Can I create descriptors?
Besides reading descriptors, you can create them too. This is most commonly done for test data. To do so, simply use the create() method of Descriptor subclasses...
from stem.descriptor.server_descriptor import RelayDescriptor

# prints 'caerSidi (71.35.133.197:9001)'
desc = RelayDescriptor.create()
print("%s (%s:%s)" % (desc.nickname, desc.address, desc.or_port))

# prints 'demo (127.0.0.1:80)'
desc = RelayDescriptor.create({'router': 'demo 127.0.0.1 80 0 0'})
print("%s (%s:%s)" % (desc.nickname, desc.address, desc.or_port))
Unspecified mandatory fields are filled with mock data. You can also use content() to get a string descriptor...
from stem.descriptor.server_descriptor import RelayDescriptor

print(RelayDescriptor.content({'router': 'demo 127.0.0.1 80 0 0'}))
router demo 127.0.0.1 80 0 0
published 2012-03-01 17:15:27
bandwidth 153600 256000 104590
reject *:*
onion-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAJv5IIWQ+WDWYUdyA/0L8qbIkEVH/cwryZWoIaPAzINfrw1WfNZGtBmg
skFtXhOHHqTRN4GPPrZsAIUOQGzQtGb66IQgT4tO/pj+P6QmSCCdTfhvGfgTCsC+
WPi4Fl2qryzTb3QO5r5x7T8OsG2IBUET1bLQzmtbC560SYR49IvVAgMBAAE=
-----END RSA PUBLIC KEY-----
signing-key
...
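Conceptually, create() just overlays the attributes you provide on top of mock defaults. A simplified, Stem-free sketch of that merge follows (build_descriptor is a made-up stand-in for this example, and the MOCK_DEFAULTS values simply mirror the sample output above; the real method also handles field ordering and signing)...

```python
# create() conceptually overlays your attributes on top of mock defaults

MOCK_DEFAULTS = {
  'router': 'caerSidi 71.35.133.197 9001 0 0',
  'published': '2012-03-01 17:15:27',
  'bandwidth': '153600 256000 104590',
  'reject': '*:*',
}

def build_descriptor(attr = None):
  entries = dict(MOCK_DEFAULTS)
  entries.update(attr or {})  # caller's attributes win over the defaults

  return '\n'.join('%s %s' % (keyword, value) for keyword, value in entries.items())

print(build_descriptor({'router': 'demo 127.0.0.1 80 0 0'}))
```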
Validating the descriptor's content
Stem can optionally validate descriptors, checking their integrity and compliance with Tor's specs. This does the following...
- Checks that we have mandatory fields, and that their content conforms with what Tor's spec says they should have. This can be useful when data integrity is important to you, since it provides an upfront assurance that the descriptor is correct (no need for 'None' checks).
- If you have pycrypto we'll validate signatures for descriptor types where that has been implemented (such as server and hidden service descriptors).
Prior to Stem 1.4.0 descriptors were validated by default, but this has become opt-in since then.
General rule of thumb: if speed is your chief concern then leave it off, but if correctness or signature validation is important then turn it on. Validating is as simple as including validate = True in any method that provides descriptors...
from stem.descriptor import parse_file

for desc in parse_file('/home/atagar/.tor/cached-consensus', validate = True):
  print('found relay %s (%s)' % (desc.nickname, desc.fingerprint))
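To illustrate the kind of upfront check validation performs, here is a Stem-free sketch that verifies a few of a server descriptor's mandatory fields are present (check_mandatory_fields is a made-up helper for this example; Stem's real validation is far more thorough, checking field content and signatures as well)...

```python
# sketch of the upfront assurance validation provides: confirm that some of
# a server descriptor's mandatory fields are present before using it

MANDATORY_FIELDS = ('router', 'published', 'bandwidth', 'onion-key', 'signing-key')

def check_mandatory_fields(raw_descriptor):
  keywords = set(line.split(' ', 1)[0] for line in raw_descriptor.splitlines() if line)
  missing = [field for field in MANDATORY_FIELDS if field not in keywords]

  if missing:
    raise ValueError('descriptor is missing mandatory fields: %s' % ', '.join(missing))

try:
  check_mandatory_fields('router demo 127.0.0.1 80 0 0\npublished 2012-03-01 17:15:27')
except ValueError as exc:
  print(exc)  # notes the missing bandwidth, onion-key, and signing-key fields
```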
Saving and loading descriptors
Tor descriptors are just plaintext documents. As such, if you'd rather not use Pickle you can persist a descriptor by simply writing it to disk, then reading it back later.
import stem.descriptor.remote

server_descriptors = stem.descriptor.remote.get_server_descriptors().run()

with open('/tmp/descriptor_dump', 'w') as descriptor_file:
  descriptor_file.write(''.join(map(str, server_descriptors)))
Our server_descriptors here is a list of RelayDescriptor instances. When we write it to a file this looks like...
router default 68.229.17.182 443 0 9030
platform Tor 0.2.4.23 on Windows XP
protocols Link 1 2 Circuit 1
published 2014-11-17 23:42:38
fingerprint EE04 42C3 6DB6 6903 0816 247F 2607 382A 0783 2D5A
uptime 63
bandwidth 5242880 10485760 77824
extra-info-digest 1ABA9FC6B912E755483D0F4F6E9BC1B23A2B7206
... etc...
We can then read it back with parse_file() by telling it the type of descriptors we're reading...
from stem.descriptor import parse_file

server_descriptors = parse_file('/tmp/descriptor_dump', descriptor_type = 'server-descriptor 1.0')

for relay in server_descriptors:
  print(relay.fingerprint)
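Under the hood, reading a dump like this is mostly a matter of splitting the concatenated text back into individual documents wherever a new 'router' line begins. A simplified, Stem-free sketch of that step (split_descriptors is a made-up helper for illustration, not Stem's actual implementation)...

```python
# a dump is just descriptors one after another, with each new document
# starting at a 'router' line

def split_descriptors(raw):
  descriptors, current = [], []

  for line in raw.splitlines():
    if line.startswith('router ') and current:
      descriptors.append('\n'.join(current))
      current = []

    current.append(line)

  if current:
    descriptors.append('\n'.join(current))

  return descriptors

dump = 'router a 1.2.3.4 443 0 0\nuptime 63\nrouter b 5.6.7.8 9001 0 0\nuptime 10'

for desc in split_descriptors(dump):
  print(desc.split('\n', 1)[0])  # prints each descriptor's router line
```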
For an example of doing this with a consensus document see here.
Putting it together...
As discussed above there are four methods for reading descriptors...
- Download descriptors directly with stem.descriptor.remote.
- Read a single file with parse_file().
- Read multiple files or an archive with the DescriptorReader.
- Requesting them from Tor with Controller methods like get_server_descriptors() and get_network_statuses().
Now let's say you want to figure out who the biggest exit relays are. You could use any of the methods above, but for this example we'll use stem.descriptor.remote...
import sys

import stem.descriptor.remote

from stem.util import str_tools

# provides a mapping of observed bandwidth to the relay nicknames
def get_bw_to_relay():
  bw_to_relay = {}

  try:
    for desc in stem.descriptor.remote.get_server_descriptors().run():
      if desc.exit_policy.is_exiting_allowed():
        bw_to_relay.setdefault(desc.observed_bandwidth, []).append(desc.nickname)
  except Exception as exc:
    print("Unable to retrieve the server descriptors: %s" % exc)

  return bw_to_relay

# prints the top fifteen relays

bw_to_relay = get_bw_to_relay()
count = 1

for bw_value in sorted(bw_to_relay.keys(), reverse = True):
  for nickname in bw_to_relay[bw_value]:
    print("%i. %s (%s/s)" % (count, nickname, str_tools.size_label(bw_value, 2)))
    count += 1

    if count > 15:
      sys.exit()
% python example.py
1. herngaard (40.95 MB/s)
2. chaoscomputerclub19 (40.43 MB/s)
3. chaoscomputerclub18 (40.02 MB/s)
4. chaoscomputerclub20 (38.98 MB/s)
5. wannabe (38.63 MB/s)
6. dorrisdeebrown (38.48 MB/s)
7. manning2 (38.20 MB/s)
8. chaoscomputerclub21 (36.90 MB/s)
9. TorLand1 (36.22 MB/s)
10. bolobolo1 (35.93 MB/s)
11. manning1 (35.39 MB/s)
12. gorz (34.10 MB/s)
13. ndnr1 (25.36 MB/s)
14. politkovskaja2 (24.93 MB/s)
15. wau (24.72 MB/s)
Are there any other parsing libraries?
Yup! Stem isn't the only game in town when it comes to parsing. Metrics-lib is a highly mature parsing library for Java, and Zoossh is available for Go. Each library has its own capabilities...
Capability | Stem | Metrics-lib | Zoossh |
---|---|---|---|
Language | Python | Java | Go |
Checks signatures | Mostly | No | No |
Create new descriptors | Yes | No | No |
Lazy parsing | Yes | No | Yes |
Type detection by @type | Yes | Yes | Yes |
Type detection by filename | Yes | No | No |
Packages | Several | None | None |
Can Read/Download From | |||
Files | Yes | Yes | Yes |
Tarballs | Yes | Yes | No |
Tor Process | Yes | No | No |
Directory Authorities | Yes | Yes | No |
CollecTor | No | Yes | No |
Supported Types | |||
Server Descriptors | Yes | Yes | Partly |
Extrainfo Descriptors | Yes | Yes | No |
Microdescriptors | Yes | Yes | No |
Consensus | Yes | Yes | Partly |
Bridge Descriptors | Yes | Yes | No |
Hidden Service Descriptors | Yes | No | No |
Bridge Pool Assignments | No | Yes | No |
Torperf | No | Yes | No |
Tordnsel | Yes | Yes | No |
Benchmarks | |||
Server Descriptors | 0.60 ms | 0.29 ms | 0.46 ms |
Extrainfo Descriptors | 0.40 ms | 0.22 ms | unsupported |
Microdescriptors | 0.33 ms | 0.07 ms | unsupported |
Consensus | 865.72 ms | 246.71 ms | 83.00 ms |
Benchmarked With Commit | c01a9cd | 8767f3e | 2380e55 |
Language Interpreter | Python 3.5.1 | Java 1.7.0 | Go 1.5.2 |
A few things to note about these benchmarks...
- Zoossh is the fastest. Its benchmarks were at a disadvantage due to not reading from tarballs.
- Your Python version makes a very large difference for Stem. For instance, with Python 2.7 reading a consensus takes 1,290.84 ms (almost twice as long).
- Metrics-lib and Stem can both read from compressed tarballs at a small performance cost. For instance, Metrics-lib can read an lzma compressed consensus in 255.76 ms and Stem can do it in 902.75 ms.
So what does code with each of these look like?
Stem Example
import time

import stem.descriptor

def measure_average_advertised_bandwidth(path):
  start_time = time.time()
  total_bw, count = 0, 0

  for desc in stem.descriptor.parse_file(path):
    total_bw += min(desc.average_bandwidth, desc.burst_bandwidth, desc.observed_bandwidth)
    count += 1

  runtime = time.time() - start_time
  print("Finished measure_average_advertised_bandwidth('%s')" % path)
  print('  Total time: %i seconds' % runtime)
  print('  Processed server descriptors: %i' % count)
  print('  Average advertised bandwidth: %i' % (total_bw / count))
  print('  Time per server descriptor: %0.5f seconds' % (runtime / count))
  print('')

if __name__ == '__main__':
  measure_average_advertised_bandwidth('server-descriptors-2015-11.tar')
Metrics-lib Example
package org.torproject.descriptor;

import org.torproject.descriptor.Descriptor;
import org.torproject.descriptor.DescriptorReader;
import org.torproject.descriptor.DescriptorSourceFactory;
import org.torproject.descriptor.ServerDescriptor;

import java.io.File;
import java.util.Iterator;

public class MeasurePerformance {

  public static void main(String[] args) {
    measureAverageAdvertisedBandwidth(new File("server-descriptors-2015-11.tar"));
  }

  private static void measureAverageAdvertisedBandwidth(
      File tarballFileOrDirectory) {
    System.out.println("Starting measureAverageAdvertisedBandwidth");
    final long startedMillis = System.currentTimeMillis();
    long sumAdvertisedBandwidth = 0;
    long countedServerDescriptors = 0;
    DescriptorReader descriptorReader =
        DescriptorSourceFactory.createDescriptorReader();
    Iterator<Descriptor> descriptors =
        descriptorReader.readDescriptors(tarballFileOrDirectory).iterator();

    while (descriptors.hasNext()) {
      Descriptor descriptor = descriptors.next();
      if (!(descriptor instanceof ServerDescriptor)) {
        continue;
      }
      ServerDescriptor serverDescriptor = (ServerDescriptor) descriptor;
      sumAdvertisedBandwidth += (long) Math.min(Math.min(
          serverDescriptor.getBandwidthRate(),
          serverDescriptor.getBandwidthBurst()),
          serverDescriptor.getBandwidthObserved());
      countedServerDescriptors++;
    }

    long endedMillis = System.currentTimeMillis();
    System.out.println("Ending measureAverageAdvertisedBandwidth");
    System.out.printf("Total time: %d millis%n",
        endedMillis - startedMillis);
    System.out.printf("Processed server descriptors: %d%n",
        countedServerDescriptors);
    System.out.printf("Average advertised bandwidth: %d%n",
        sumAdvertisedBandwidth / countedServerDescriptors);
    System.out.printf("Time per server descriptor: %.6f millis%n",
        ((double) (endedMillis - startedMillis))
        / ((double) countedServerDescriptors));
  }
}
Zoossh Example
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"

	"git.torproject.org/user/phw/zoossh.git"
)

var processedDescs int64 = 0
var totalBw uint64 = 0

func Min(a uint64, b uint64, c uint64) uint64 {
	min := a
	if b < min {
		min = b
	}
	if c < min {
		min = c
	}
	return min
}

func ProcessDescriptors(path string, info os.FileInfo, err error) error {
	if _, err := os.Stat(path); err != nil {
		return err
	}

	if info.IsDir() {
		return nil
	}

	consensus, err := zoossh.ParseDescriptorFile(path)
	if err != nil {
		return err
	}

	if (processedDescs % 100) == 0 {
		fmt.Printf(".")
	}

	for _, getDesc := range consensus.RouterDescriptors {
		desc := getDesc()
		totalBw += Min(desc.BandwidthAvg, desc.BandwidthBurst, desc.BandwidthObs)
		processedDescs++
	}

	return nil
}

func main() {
	before := time.Now()
	filepath.Walk("server-descriptors-2015-11", ProcessDescriptors)
	fmt.Println()
	after := time.Now()

	duration := after.Sub(before)

	fmt.Println("Total time for descriptors:", duration)
	fmt.Printf("Time per descriptor: %dns\n",
		duration.Nanoseconds()/processedDescs)
	fmt.Printf("Processed %d descriptors.\n", processedDescs)
	fmt.Printf("Average advertised bandwidth: %d\n", totalBw/uint64(processedDescs))
}