Before running any of the examples, the following steps must be performed.

  1. Install and run Accumulo by following the instructions in $ACCUMULO_HOME/README. Note the instance name; it will be referred to as “instance” throughout the examples, and the comma-separated list of ZooKeeper servers will be referred to as “zookeepers”.
  2. Create an Accumulo user (see the user manual), or use the root user. Throughout the examples, the user name “username” and the password “password” are used. This user needs permission to create tables.

In all commands, you will need to replace “instance”, “zookeepers”, “username”, and “password” with the values you set for your Accumulo instance.
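As a convenience (this is a suggestion, not part of the examples themselves), you can export your site-specific values as shell variables once and substitute them into the example commands. The variable names below are arbitrary, and the sample values are placeholders you would replace with your own:

```shell
# Hypothetical convenience only: export the values for your Accumulo
# instance once, then reuse them in the example commands.
# Replace the sample values with your own.
export INSTANCE=instance
export ZOOKEEPERS=zk1:2181,zk2:2181
export AUSER=username
export APASS=password

# The example commands can then be written in terms of the variables;
# for instance, starting the Accumulo shell:
echo "./bin/accumulo shell -u $AUSER -p $APASS"
```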

Commands intended to be run in bash are prefixed by ‘$’. These are always assumed to be run from the $ACCUMULO_HOME directory.

Commands intended to be run in the Accumulo shell are prefixed by ‘>’.

Each README in the examples directory highlights the use of particular features of Apache Accumulo.

batch: Using the batch writer and batch scanner.

bloom: Creating a table with bloom filters enabled to increase query performance.

bulkIngest: Ingesting bulk data using map/reduce jobs on Hadoop.

combiner: Using example StatsCombiner to find min, max, sum, and count.

constraints: Using constraints with tables.

dirlist: Storing filesystem information.

filedata: Storing file data.

filter: Using the AgeOffFilter to remove records more than 30 seconds old.

helloworld: Inserting records, both inside and outside map/reduce jobs, and reading records between two rows.

isolation: Using the isolated scanner to ensure partial changes are not seen.

mapred: Using MapReduce to read from and write to Accumulo tables.

maxmutation: Limiting mutation size to avoid running out of memory.

shard: Using the intersecting iterator with a term index partitioned by document.

visibility: Using visibilities (or combinations of authorizations). Also shows user permissions.