
[repost] Solr: Analyzers, Tokenizers, and Token Filters


When a document is indexed, its individual fields are subject to the analyzing and tokenizing filters that can transform and normalize the data in the fields. Examples include removing blank spaces, stripping HTML markup, stemming, and removing a particular character and replacing it with another. You may need to perform some of these or similar operations at indexing time as well as at query time. For example, you might perform a Soundex transformation (a type of phonetic hashing) on a string to enable a search based upon the word and upon its 'sound-alikes'.

The lists below provide an overview of some of the more heavily used Tokenizers and TokenFilters provided by Solr “out of the box” along with tips/examples of using them. This list should by no means be considered the “complete” list of all Analysis classes available in Solr! In addition to new classes being added on an ongoing basis, you can load your own custom Analysis code as a Plugin.

Analyzers, per field type, are configured in the Solr Schema.

For a more complete list of the Tokenizers and TokenFilters that come out of the box, please consult the javadocs for the analysis package.

For information about some language-specific Tokenizers and TokenFilters available in Solr, please consult LanguageAnalysis.

Note: For a good background on Lucene Analysis, it’s recommended that you read the following sections in Lucene In Action:

  • 1.5.3 : Analyzer
  • Chapter 4.0 through 4.7 at least

Try searches for “analyzer”, “token”, and “stemming”.



  1. Analyzers, Tokenizers, and Token Filters
  2. High Level Concepts
    1. Stemming
    2. Analyzers
      1. Char Filters
      2. Tokenizers
      3. Token Filters
      4. Specifying an Analyzer in the schema
    3. When To use a CharFilter vs a TokenFilter
  3. Notes On Specific Factories
    1. CharFilterFactories
      1. solr.MappingCharFilterFactory
      2. solr.PatternReplaceCharFilterFactory
      3. solr.HTMLStripCharFilterFactory
    2. TokenizerFactories
      1. solr.KeywordTokenizerFactory
      2. solr.LetterTokenizerFactory
      3. solr.WhitespaceTokenizerFactory
      4. solr.LowerCaseTokenizerFactory
      5. solr.StandardTokenizerFactory
      6. solr.ClassicTokenizerFactory
      7. solr.UAX29URLEmailTokenizerFactory
      8. solr.PatternTokenizerFactory
      9. solr.PathHierarchyTokenizerFactory
      10. solr.ICUTokenizerFactory
    3. TokenFilterFactories
      1. solr.StandardFilterFactory
      2. solr.LowerCaseFilterFactory
      3. solr.TrimFilterFactory
      4. solr.PatternReplaceFilterFactory
      5. solr.StopFilterFactory
      6. solr.CommonGramsFilterFactory
      7. solr.EdgeNGramFilterFactory
      8. solr.KeepWordFilterFactory
      9. solr.LengthFilterFactory
      10. solr.WordDelimiterFilterFactory
      11. solr.SynonymFilterFactory
      12. solr.RemoveDuplicatesTokenFilterFactory
      13. solr.ISOLatin1AccentFilterFactory
      14. solr.ASCIIFoldingFilterFactory
      15. solr.PhoneticFilterFactory
      16. solr.ShingleFilterFactory
      17. solr.PositionFilterFactory
      18. solr.ReversedWildcardFilterFactory
      19. solr.CollationKeyFilterFactory
      20. solr.ICUCollationKeyFilterFactory
      21. solr.ICUNormalizer2FilterFactory
      22. solr.ICUFoldingFilterFactory
      23. solr.ICUTransformFilterFactory


High Level Concepts



Stemming

There are four types of stemming strategies:

  • Porter or Reduction stemming: a transforming algorithm that reduces any of the forms of a word, such as "runs, running, ran", to its elemental root, e.g. "run". Porter stemming must be performed both at insertion time and at query time (see the sketch after this list).
  • Lucene-Hunspell: aims to provide features such as stemming, decompounding, spellchecking, normalization, term expansion, etc., taking advantage of the existing lexical resources already created and widely used in projects like OpenOffice. This is still in alpha, but with an impressive list of supported languages (see this presentation for more).
  • Expansion stemming: takes a root word and 'expands' it to all of its various forms; can be used either at insertion time or at query time. One way to approach this is by using the SynonymFilterFactory.
  • KStem: an alternative to Porter for developers looking for a less aggressive stemmer.
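As a rough sketch of how reduction stemming is wired in, the filter below sits in a single <analyzer> block and therefore runs at both index and query time (the field name and tokenizer choice here are illustrative):

    <fieldtype name="stemmed" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- Porter reduction stemming, applied identically at index and query time -->
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
    </fieldtype>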



Analyzers

Analyzers are components that pre-process input text at index time and/or at search time. It’s important to use the same or similar analyzers that process text in a compatible manner at index and query time. For example, if an indexing analyzer lowercases words, then the query analyzer should do the same to enable finding the indexed words.

On wildcard and fuzzy searches, no text analysis is performed on the search word.

Most Solr users define custom Analyzers for their text field types, consisting of zero or more Char Filter Factories, one Tokenizer Factory, and zero or more Token Filter Factories; but it is also possible to configure a field type to use a concrete Analyzer implementation.

The Solr web admin interface may be used to show the results of text analysis, and even the results after each analysis phase when a configuration based analyzer is used.


Char Filters


A Char Filter is a component that pre-processes input characters (consuming and producing a character stream) and can add, change, or remove characters while preserving character position information.

Char Filters can be chained.
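A minimal sketch of chaining two Char Filters ahead of a tokenizer; they run in the order listed (the mapping file name follows the stock Solr example):

    <analyzer>
      <!-- strip HTML first, then fold ISO Latin 1 accents, then tokenize -->
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    </analyzer>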



Tokenizers

A Tokenizer splits a stream of characters (from each individual field value) into a series of tokens.

There can only be one Tokenizer in each Analyzer.


Token Filters

Tokens produced by the Tokenizer are passed through a series of Token Filters that add, change, or remove tokens. The field is then indexed by the resulting token stream.


Specifying an Analyzer in the schema

A Solr schema.xml file allows two methods for specifying the way a text field is analyzed. (Normally only field types of solr.TextField will have Analyzers explicitly specified in the schema):

  1. Specifying the class name of an Analyzer — anything extending org.apache.lucene.analysis.Analyzer.

    <fieldtype name="nametext" class="solr.TextField">
      <analyzer class="org.apache.lucene.analysis.WhitespaceAnalyzer"/>
    </fieldtype>
  2. Specifying a TokenizerFactory followed by a list of optional TokenFilterFactories that are applied in the listed order. Factories that can create the tokenizers or token filters are used to prepare configuration for the tokenizer or filter and avoid the overhead of creation via reflection.

    <fieldtype name="text" class="solr.TextField">
      <analyzer>
        <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- zero or more filters, applied in the order listed -->
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldtype>

Any Analyzer, CharFilterFactory, TokenizerFactory, or TokenFilterFactory may be specified using its full class name with package — just make sure they are in Solr’s classpath when you start your appserver. Classes in the org.apache.solr.analysis.* package can be referenced using the short alias solr.*.

If you want to use custom CharFilters, Tokenizers or TokenFilters, you’ll need to write a very simple factory that subclasses BaseTokenizerFactory or BaseTokenFilterFactory, something like this…


public class MyCustomFilterFactory extends BaseTokenFilterFactory {
  public TokenStream create(TokenStream input) {
    return new MyCustomFilter(input);
  }
}


When To use a CharFilter vs a TokenFilter

There are several pairs of CharFilters and TokenFilters that have related (e.g. MappingCharFilter and ASCIIFoldingFilter) or nearly identical (e.g. PatternReplaceCharFilterFactory and PatternReplaceFilterFactory) functionality, and it may not always be obvious which is the best choice.

The ultimate decision depends largely on what Tokenizer you are using, and whether you need to “outsmart” it by preprocessing the stream of characters.

For example, maybe you have a tokenizer such as StandardTokenizer and you are pretty happy with how it works overall, but you want to customize how some specific characters behave.

In such a situation you could modify the rules and re-build your own tokenizer with javacc, but perhaps it's easier to simply map some of the characters before tokenization with a CharFilter.


Notes On Specific Factories






CharFilterFactories

solr.MappingCharFilterFactory

Creates org.apache.lucene.analysis.MappingCharFilter.
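A sketch of a typical declaration, along with the mapping file format (the entries shown are illustrative):

    <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>

    # each line of the mapping file maps a source string to a target string
    "À" => "A"
    "é" => "e"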



solr.PatternReplaceCharFilterFactory

Creates org.apache.solr.analysis.PatternReplaceCharFilter. Applies a regex pattern to the string in the char stream, replacing matched occurrences with the specified replacement string.
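A minimal sketch (the pattern and replacement are illustrative); this one collapses whitespace after a "No." prefix so that "No. 42" reaches the tokenizer as "No.42":

    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="([nN][oO]\.)\s*(\d+)" replacement="$1$2"/>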



solr.HTMLStripCharFilterFactory

Creates org.apache.solr.analysis.HTMLStripCharFilter. HTMLStripCharFilter strips HTML from the input stream and passes the result to either another CharFilter or a Tokenizer. Like other CharFilters, it’s specified using a <charFilter> tag, and must come before the <tokenizer>. An example:
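A minimal sketch (the field name and tokenizer choice are illustrative):

    <fieldType name="html_text" class="solr.TextField">
      <analyzer>
        <charFilter class="solr.HTMLStripCharFilterFactory"/>
        <tokenizer class="solr.StandardTokenizerFactory"/>
      </analyzer>
    </fieldType>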



HTML stripping features:

  • The input need not be an HTML document, as only constructs that look like HTML will be removed.
  • Removes HTML/XML tags while keeping the content
    • Attributes within tags are also removed, and attribute quoting is optional.
  • Removes XML processing instructions: <?foo bar?>
  • Removes XML comments
  • Removes XML elements starting with <! and ending with >
  • Removes contents of <script> and <style> elements.
    • Handles XML comments inside these elements (normal comment processing won’t always work)
  • Replaces numeric character entity references like &#65; or &#x7f;
    • The terminating ';' is optional if the entity reference is followed by whitespace.
  • Replaces all named character entity references.
    • &nbsp; is replaced with a space instead of the non-breaking space character \u00A0
    • The terminating ';' is mandatory to avoid false matches on something like "Alpha&Omega Corp"

HTML stripping examples:

  • my <a href="www.foo.bar">link</a> ==> my link
  • <br>hello<!--comment--> ==> hello
  • hello<script><!-- f('<!--internal--></script>'); --></script> ==> hello
  • if a<b then print a; ==> if a<b then print a;
  • hello <td height=22 nowrap align="left"> ==> hello
  • a<b &#65; Alpha&Omega O ==> a<b A Alpha&Omega O
  • M&eacute;xico ==> México



TokenizerFactories

Solr provides the following TokenizerFactories:



solr.KeywordTokenizerFactory

Creates org.apache.lucene.analysis.core.KeywordTokenizer.

Treats the entire field as a single token, regardless of its content.

  • Example: "http://example.com/I-am+example?Text=-Hello" ==> "http://example.com/I-am+example?Text=-Hello"



solr.LetterTokenizerFactory

Creates org.apache.lucene.analysis.LetterTokenizer.

Creates tokens consisting of strings of contiguous letters. Any non-letter characters will be discarded.

  • Example: "I can't" ==> "I", "can", "t"




solr.WhitespaceTokenizerFactory

Creates org.apache.lucene.analysis.WhitespaceTokenizer.

Creates tokens of characters separated by splitting on whitespace.
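  • Example: "To be, or what?" ==> "To", "be,", "or", "what?"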



solr.LowerCaseTokenizerFactory

Creates org.apache.lucene.analysis.LowerCaseTokenizer.

Creates tokens by lowercasing all letters and dropping non-letters.

  • Example: "I can't" ==> "i", "can", "t"




solr.StandardTokenizerFactory

Creates org.apache.lucene.analysis.standard.StandardTokenizer.

A good general purpose tokenizer that strips many extraneous characters and sets token types to meaningful values. Token types are only useful for subsequent token filters that are type-aware. The StandardFilter is currently the only Lucene filter that utilizes token types.

Behavior by Solr version:

  • pre-3.1: some token types are number, alphanumeric, email, acronym, URL, etc.
    • Example: "I.B.M. cat's can't" ==> ACRONYM: "I.B.M.", APOSTROPHE: "cat's", APOSTROPHE: "can't"
  • Solr3.1: word boundary rules from Unicode standard annex UAX#29
    • Example: "I.B.M. 8.5 can't!!!" ==> ALPHANUM: "I.B.M.", NUM: "8.5", ALPHANUM: "can't"

Arguments:

  • maxTokenLength (default 255; Solr3.1, SOLR-2188): tokens longer than this are silently ignored.





solr.ClassicTokenizerFactory

Creates org.apache.lucene.analysis.standard.ClassicTokenizer.

This tokenizer preserves StandardTokenizer’s behavior pre-Solr 3.1: A good general purpose tokenizer that strips many extraneous characters and sets token types to meaningful values. Token types are only useful for subsequent token filters that are type-aware. The StandardFilter is currently the only Lucene filter that utilizes token types.

Some token types are number, alphanumeric, email, acronym, URL, etc. —

  • Example: "I.B.M. cat's can't" ==> ACRONYM: "I.B.M.", APOSTROPHE:"cat's", APOSTROPHE:"can't"
Arguments:

  • maxTokenLength (default 255; Solr3.1, SOLR-2188): tokens longer than maxTokenLength are silently ignored.





solr.UAX29URLEmailTokenizerFactory

Creates org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.

Like StandardTokenizer, this tokenizer implements the word boundary rules from Unicode standard annex UAX#29. In addition, this tokenizer recognizes: full URLs using the file://, http(s)://, and ftp:// schemes; hostnames with a registered TLD (top level domain, e.g. “.com”); IPv4 and IPv6 addresses; and e-mail addresses.

In addition to the token types output by StandardTokenizer from Solr3.1 onward, UAX29URLEmailTokenizer can also output <URL> and <EMAIL> token types.

  • Example: "Visit http://accarol.com/contact.htm?from=external&a=10 or e-mail bob.cratchet@accarol.com"
  • ==> ALPHANUM:"Visit", URL:"http://accarol.com/contact.htm?from=external&a=10", ALPHANUM:"or", ALPHANUM:"e-mail" EMAIL:"bob.cratchet@accarol.com"
arg default value note
maxTokenLength 255 Solr3.1 — SOLR-2188
Tokens longer than maxTokenLength are silently ignored.



solr.PatternTokenizerFactory

Breaks text at the specified regular expression pattern.

For example, suppose you have a list of terms delimited by a semicolon and zero or more spaces: mice; kittens; dogs.


   <fieldType name="semicolonDelimited">
        <tokenizer pattern=";\s*" />

See the javadoc for details.



solr.PathHierarchyTokenizerFactory

Solr3.1. Outputs file path hierarchies as synonyms.

  • Input: /usr/local/apache ==> output tokens /usr, /usr/local, /usr/local/apache (each token after the first has a position increment of 0, i.e. the hierarchy levels are emitted as synonyms)
  • With delimiter="\" and replace="/", backslash-delimited paths are tokenized the same way, with the delimiter replaced by "/" in the output tokens.


  <fieldType name="text_path" positionIncrementGap="100">
      <tokenizer delimiter="\" replace="/"/>



solr.ICUTokenizerFactory

Solr3.1. Uses ICU's text bounds capabilities to tokenize text.

This tokenizer first identifies the writing system “Script” for runs of text within the document. Then, it tokenizes the text according to rules or dictionaries depending upon the writing system. For example, if it encounters Thai, it will apply dictionary-based segmentation to split the Thai text (Thai uses no spaces between words).

  • Input: Testing บริษัทชื่อ נאסק”ר ==> each output token carries a script attribute (Latin, Thai, Hebrew here), and the Thai run is split by dictionary-based segmentation.


    <fieldType name="text_icu" autoGeneratePhraseQueries="false">

Note: to use this tokenizer, see solr/contrib/analysis-extras/README.txt for instructions on which jars you need to add to your SOLR_HOME/lib






TokenFilterFactories

solr.StandardFilterFactory

Creates org.apache.lucene.analysis.standard.StandardFilter.

Removes dots from acronyms and ‘s from the end of tokens. Works only on typed tokens, i.e., those produced by StandardTokenizer or equivalent.

  • Example of StandardTokenizer followed by StandardFilter:
    • "I.B.M. cat's can't" ==> "IBM", "cat", "can't"




solr.LowerCaseFilterFactory

Creates org.apache.lucene.analysis.LowerCaseFilter.

Lowercases the letters in each token. Leaves non-letter tokens alone.

  • Example: "I.B.M.", "Solr" ==> "i.b.m.", "solr".





solr.TrimFilterFactory

Creates org.apache.solr.analysis.TrimFilter.

Trims whitespace at either end of a token.

  • Example: " Kittens!   ", "Duck" ==> "Kittens!", "Duck".

Optionally, the “updateOffsets” attribute will update the start and end position offsets.



solr.PatternReplaceFilterFactory

Like the PatternReplaceCharFilterFactory, but operates post-tokenization. See “When To use a CharFilter vs a TokenFilter” above.
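A minimal sketch (the pattern and replacement are illustrative); replace="all" rewrites every match within each token, while replace="first" rewrites only the first:

    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([^a-z0-9])" replacement="" replace="all"/>

This example strips any character outside [a-z0-9] from each token.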




solr.StopFilterFactory

Creates org.apache.lucene.analysis.StopFilter.

Discards common words.

The default English stop words are:


    "a", "an", "and", "are", "as", "at", "be", "but", "by",
    "for", "if", "in", "into", "is", "it",
    "no", "not", "of", "on", "or", "s", "such",
    "t", "that", "the", "their", "then", "there", "these",
    "they", "this", "to", "was", "will", "with"

A customized stop word list may be specified with the “words” attribute in the schema. Optionally, the “ignoreCase” attribute may be used to ignore the case of tokens when comparing to the stopword list.


<fieldtype name="teststop">
     <filter words="stopwords.txt" ignoreCase="true"/>




solr.CommonGramsFilterFactory

Creates org.apache.solr.analysis.CommonGramsFilter. (Solr1.4)

Makes shingles (i.e. the_cat) by combining common tokens (usually the same as the stop words list) and regular tokens. CommonGramsFilter is useful for issuing phrase queries (i.e. “the cat”) that contain stop words. Normally phrases containing stop words would not match their intended target and instead, the query “the cat” would match all documents containing “cat”, which can be undesirable behavior. Phrase query slop (eg, “the cat”~2) will not function as intended because common grams are indexed as shingled tokens that are adjacent to each other (i.e. the_cat is indexed as a single term). The CommonGramsQueryFilter converts the phrase query “the cat” into the single term query the_cat.

A customized common word list may be specified with the “words” attribute in the schema. Optionally, the “ignoreCase” attribute may be used to ignore the case of tokens when comparing to the common words list.


<fieldtype name="testcommongrams">
     <filter words="stopwords.txt" ignoreCase="true"/>




solr.EdgeNGramFilterFactory

Creates org.apache.solr.analysis.EdgeNGramTokenFilter.

By default, creates n-grams from the beginning edge of an input token.

With the configuration below, the string value Nigerian gets broken down into the following terms:

Nigerian => "ni", "nig", "nige", "niger", "nigeri", "nigeria", "nigerian"

By default, minGramSize is 1, maxGramSize is 1, and side is "front". You can also generate the n-grams from right to left by setting side to "back".

minGramSize is the minimum number of characters to start with. For example, minGramSize=4 would mean that a word like Apache yields the two stored tokens "Apac" and "Apache".

This FilterFactory is very useful for matching substrings of indexed terms at query time.


<fieldtype name="testedgengrams">
     <filter minGramSize="2" maxGramSize="15" side="front"/>




solr.KeepWordFilterFactory

Creates org.apache.solr.analysis.KeepWordFilter. (Solr1.3)

Keeps only the words on a list; this is the inverse behavior of StopFilterFactory. The word file format is identical.


<fieldtype name="testkeep">
     <filter words="keepwords.txt" ignoreCase="true"/>




solr.LengthFilterFactory

Creates solr.LengthFilter.

Filters out those tokens *not* having length min through max inclusive.


<fieldtype name="lengthfilt">
    <filter min="2" max="5" />




solr.WordDelimiterFilterFactory

Creates solr.analysis.WordDelimiterFilter.

Splits words into subwords and performs optional transformations on subword groups. By default, words are split into subwords with the following rules:

  • split on intra-word delimiters (all non alpha-numeric characters).
    • "Wi-Fi" -> "Wi", "Fi"
  • split on case transitions (can be turned off – see splitOnCaseChange parameter)
    • "PowerShot" -> "Power", "Shot"
  • split on letter-number transitions (can be turned off – see splitOnNumerics parameter)
    • "SD500" -> "SD", "500"
  • leading and trailing intra-word delimiters on each subword are ignored
    • "//hello---there, 'dude'" -> "hello", "there", "dude"
  • trailing “‘s” are removed for each subword (can be turned off – see stemEnglishPossessive parameter)
    • "O'Neil's" -> "O", "Neil"
      • Note: this step isn’t performed in a separate filter because of possible subword combinations.

Splitting is affected by the following parameters:

  • splitOnCaseChange="1" causes lowercase => uppercase transitions to generate a new part [Solr 1.3]:
    • "PowerShot" => "Power" "Shot"
    • "TransAM" => "Trans" "AM"
    • default is true ("1"); set to 0 to turn off
  • splitOnNumerics="1" causes alphabet => number transitions to generate a new part [Solr 1.3]:
    • "j2se" => "j" "2" "se"
    • default is true ("1"); set to 0 to turn off
  • stemEnglishPossessive="1" causes trailing "'s" to be removed for each subword:
    • "Doug's" => "Doug"
    • default is true ("1"); set to 0 to turn off

Note that this is the default behaviour in all released versions of Solr.

There are also a number of parameters that affect what tokens are present in the final output and if subwords are combined:

  • generateWordParts="1" causes parts of words to be generated:
    • "PowerShot" => "Power" "Shot" (if splitOnCaseChange=1)
    • "Power-Shot" => "Power" "Shot"
    • default is 0
  • generateNumberParts="1" causes number subwords to be generated:
    • "500-42" => "500" "42"
    • default is 0
  • catenateWords="1" causes maximum runs of word parts to be catenated:
    • "wi-fi" => "wifi"
    • default is 0
  • catenateNumbers="1" causes maximum runs of number parts to be catenated:
    • "500-42" => "50042"
    • default is 0
  • catenateAll="1" causes all subword parts to be catenated:
    • "wi-fi-4000" => "wifi4000"
    • default is 0
  • preserveOriginal="1" causes the original token to be indexed without modifications (in addition to the tokens produced by the other options)
    • default is 0
  • protected="protwords.txt" specifies a text file containing a list of words that should be protected and passed through unchanged.
    • default is empty (no protected words)

These parameters may be combined in any way.

  • Example of generateWordParts="1" and catenateWords="1":
    • "PowerShot" -> 0:"Power", 1:"Shot", 1:"PowerShot"
      (where 0,1,1 are token positions)
    • "A's+B's&C's" -> 0:"A", 1:"B", 2:"C", 2:"ABC"
    • "Super-Duper-XL500-42-AutoCoder!" -> 0:"Super", 1:"Duper", 2:"XL", 2:"SuperDuperXL", 3:"500", 4:"42", 5:"Auto", 6:"Coder", 6:"AutoCoder"

One use for WordDelimiterFilter is to help match words with different delimiters. One way of doing so is to specify generateWordParts="1" catenateWords="1" in the analyzer used for indexing, and generateWordParts="1" in the analyzer used for querying. Given that the current StandardTokenizer immediately removes many intra-word delimiters, it is recommended that this filter be used after a tokenizer that leaves them in place (such as WhitespaceTokenizer).


    <fieldtype name="subword">
      <analyzer type="query">
      <analyzer type="index">

In some cases you might want to adjust how WordDelimiterFilter splits on a per-character basis. To do this, you can supply a configuration file with the “types” attribute that specifies custom character categories. An example file is available in the Solr Subversion repository.



solr.SynonymFilterFactory

Creates SynonymFilter.

Matches strings of tokens and replaces them with other strings of tokens.

  • The synonyms parameter names an external file defining the synonyms.
  • If ignoreCase is true, matching will lowercase before checking equality.
  • If expand is true, a synonym will be expanded to all equivalent synonyms. If it is false, all equivalent synonyms will be reduced to the first in the list.
  • The optional tokenizerFactory parameter names a tokenizer factory class to analyze synonyms (see https://issues.apache.org/jira/browse/SOLR-319), which can help with the synonym+stemming problem described in http://search-lucene.com/m/hg9ri2mDvGk1 .

Example usage in schema:


    <fieldtype name="syn">
          <filter synonyms="syn.txt" ignoreCase="true" expand="false"/>

Synonym file format:


# blank lines and lines starting with pound are comments.

#Explicit mappings match any token sequence on the LHS of "=>"
#and replace with all alternatives on the RHS.  These types of mappings
#ignore the expand parameter in the schema.
i-pod, i pod => ipod
sea biscuit, sea biscit => seabiscuit

#Equivalent synonyms may be separated with commas and give
#no explicit mapping.  In this case the mapping behavior will
#be taken from the expand parameter in the schema.  This allows
#the same synonym file to be used in different synonym handling strategies.
ipod, i-pod, i pod
foozball , foosball
universe , cosmos

# If expand==true, "ipod, i-pod, i pod" is equivalent to the explicit mapping:
ipod, i-pod, i pod => ipod, i-pod, i pod
# If expand==false, "ipod, i-pod, i pod" is equivalent to the explicit mapping:
ipod, i-pod, i pod => ipod

#multiple synonym mapping entries are merged.
foo => foo bar
foo => baz
#is equivalent to
foo => foo bar, baz

Keep in mind that while the SynonymFilter will happily work with synonyms containing multiple words (e.g. “sea biscuit, sea biscit, seabiscuit”), the recommended approach for dealing with synonyms like this is to expand the synonym when indexing. This is because there are two potential issues that can arise at query time:

  1. The Lucene QueryParser tokenizes on white space before giving any text to the Analyzer, so if a person searches for the words sea biscit, the analyzer will be given the words “sea” and “biscit” separately, and will not know that they match a synonym.
  2. Phrase searching (i.e. "sea biscit") will cause the QueryParser to pass the entire string to the analyzer, but if the SynonymFilter is configured to expand the synonyms, then when the QueryParser gets the resulting list of tokens back from the Analyzer, it will construct a MultiPhraseQuery that will not have the desired effect. This is because of the limited mechanism available for the Analyzer to indicate that two terms occupy the same position: there is no way to indicate that a “phrase” occupies the same position as a term. For our example, the resulting MultiPhraseQuery would be "(sea | sea | seabiscuit) (biscuit | biscit)", which would not match the simple case of “seabiscuit” occurring in a document.

Even when you aren’t worried about multi-word synonyms, idf differences still make index time synonyms a good idea. Consider the following scenario:

  • An index with a “text” field, which at query time uses the SynonymFilter with the synonym TV, Television and expand="true"
  • Many thousands of documents containing the term “text:TV”
  • A few hundred documents containing the term “text:Television”

A query for text:TV will expand into (text:TV text:Television), and the lower docFreq for text:Television will give the documents that match “Television” a much higher score than comparable docs that match “TV”, which may be somewhat counterintuitive to the client. Index time expansion (or reduction) will result in the same idf for all documents regardless of which term the original text contained.




solr.RemoveDuplicatesTokenFilterFactory

Creates org.apache.solr.analysis.RemoveDuplicatesTokenFilter.

Filters out any tokens which are at the same logical position in the token stream as a previous token with the same text. This situation can arise in a number of ways depending on what the “upstream” token filters are, notably when stemming synonyms with similar roots. It is useful to remove the duplicates to prevent idf inflation at index time, or tf inflation (in a MultiPhraseQuery) at query time.
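The declaration takes no arguments; it is typically placed at the end of the filter chain (a sketch):

    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>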




solr.ISOLatin1AccentFilterFactory

Creates org.apache.lucene.analysis.ISOLatin1AccentFilter.

Replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) with their unaccented equivalents. Note that this filter is deprecated in favor of ASCIIFoldingFilterFactory.




solr.ASCIIFoldingFilterFactory

Creates org.apache.lucene.analysis.ASCIIFoldingFilter.

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the “Basic Latin” Unicode block) into their ASCII equivalents, if one exists.
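The declaration takes no arguments (a sketch):

    <filter class="solr.ASCIIFoldingFilterFactory"/>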



See the ASCIIFoldingFilter Javadocs for more details.





solr.PhoneticFilterFactory

Creates org.apache.solr.analysis.PhoneticFilter.

Uses Apache Commons Codec to generate phonetically similar tokens. Five encoding methods are currently supported.

Arguments:

  • encoder: one of DoubleMetaphone, Metaphone, Soundex, RefinedSoundex, Caverphone (Solr3.1)
  • inject: true/false; true adds tokens to the stream, false replaces the existing token
  • maxCodeLength: integer; sets the maximum length of the generated code. Supported only for the Metaphone and DoubleMetaphone encoders


  <filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone" inject="true"/>





solr.ShingleFilterFactory

Creates org.apache.lucene.analysis.shingle.ShingleFilter.

A ShingleFilter constructs shingles (token n-grams) from a token stream. In other words, it creates combinations of tokens as a single token.

For example, the sentence “please divide this sentence into shingles” might be tokenized into shingles “please divide”, “divide this”, “this sentence”, “sentence into”, and “into shingles”.

Arguments:

  • maxShingleSize (default 2)
  • minShingleSize (default 2; Solr3.1, SOLR-1740)
  • outputUnigrams (default true)
  • outputUnigramsIfNoShingles (default false; Solr3.1, SOLR-744)
  • tokenSeparator (default " "; Solr3.1, SOLR-1740)


  <filter class="solr.ShingleFilterFactory" maxShingleSize="2" outputUnigrams="true"/>





solr.PositionFilterFactory

Creates org.apache.lucene.analysis.position.PositionFilter.

A PositionFilter manipulates the position of tokens in the stream.

Sets the position increment of all tokens to the configured “positionIncrement” value, except the first token, which retains its original position increment.

Arguments:

  • positionIncrement (default 0)


  <filter class="solr.PositionFilterFactory"/>

PositionFilter can be used with a query Analyzer to prevent expensive Phrase and MultiPhraseQueries. When QueryParser parses a query, it first divides text on whitespace, and then analyzes each whitespace token. Some TokenStreams such as StandardTokenizer or WordDelimiterFilter may divide one of these whitespace-separated tokens into multiple tokens.

The QueryParser will turn “multiple tokens” into a Phrase or MultiPhraseQuery, but “multiple tokens at the same position with only a position count of 1” is treated as a special case. You can use PositionFilter at the end of your QueryAnalyzer to force any subsequent tokens after the first one to have a position increment of zero, to trigger this case.

For example, by default a query of “Wi-Fi” with StandardTokenizer will create a PhraseQuery:


field:"Wi Fi"

If you instead wrap the StandardTokenizer with PositionFilter, the “Fi” will have a position increment of zero, creating a BooleanQuery:


field:Wi field:Fi

Another example is when exact-match hits are wanted for _any_ shingle within the query. (This was done at http://sesam.no to replace three proprietary ‘FAST Query-Matching servers’ with two open-sourced Solr indexes; background reading is available in sesat and on the mailing list.) The requirement was that all words and shingles in the query be placed at the same position, so that all shingles are treated as synonyms of each other.

With only the ShingleFilter, the generated shingles are synonyms only of the first term in each shingle group. For example, the query “abcd efgh ijkl” results in a query like:

  • (“abcd” “abcd efgh” “abcd efgh ijkl”) (“efgh” “efgh ijkl”) (“ijkl”)

where “abcd efgh” and “abcd efgh ijkl” are synonyms of “abcd”, and “efgh ijkl” is a synonym of “efgh”.

ShingleFilter does not offer a way to alter this behaviour.

Using the PositionFilter in combination makes it possible to make all shingles synonyms of each other. Such a configuration could look like:


   <fieldType name="shingleString" positionIncrementGap="100" omitNorms="true">
      <analyzer type="index">
      <analyzer type="query">
        <filter outputUnigrams="true" outputUnigramIfNoNgram="true" maxShingleSize="99"/>
        <filter />





solr.ReversedWildcardFilterFactory

A filter that reverses tokens to provide faster leading wildcard and prefix queries. Add this filter to the index analyzer, but not the query analyzer. The standard Solr query parser (SolrQuerySyntax) will use this to reverse wildcard and prefix queries to improve performance (for example, translating myfield:*foo into myfield:oof*). To avoid collisions and false matches, reversed tokens are indexed with a prefix that should not otherwise appear in indexed text.
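A sketch of a typical declaration in an index analyzer; the attribute values mirror those in the stock example schema and should be treated as illustrative:

    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>

withOriginal="true" indexes the un-reversed token as well, so ordinary (non-wildcard) queries still match.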

See the javadoc for more details, or the example schema.





solr.CollationKeyFilterFactory

A filter that lets one specify:

  1. A system collator associated with a locale, or
  2. A collator based on custom rules

This can be used to change the sort order for non-English languages as well as to modify the collation sequence for certain languages. You must use the same CollationKeyFilter at both index time and query time for correct results. Also, the JVM vendor and version (including patch version) of the slave should be exactly the same as those of the master (or indexer) for consistent results.
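A sketch using a locale-derived system collator (the language/country/strength values here are illustrative):

    <filter class="solr.CollationKeyFilterFactory" language="en" country="US" strength="primary"/>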

Also see

  1. Javadocs
  2. Lucene 2.9.1 contrib-collation documentation
  3. Lucene’s CollationKeyFilter javadocs
  4. UnicodeCollation




solr.ICUCollationKeyFilterFactory

This filter works like CollationKeyFilterFactory, except it uses ICU for collation. This produces smaller and faster sort keys, and it supports more locales. See UnicodeCollation for more information; the same concepts apply.

The only configuration difference is that locales should be specified to this filter with RFC 3066 locale IDs.


    <fieldType name="icu_sort_en">
        <filter locale="en" strength="primary"/>

Note: to use this filter, see solr/contrib/analysis-extras/README.txt for instructions on which jars you need to add to your SOLR_HOME/lib




solr.ICUNormalizer2FilterFactory

This filter normalizes text to a Unicode Normalization Form.


    <fieldType name="normalized">
        <filter name="nfkc_cf" mode="compose"/>

These are the supported normalization forms:


NFC: name="nfc" mode="compose"
NFD: name="nfc" mode="decompose"
NFKC: name="nfkc" mode="compose"
NFKD: name="nfkc" mode="decompose"
NFKC_Casefold: name="nfkc_cf" mode="compose"

NFKC_Casefold (nfkc_cf) means applying the Unicode Case-Folding algorithm in conjunction with NFKC normalization. Unicode Case-Folding is more than lowercasing; e.g., it handles cases like ß/SS. Behind the scenes this is its own form (nfkc_cf), but both algorithms have been recursively computed across all of Unicode offline, so that it's an efficient single-pass algorithm. For practical purposes this means you can use this factory with nfkc_cf as a better substitute for the combined behavior of LowerCaseFilter and NFKC normalization.

If you want to do more advanced normalization (e.g. apply a filter to work only on a subset of Unicode), see the javadocs.

Note: to use this filter, see solr/contrib/analysis-extras/README.txt for instructions on which jars you need to add to your SOLR_HOME/lib




solr.ICUFoldingFilterFactory

This filter is a custom Unicode normalization form that applies the foldings specified in UTR#30 in addition to NFKC_Casefold.


    <fieldType name="folded">

This means NFKC normalization, Unicode case folding, and search term folding (removing accents, etc.) have been recursively computed across all of Unicode offline, so that it's an efficient single pass through the string. For practical purposes this means you can use this factory as a better substitute for the combined behavior of ASCIIFoldingFilter, LowerCaseFilter, and ICUNormalizer2Filter.

Note: to use this filter, see solr/contrib/analysis-extras/README.txt for instructions on which jars you need to add to your SOLR_HOME/lib




solr.ICUTransformFilterFactory

This filter applies ICU Transforms to text.

Currently the filter only supports System transforms (and compounds consisting of them); custom rulesets are not yet supported.


    <fieldType name="transformed">
        <filter id="Traditional-Simplified"/>

You can see a list of the supported System transforms in the ICU transform demo by clicking the drop-down and scrolling down to System.

Note: to use this filter, see solr/contrib/analysis-extras/README.txt for instructions on which jars you need to add to your SOLR_HOME/lib