Efficient personal search at large scale

Vespa includes a relatively unknown mode which provides personal search at massive scale for a fraction of the cost of alternatives: streaming search. In this article we explain streaming search and how to use it.

Imagine you are tasked with building the next Gmail, a massive personal data store centered around search. How do you do it? An obvious answer is to just use a regular search engine, write all documents to a big index and simply restrict queries to match documents belonging to a single user.

This works, but the problem is cost. Successful personal data stores have a tendency to become massive — the amount of personal data produced in the world outweighs public data by many orders of magnitude. Storing indexes in addition to the raw data means paying for extra disk space for all this data, and paying for the overhead of updating this massive index each time a user changes or adds data. Index updates are costly, especially when they need to be handled in real time, which users often expect for their own data. Systems like Gmail handle billions of writes per day, so this quickly becomes the dominant cost of the entire system.

However, when you think about it, there’s really no need to go through the trouble of maintaining global indexes when each user only searches her own data. What if we just maintain a separate small index per user? This makes both index updates and queries cheaper, but leads to a new problem: writes will arrive randomly over all users, which means we’ll need to read and write a user’s index on every update without help from caching. A billion writes per day translates to about 25,000 read-and-write operations per second at peak. Handling traffic at that scale means either using a few thousand spinning disks, or storing all data on SSDs. Both options are expensive.

Large-scale data stores already solve this problem for appending writes by using some variant of multilevel log storage. Could we layer the index on top of a data store like that? That helps, but it means we need to do our own development to put these systems together in a way that performs at scale for both queries and writes. And we still pay the cost of storing the indexes in addition to the raw user data.

Do we need indexes at all, though? With some reflection, it turns out that we don’t. Indexes consist of pointers from words/tokens to the documents containing them. This allows us to find those documents faster than would be possible if we had to read the content of the documents to find the right ones, of course at the considerable cost of maintaining those indexes. In personal search, however, any query only accesses a small subset of the data, and the subsets are known in advance. If we take care to store the data of each subset together, we can achieve search with low latency by simply reading the data at query time — what we call streaming search. In most cases, most subsets of data (i.e. most users) are so small that this can be done serially on a single node. Subsets of data that are too large to stream quickly on a single node can be split over multiple nodes streaming in parallel.
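
To make the idea concrete, here is a minimal Python sketch of what a streaming query amounts to. This is purely illustrative and not how Vespa implements it: the point is simply that the user's raw documents are stored together and matched by reading them at query time, with no index involved.

```python
def streaming_search(user_documents, query):
    """Match by reading one user's raw data at query time instead of
    consulting an index. user_documents holds that user's documents,
    stored together so they can be scanned sequentially."""
    query_terms = set(query.lower().split())
    return [doc for doc in user_documents
            if query_terms & set(doc.lower().split())]

# The "index" is just the user's own data, read on demand.
docs = ["Lunch tomorrow at noon?", "Your flight itinerary to Oslo"]
print(streaming_search(docs, "lunch noon"))
```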

Numbers

How many documents can be searched per node per second with this solution? Assuming a node with 500 MB/sec read speed (either from an SSD or multiple spinning disks), and an average compressed document size of 1 KB, the node can scan at most 500 MB/sec ÷ 1 KB/doc = 500,000 docs/sec. If each user stores 1,000 documents on average, this gives a max throughput per node of 500 queries/second. This is not an exact computation, since we disregard time spent seeking and writing and the inefficiency of reading non-compacted data on one hand, while assuming an overly pessimistic zero effect from caching on the other, but it is a good indication that our solution is cost effective.
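
Spelled out as a tiny snippet, using the read speed, document size, and documents-per-user figures assumed above:

```python
read_speed = 500_000_000      # bytes/sec a node can stream from disk (assumed)
doc_size = 1_000              # average compressed document size in bytes (assumed)
docs_per_user = 1_000         # average documents per user (assumed)

docs_scanned_per_sec = read_speed / doc_size             # 500,000 docs/sec
queries_per_sec = docs_scanned_per_sec / docs_per_user   # 500 queries/sec per node
print(docs_scanned_per_sec, queries_per_sec)
```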

What about latency? From the calculation above we see that the latency from finding the matching documents will be 2 ms on average. However, we usually care more about the 99% latency (or similar). This will be driven by large users which needs to be split among multiple nodes streaming in parallel. The max data size per node is then a tradeoff between latency for such users and the overall cost of executing their queries (less nodes per query is cheaper). For example, we can choose to store max 50.000 documents per user per node such that we get a max latency of 100 ms per query. Lastly, the total number of nodes decides the max parallelism and hence latency for the very largest users. For example, with 20 nodes in total a cluster we can support 20 * 50k = 1 million documents for a single user with 100 ms latency.
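
The corresponding latency figures, using the same assumptions plus the 50,000-documents-per-node cap chosen above:

```python
docs_scanned_per_sec = 500_000      # from the throughput estimate above

avg_user_docs = 1_000
avg_latency_ms = avg_user_docs / docs_scanned_per_sec * 1000                  # 2 ms

max_docs_per_user_per_node = 50_000
capped_latency_ms = max_docs_per_user_per_node / docs_scanned_per_sec * 1000  # 100 ms

nodes = 20
largest_user = nodes * max_docs_per_user_per_node                             # 1,000,000 docs
print(avg_latency_ms, capped_latency_ms, largest_user)
```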

All right — with this we have our cost-effective solution for implementing the next Gmail: store just the raw data of users in a multilevel log store. Locate the data of each user on a single node in the system for locality (or, really, 2–3 nodes for redundancy), but split it over multiple nodes for users that grow large. Implement a fully functional search and relevance engine on top of the raw data store, which distributes queries to the right set of nodes for each user and merges the results. This will be cheap and efficient, but it sounds like a lot of work! It sure would be nice if somebody had already done all of it, run it at large scale for years and then released it as open source.

Well, as luck would have it, we already did this in Vespa. In addition to the standard indexing mode, Vespa includes a streaming mode for documents which provides this solution, implemented by layering the full search engine functionality over the raw data store built into Vespa. When we compare this to indexed search in Vespa, or to more complicated sharding solutions in Elasticsearch, for personal search applications we typically see about an order of magnitude reduction in the cost of a system that can sustain the query and update rates the application needs, with stable latencies over long time periods. It has been used to implement applications such as storing and searching massive amounts of mail, personal typeahead suggestions, personal image collections, and private forum group content.

Using streaming search on Vespa

The steps to using streaming search on Vespa are:

  • Set streaming mode for the document type(s) in question in services.xml.
  • Write documents with a group name (e.g. a user id) in their id, by setting g=[groupid] in the third part of the document id, e.g. id:mynamespace:mydocumenttype:g=user123:doc123
  • Pass the group id at query time by setting the query property streaming.groupname (see the sketch below).
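
Below is a rough sketch of what this looks like against a running Vespa instance, using Python and Vespa's HTTP document and search APIs. The endpoint, namespace, document type, and field names are hypothetical placeholders; the first step above corresponds to setting mode="streaming" on the document element in the content cluster of services.xml.

```python
import requests  # assumes a Vespa instance on localhost:8080 with streaming mode configured

ENDPOINT = "http://localhost:8080"                     # hypothetical endpoint
NAMESPACE, DOCTYPE = "mynamespace", "mydocumenttype"   # names from the steps above

def feed(group, local_id, fields):
    # Writes id:mynamespace:mydocumenttype:g=<group>:<local_id> via the document/v1 API
    url = f"{ENDPOINT}/document/v1/{NAMESPACE}/{DOCTYPE}/group/{group}/{local_id}"
    return requests.post(url, json={"fields": fields}).json()

def search(group, text):
    # Restricts the query to one group's (user's) data by passing streaming.groupname
    params = {
        "yql": f"select * from sources {DOCTYPE} where userQuery()",
        "query": text,
        "streaming.groupname": group,
    }
    return requests.get(f"{ENDPOINT}/search/", params=params).json()

feed("user123", "doc123", {"subject": "Lunch tomorrow?", "body": "Meet at noon."})
print(search("user123", "lunch"))
```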

That’s it! With those steps you have created a scalable, battle-proven personal search solution which is an order of magnitude cheaper than any alternative out there, with full support for structured and text search, advanced relevance including natural language and machine-learned models, and powerful grouping and aggregation for features like faceting. For more details see the documentation on streaming search. Have fun with it, and as usual let us know what you are building!

Efficient open-domain question-answering on Vespa.ai

Open-domain question-answering has emerged as a benchmark for measuring a
system’s capability to read, represent, and retrieve general knowledge.
Retrieval-based question-answering systems require connecting various systems
and services, such as BM25 text search, vector similarity search, NLP model
serving, tokenizers, and middleware to glue all this together. Most of these
are core features of Vespa.ai. In this post, we reproduce the state-of-the-art
baseline for retrieval-based question-answering systems within a single,
scalable, production-ready application on Vespa.ai.

Introduction

Some of the most effective drivers of scientific progress are benchmarks.
Benchmarks provide a common goal, a purpose, for improving the state-of-the-art
on datasets that are available to everyone. Leaderboards additionally add a
competitive motivation, offering the opportunity to excel among peers. And
rather than just endlessly tinkering to improve relevant metrics, competitions
add deadlines that spur researchers to actually get things done.

Within the field of machine learning, benchmarks have been particularly
important to stimulate innovation and progress. A new competition, the
Efficient Open-Domain Question Answering challenge for NeurIPS 2020, seeks to
advance the state-of-the-art in question answering systems. The goal here is to
develop a system capable of answering questions without any topic restriction.
With all the recent progress in natural language processing, this area has
emerged as a benchmark for measuring a system’s capability to read, represent,
and retrieve general knowledge.

The current retrieval-based state-of-the-art is the Dense Passage Retrieval
system, as described in the Dense
Passage Retrieval for Open-Domain Question Answering
paper. It consists of a set of Python
scripts, tools, and models developed primarily for research. There are a lot
of parts in such a system. These include two BERT-based models for encoding
text to embedding vectors, another BERT-based model for extracting answers,
approximate nearest-neighbor similarity search and text-based BM25 methods for
retrieving candidates, tokenizers, and so on. It’s not trivial to bring such a
system to production. We thought it would be interesting to consolidate these
different parts and demonstrate how to build an open-domain question-answering
serving system with Vespa.ai that achieves state-of-the-art accuracy.

Most of these components are core features in Vespa. A while ago, we improved
Vespa’s text search support for term-based retrieval and ranking. We recently
added efficient approximate nearest neighbors for semantic, dense vector
recall. For hybrid retrieval, Vespa supports many types of machine-learned
models, for instance neural networks and decision forests. We have also
improved our support for TensorFlow and PyTorch models to run larger NLP and
Transformer models.

This is interesting because, while such systems have obvious benefits in a
research setting, their real value lies in their end use in applications. Vespa
is designed as a highly performant and scalable production-ready system. Thus,
it offers a simplified path to production deployment without the complexity of
maintaining many different subsystems. That makes Vespa an attractive package.

During this blog post, we’ll touch upon

  • Fast approximate nearest-neighbor search for semantic, dense vector retrieval.
  • Term-based (BM25) retrieval over sparse representations.
  • Importing multiple pre-trained BERT-based models into Vespa for encoding embedding vectors and extracting answers.
  • Custom logic for tokenization and other middleware tasks.

For more details we refer to the companion sample
application.

This post’s goal is to recreate the Dense Passage Retrieval (DPR) paper results
for the Natural Questions
benchmark.
We’ll first go through a high-level overview of how a retrieval-based
question-answering system works in the context of this paper. Then we’ll show
how this can all be implemented on Vespa without any external services or
plugins, and recreate the paper’s state-of-the-art results as measured by the
exact match of answers given a set of questions. We’ll wrap up with a look to
the future and the next post in this series.

Background: the anatomy of a retrieval-based QA system

The Natural Questions benchmark consists of natural language questions and
answers. How to retrieve and represent the knowledge required to answer the
questions is up to each system. There are two main approaches to this:
retrieval and parametric. A retrieval-based system uses search terms or
semantic representation vectors to recall a set of sentences or paragraphs
subsequently evaluated with a machine-learned model to extract the exact
answer. A parametric system stores the answers more or less directly in the
weights of a large neural network model. There has also been research into
hybrid retrieval and parametric systems such as the Retrieval-Augmented
Generation system, which recently improved
the state-of-the-art for the Natural Questions benchmark as a whole. In this
blog post, we’ll focus on a retrieval-based system, but will explore parametric
and hybrid approaches in later blog posts.

A retrieval-based question answering system typically stores its “knowledge” in
an information retrieval system. This can be sentences, paragraphs, or entire
documents. Here we’ll use the Wikipedia dataset where each page is split into
passages of 100 words each. The dataset contains 21 million such passages. When
answering a question, we first retrieve the passages that most likely include
the answer. They are then analyzed with a machine-learned model to extract the
spans that most likely result in the correct response. These stages are called
the “retriever” and “reader”, respectively.
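
As a simple illustration of that pre-processing step (a simplification; the actual dataset preparation is described in the DPR paper and the sample application), splitting a page into fixed-size passages could look like this:

```python
def to_passages(page_text, words_per_passage=100):
    """Split a Wikipedia page into consecutive ~100-word passages,
    mirroring the unit of retrieval described above (illustration only)."""
    words = page_text.split()
    return [" ".join(words[i:i + words_per_passage])
            for i in range(0, len(words), words_per_passage)]
```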

Extracting the answer from passages

The retriever

The retriever is responsible for generating a set of candidate passages. Since
the subsequent reader component is expensive to evaluate, it is crucial to have
an effective retrieval mechanism. There are two main approaches to passage
retrieval: term-based (sparse), such as BM25, and embedding-based (dense) vectors,
which each have their strengths and weaknesses.

Term-based (sparse) retrieval

Term-based retrieval is the classic information retrieval method and covers
algorithms such as TF-IDF and BM25. Conceptually, the text is represented by a
vector where each dimension represents a term in a vocabulary. A non-zero value
signifies its presence. As each text only contains a subset of possible terms
in the vocabulary, these vectors are large and sparse. The similarity between
two texts, for instance, a document and a query, can be computed by a dot
product between the sparse vectors with slight modifications (e.g., term
importance) for TF-IDF or BM25. Term-based methods rely on inverted index
structures for efficient retrieval. This can in some cases be further
accelerated by algorithms such as WAND.
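
Here is a toy illustration of that representation and scoring. Real systems use inverted indexes and BM25 or TF-IDF term weighting rather than raw counts, so treat this only as a sketch of the concept:

```python
from collections import Counter

def sparse_vector(text):
    # Bag-of-words "vector": term -> count; absent terms are implicitly zero
    return Counter(text.lower().split())

def dot_product(q, d):
    # Only terms present in both texts contribute, which inverted indexes exploit
    return sum(weight * d[term] for term, weight in q.items())

docs = ["the capital of norway is oslo", "oslo is a city in norway"]
query = sparse_vector("capital of norway")
print([dot_product(query, sparse_vector(doc)) for doc in docs])  # [3, 1]: first doc matches best
```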

Except for any pre-processing such as lemmatization, stemming, and possibly
stop-word removal, terms are matched exactly as found in the text. This can be
a strength as well as a weakness. For salient terms, e.g. names and
places, this cuts down the search space significantly. However, potentially
relevant documents that don’t contain the exact term will not be retrieved
unless one uses query expansion or related techniques. The Dense Passage
Retrieval (DPR) paper uses Elasticsearch to provide BM25 retrieval.

Embedding-based (dense) retrieval

The number of potential terms in a vocabulary can be vast indeed. The basic
idea behind embedding vectors is to compress this high dimensional sparse
vector to a much smaller dense vector where most dimensions contain a non-zero
value. This has the effect of projecting a query or document vector into a
lower-dimensional space. This can be done so that vectors that are close
geometrically are also close semantically. The DPR paper uses two BERT models
to encode text: one for encoding queries and one for encoding documents. The
two models are trained simultaneously in a two-tower configuration to maximize
the dot product for passages likely to answer the question.

In contrast to the sparse representation, there are no exact methods for
finding the nearest neighbors efficiently. So we trade accuracy for efficiency
in what is called approximate nearest neighbors (ANN). Many different methods
for ANN search have been proposed. Some are compatible with inverted index
structures so they can be readily implemented in existing information retrieval
systems. Examples are k-means clustering, product quantization (and its
relatives), and locality sensitive hashing, where the centroids or buckets can
be indexed. A method that is not compatible with inverted indexes is
HNSW (hierarchical navigable small world).
HNSW is based on graph structures, is efficient, and has an attractive
property where the graph can be incrementally built at runtime. This is in
contrast to most other methods that require offline, batch-oriented index
building.
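
To show what ANN methods approximate, here is the exact (brute-force) version of dense retrieval with placeholder vectors. In the real system the embeddings come from the two BERT encoders, and an HNSW index returns roughly the same top-k without scanning every vector:

```python
import numpy as np

rng = np.random.default_rng(0)
passage_embeddings = rng.standard_normal((10_000, 768)).astype(np.float32)  # placeholders, not real BERT output
question_embedding = rng.standard_normal(768).astype(np.float32)

scores = passage_embeddings @ question_embedding   # dot product against every passage
top_k = np.argsort(-scores)[:10]                   # exact top 10 by inner product
print(top_k, scores[top_k])
```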

Retrieval based on semantic embeddings complements term-based retrieval
well. Semantically similar documents can be recalled even though they don’t
contain the exact same terms. Unlike the bag-of-words approach for term-based
retrieval, word order can provide additional context. Historically, however,
term-based retrieval has outperformed semantic embeddings on question answering
problems, but the DPR paper shows that dense retrieval can be vastly improved
if the encoding has specifically been trained to the task. The DPR paper uses
FAISS with an HNSW index for
similarity search.

The reader

While the retriever component’s job is to produce a set of candidate passages
that hopefully contain the answer to the question, the reader extracts the
passages’ actual answer. This requires some form of natural language
understanding model, and BERT (or other Transformer) models are used. These
models are typically huge and expensive to evaluate, so only a small number of
candidate passages are run through them.

Transformer models take sequences of tokens as input. The tokenization of text
can be done in quite a few different ways to balance vocabulary size and
sequence length. Due to BERT models’ full attention mechanism, evaluation time
increases quadratically with sequence length. So a reasonable balance must be
struck, and BERT-based models use a WordPiece or similar algorithm to split
less common words into subwords.
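
For illustration, this is what WordPiece tokenization looks like with the Hugging Face transformers library. The choice of the bert-base-uncased vocabulary is an assumption here, and the sample application ships its own tokenizer logic:

```python
# pip install transformers
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # assumed vocabulary

question = "Who wrote the oratorio Messiah?"
print(tokenizer.tokenize(question))   # common words stay whole; rarer words split into '##'-prefixed pieces
print(tokenizer.encode(question))     # token ids, with the special [CLS]/[SEP] tokens added
```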

The reader model’s input is the concatenation of the tokens representing the
question, the document’s title, and the passage itself. The model looks up the
embedding representation for each token, and through a series of layers,
produces a new representation for each token. These representations can then be
used for different tasks. For question-answering, an additional layer is added
to compute the three necessary outputs: the relevance of the passage to the
question, and the start and end indexes of the answer.
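
A sketch of how that input sequence can be assembled from token ids (101 and 102 are the conventional BERT [CLS] and [SEP] ids; treat the exact layout as an assumption, the authoritative version is in the sample application):

```python
def reader_input(question_ids, title_ids, passage_ids, cls_id=101, sep_id=102):
    # [CLS] question [SEP] title [SEP] passage [SEP], as described above
    return ([cls_id] + question_ids + [sep_id]
            + title_ids + [sep_id]
            + passage_ids + [sep_id])
```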

BERT for question answering

To extract the final answer, the passage that produced the largest relevance
score is used. The two other outputs of the model are probabilities for each
token of being a start token and an end token. The final answer is chosen by
finding the span with the largest sum of start probability and end probability.
This results in a sequence of tokens, which must be converted to words by the
tokenizer before returning. The DPR paper uses a BERT-based
model to
output span predictions.
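
A minimal sketch of that span-selection step, assuming we already have per-token start and end scores from the reader; the maximum answer length cap is an added assumption, not something the text above prescribes:

```python
import numpy as np

def best_span(start_scores, end_scores, max_answer_length=10):
    """Return the (start, end) indexes maximizing start score + end score,
    with end >= start and a bounded span length."""
    best, best_score = (0, 0), float("-inf")
    for start in range(len(start_scores)):
        for end in range(start, min(start + max_answer_length, len(end_scores))):
            score = start_scores[start] + end_scores[end]
            if score > best_score:
                best_score, best = score, (start, end)
    return best

tokens = ["in", "17", "##41", "handel", "wrote", "messiah"]   # hypothetical passage tokens
start_scores = np.array([0.1, 2.3, 0.2, 0.1, 0.0, 0.1])       # hypothetical reader outputs
end_scores   = np.array([0.0, 0.1, 2.1, 0.2, 0.1, 0.3])
start, end = best_span(start_scores, end_scores)
print(tokens[start:end + 1])   # ['17', '##41'], which the tokenizer turns back into "1741"
```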

Putting all this together

The retrieval-based question-answering system as described above, capable of
both term- and embedding-based retrieval, requires at least the following
components:

  • A BM25 information retrieval system storing the 21 million Wikipedia text passages.
  • An efficient vector similarity search system storing the passage embedding vectors.
  • A model serving system for the three different BERT-based models: query encoder, document encoder, and reader model.
  • A BERT-based tokenizer.
  • Middleware to glue this all together.

The tokenizer generates the token sequence for the text. These tokens are
stored for use in the reader model. They are also used by the document
BERT-based encoder model to create the embedding vector for dense retrieval. The
text and embedding vectors need to be indexed for fast retrieval.

A similar process is followed for each query. The tokenizer produces a token
sequence used to generate an embedding vector in the query BERT-based encoder.
The first stage, retrieval, is done using either term-based retrieval or
embedding-based retrieval. The top-N passages are passed to the reader model
and ranked accordingly. The best passage is analyzed to extract the best span
containing the answer.

This is a non-trivial list of services that need to be set up to implement a
question-answering system. In the next section we show how to implement all
this as a single Vespa application.

Reproducing the baseline on Vespa

Most of the components mentioned above have become core features in Vespa. In
the following we’ll present an overview of setting this up in Vespa. The
details can be seen in the companion sample
application.

Schema

When creating an application with Vespa, one typically starts with a document
schema.