Pair Search Logo
Senior and Sole Product Designer · Feb 2024 – Jun 2024

Legal and Policy Search Tools: From Search to Analysis 7x Faster

Search systems for Singapore's parliamentary debates, judgments or legislation rely on simple keyword matching, flooding legal and policy officers with irrelevant results. We built Pair Search to deliver relevant results through AI-powered search while maintaining the human-in-the-loop approach that these officers need for high-stakes work. Try Pair Search here!

Pair Search Hero Image

Problem

Government legal and policy work requires synthesising vast amounts of information. Current search systems yield irrelevant results and provide no analytical support, forcing officers into hours of manual work.

My Role

As the only designer, I led the complete design of Pair Search from research through to shipped product. I also worked closely with engineers to optimise search relevancy and prompt engineering for analysis functions.

Research: Understanding Legal and Policy Research Workflows

Conducted research with 8 policy officers and 5 lawyers to understand their professional search behaviours and pain points with current tools.

  • πŸ” Irrelevant results: Current tools use keyword search that returns too many irrelevant results mixed unpredictably with relevant ones, forcing manual verification of everything
  • πŸ“š Fragmented sources: Single research questions often require searching multiple separate databases (Hansard, Legislation, Judgments), multiplying effort and creating information gaps
  • ⏰ Synthesis bottleneck: Hours spent on repetitive analysis tasks like extracting speaker quotes, building chronological timelines, categorising by legal principles
  • πŸ€– AI tools don't match professional workflows: Existing tools generate opaque summaries, forcing officers to fact-check AI outputs instead of supporting their natural search β†’ verify β†’ synthesise process

Improving Search Quality

The core need was not a better interface, but improved search relevance. I worked closely with our engineer and a Product Operations teammate to improve search quality through:

  1. Test case development: Created evaluation datasets across all 3 sources for relevance testing
  2. Algorithm optimization: Tested different weightings of e5 embeddings, ColBERTv2, and BM25 algorithms, iteratively refining the hybrid scoring formula for optimal precision and recall
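The hybrid scoring approach above can be sketched roughly as a weighted sum of normalised scores from each retriever. This is an illustrative sketch only: the weights shown are placeholders, not the tuned values Pair Search shipped with, and the real system computes BM25, e5, and ColBERTv2 scores from the actual indexes.

```python
def normalise(scores):
    """Min-max normalise a list of raw retriever scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_scores(bm25, dense, colbert, w_bm25=0.5, w_dense=0.3, w_colbert=0.2):
    """Combine per-document scores from BM25, e5 embeddings and ColBERTv2.

    The weights here are hypothetical defaults for illustration; tuning
    them against the evaluation datasets is the iterative step described
    above.
    """
    b, d, c = normalise(bm25), normalise(dense), normalise(colbert)
    return [w_bm25 * bi + w_dense * di + w_colbert * ci
            for bi, di, ci in zip(b, d, c)]
```

Normalising each retriever's scores before summing matters because BM25 and embedding similarities live on very different scales; without it, one signal silently dominates the blend.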

Result: Achieved an average click rank of 3.2, meaning users typically found what they needed within the top three results, a 19-position improvement over existing platforms.
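For clarity, the click-rank metric reported above is the mean 1-based position of the result a user actually clicked. A minimal sketch, assuming a log of hypothetical (query_id, clicked_rank) events:

```python
def average_click_rank(events):
    """Mean 1-based rank of the clicked result across search sessions.

    `events` is an iterable of (query_id, clicked_rank) pairs; lower
    averages mean relevant results surface nearer the top.
    """
    ranks = [rank for _, rank in events]
    return sum(ranks) / len(ranks)

events = [("q1", 2), ("q2", 1), ("q3", 5), ("q4", 3)]
average_click_rank(events)  # -> 2.75
```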

Search Quality Configuration

Interface Design: Familiar Yet Powerful

I designed the interface to feel as familiar as Google search while adding one-click analysis tools. This kept the product search-first while providing convenient AI synthesis for relevant results.

1. Familiar search interface to surface relevant results quickly

Added excerpts and metadata so users could quickly assess relevance. This small upgrade saved significant time compared to existing systems.

Pair Search Interface

2. Quick synthesis tools to solve the synthesis bottleneck

Created one-click analysis tools for common research needs based on each source:

  • Hansard: Speaker view analysis, Policy issue timeline
  • Judgments: Extracting key legal principles, Case summariser
  • Legislation: Key areas regulated, Summarisation

Users could select the most relevant results to be analysed, supporting their workflow of broad search → selection → analysis. This kept officers in the loop during generation, making the analysis more reliable and giving users control to iterate, rather than handing them opaque AI output they could not verify.

3. Smart filters for faster refinement

Since we couldn't perfectly optimise search ranking for each query, we suggested filters based on the query so users could quickly refine their search with minimal clicks. We also created a thorough filter panel for deeper research needs.
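One simple way to implement query-based filter suggestion is to map keywords in the query to filter chips. The mapping below is a hypothetical illustration, not Pair Search's actual rules:

```python
# Hypothetical keyword-to-filter mapping for illustration only.
FILTER_HINTS = {
    "source:Hansard": {"debate", "parliament", "speech", "minister"},
    "source:Judgments": {"case", "judgment", "appeal", "court"},
    "source:Legislation": {"act", "section", "regulation", "statute"},
}

def suggest_filters(query):
    """Return filters whose hint keywords appear in the query."""
    words = set(query.lower().split())
    return [f for f, hints in FILTER_HINTS.items() if words & hints]

suggest_filters("minister speech on housing")  # -> ["source:Hansard"]
```

A rules-based approach like this keeps suggestions predictable and explainable, which suits the human-in-the-loop posture: the user sees why a filter was offered and stays in control of applying it.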

Pair Search Filters

4. Unified search across sources

Created one interface for multiple datasets, with embedded results from other sources to prevent duplicate searches.

Multiple Sources Search

3.2 click rank, vs a click rank of 23 in other official search tools

194,000 searches in total, across our datasets

Key Learnings

Designing for expert users requires deep domain understanding. Legal professionals have sophisticated mental models that AI must complement, not replace. This meant keeping humans in the loop through transparent, controllable tools.

Domain expertise within the team is critical. We couldn't assess search or analysis quality without legal knowledge. Bringing on a legally trained intern and working with a Courts officer to annotate results transformed our ability to iterate meaningfully and catch nuances we would have missed entirely.

Focus on core problems first. Though AI generation features were tempting to build first, users wanted fundamental improvements like finding landmark cases quickly. Staying focused on fast, accurate search rather than getting distracted by flashy AI features kept us aligned with what users actually needed most.

rachodoodles@gmail.com

Growing up, I learnt that a warm bowl of noodles and cut fruits is a language of care. So here's some virtual care from me to you, and thank you for stopping by!

Bowl of noodles with chopsticks and cut fruits