Challenges of index exchange for search engine interoperability

by Djoerd Hiemstra, Gijs Hendriksen, Chris Kamphuis, and Arjen de Vries

We discuss tokenization challenges that arise when sharing inverted file indexes to support interoperability between search engines, in particular: how can queries be tokenized such that the resulting tokens are consistent with the tokens in the shared index? We discuss various solutions and present preliminary experimental results that show when the problem occurs and how it can be mitigated by standardizing on a simple, generic tokenizer for all shared indexes.
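To make the mismatch concrete, here is a small illustration (our own sketch, not code from the paper): two engines tokenize the same text differently, so query tokens produced by one tokenizer fail to match postings produced by the other.

```python
import re

def generic_tokenize(text):
    """A simple, generic tokenizer: lowercase and split on
    any run of non-alphanumeric characters."""
    return [t for t in re.split(r"[^0-9a-z]+", text.lower()) if t]

def whitespace_tokenize(text):
    """A tokenizer another engine might use: lowercase and
    split on whitespace only, keeping punctuation attached."""
    return text.lower().split()

doc = "State-of-the-art search"
# The two tokenizers disagree on the same document text:
# generic_tokenize(doc)    -> ['state', 'of', 'the', 'art', 'search']
# whitespace_tokenize(doc) -> ['state-of-the-art', 'search']
# A query tokenized one way will not match an index built the other way:
query_tokens = whitespace_tokenize("state-of-the-art")
index_tokens = set(generic_tokenize(doc))
print(any(t in index_tokens for t in query_tokens))  # False: no posting matches
```

Standardizing both sides on the generic tokenizer, as the paper suggests, makes the query token stream line up with the shared index vocabulary by construction.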

To be presented at the 5th International Open Search Symposium #OSSYM2023 at CERN, Geneva, Switzerland on 4-6 October 2023

[download pdf]

Impact and development of an Open Web Index for open web search

by Michael Granitzer, Stefan Voigt, Noor Afshan Fathima, Martin Golasowski, Christian Guetl, Tobias Hecking, Gijs Hendriksen, Djoerd Hiemstra, Jan Martinovič, Jelena Mitrović, Izidor Mlakar, Stavros Moiras, Alexander Nussbaumer, Per Öster, Martin Potthast, Marjana Senčar Srdič, Sharikadze Megi, Kateřina Slaninová, Benno Stein, Arjen P. de Vries, Vít Vondrák, Andreas Wagner, Saber Zerhoudi

Web search is a crucial technology for the digital economy. Dominated by a few gatekeepers focused on commercial success, however, web publishers have to optimize their content for these gatekeepers, resulting in a closed ecosystem of search engines as well as the risk of publishers sacrificing quality. To encourage an open search ecosystem and offer users genuine choice among alternative search engines, we propose the development of an Open Web Index (OWI). We outline six core principles for developing and maintaining an open index, based on open data principles, legal compliance, and collaborative technology development. The combination of an open index with what we call declarative search engines will facilitate the development of vertical search engines and innovative web data products (including, e.g., large language models), enabling a fair and open information space. This framework underpins the EU-funded project OpenWebSearch.EU, marking the first step towards realizing an Open Web Index.

Published in the Journal of the Association for Information Science and Technology (JASIST)

[download pdf]

Artificial intelligence: there are problems we need to address right now, the rest is science fiction

by Frederik Zuiderveen Borgesius, Marvin van Bekkum, and Djoerd Hiemstra

Everywhere you read warnings of ‘existential risks’ from artificial intelligence (AI). Some even warn that AI could wipe out humanity. The tech company OpenAI is predicting the emergence of artificial general intelligence and superintelligence, and of future AI systems that will be more intelligent than humans. Some policymakers also fear this kind of scenario.

But things are not moving that fast. ‘Artificial general intelligence’ means an AI system that, like humans, can perform a variety of different tasks. There is no such general AI at present, and even if it does come one day, creating it will take a very long time.

Many AI systems are useful. Search engines, for example, are indispensable to internet users, and are a good example of specific AI. A specific AI system can perform one task well, such as pointing people to the right website. Modern spam filters, translation software, and speech recognition software also work well thanks to specific AI.

But these are still examples of specific AI – far removed from general AI, let alone ‘superintelligence’. Humans can learn new things. AI systems cannot. What computer scientists are getting better and better at is creating general large language models that can be used for all kinds of specific AI. The same language model can be used for translation software, spam filters, and search engines. Does this mean that such a language model has general intelligence? Could it develop consciousness? Absolutely not! There is therefore no real risk of a science fiction scenario in which an AI system wipes out humanity.

This focus on existential risks distracts us from the real risks at hand, which require our attention right now. Little remains of our privacy, for example. AI systems are trained using data, lots of data. That is why AI developers, mostly big tech companies, are collecting massive amounts of data. For instance, OpenAI presumably gobbled up large sections of the web to develop ChatGPT, including personal data. Incidentally, OpenAI is quite secretive about what data it uses.

Secondly, the use of AI can lead to unfair discrimination. For example, many facial recognition systems do not work well for people with darker skin tones. In the US, the police have repeatedly arrested the wrong person because a facial recognition system wrongly identified dark-skinned men as criminals.

Thirdly, AI systems consume incredible amounts of electricity. Training and using language models like GPT require a lot of computing power from large data centres, which guzzle energy. Finally, the power of big tech companies is only growing with the use of AI systems. Developing AI systems costs a lot of money, so as the use of AI increases, we become even more dependent on big tech companies. These kinds of risks are already here now. Let’s focus on that, and not let ourselves be distracted by the ghost of sentient AI.

Published by Radboud Recharge.

SIGIR 2023 live at Radboud

On 24, 25 and 26 July we will follow the 46th International ACM SIGIR Conference online from lecture hall 0.28 in the Mercator building. We will start each morning at 8:30h with the live stream from Taipei, Taiwan, and watch recorded sessions and keynotes in the afternoon. There will be presentations from well-known Radboud researchers such as Harrie Oosterhuis, Chris Kamphuis and Negin Ghasemi! 😄 

More information at:

Follow us on-line: #SIGIR2023

Fausto de Lang graduates on tokenization for information retrieval

An empirical study of the effect of vocabulary size for various tokenization strategies in passage retrieval performance.

by Fausto de Lang

Many interactions between the fields of lexical retrieval and large language models remain underexplored; in particular, there is little research into the use of advanced language-model tokenizers in combination with classical information retrieval mechanisms. This research looks into the effect of vocabulary size for various tokenization strategies on passage retrieval performance. It also provides an overview of the impact of the WordPiece, Byte-Pair Encoding, and Unigram tokenization techniques on the MSMARCO passage retrieval task. These techniques are explored both in re-trained tokenizers and in tokenizers trained from scratch. Based on three metrics, this research finds that WordPiece is the best-performing tokenization technique on the MSMARCO passage retrieval task. It also finds that a training vocabulary size of around 10,000 tokens works best with regard to Recall, while around 320,000 tokens yields the optimal Mean Reciprocal Rank and Normalized Discounted Cumulative Gain scores. Most importantly, the optimum at a relatively small vocabulary size suggests that shorter subwords can benefit the indexing and searching process (up to a certain point). This is a meaningful result, since it means that many applications where (re-)trained tokenizers are used for information retrieval might be improved by tweaking the vocabulary size during training. This research has mainly focused on building a bridge between (re-)trainable tokenizers and information retrieval software, while reporting on interesting tunable parameters. Finally, this research recommends that researchers build their own tokenizer from scratch, since doing so forces one to look at the configuration of the underlying processing steps.
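As an illustration of how such tokenizers are trained (our own minimal sketch, not the thesis code), the core of Byte-Pair Encoding is a merge loop in which each merge adds one new symbol to the vocabulary, so the number of merges is exactly the vocabulary-size knob studied above:

```python
from collections import Counter

def bpe_merges(corpus_words, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent
    symbol pair. Each merge grows the vocabulary by one symbol."""
    # Each word starts as a tuple of characters.
    words = Counter(tuple(w) for w in corpus_words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word with the chosen pair fused into one symbol.
        merged = {}
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        words = merged
    return merges

corpus = ["lower", "lowest", "low", "low", "newer", "newest"]
print(bpe_merges(corpus, 3))  # frequent character pairs fuse first
```

With only three merges the learned subwords already track the frequent stem "low"; letting the loop run longer trades shorter subwords for a larger vocabulary, which is the trade-off the thesis measures.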

Defended on 27 June 2023

Git repository at:

UNFair: Search Engine Manipulation, Undetectable by Amortized Inequity

by Tim de Jonge and Djoerd Hiemstra

Modern society increasingly relies on Information Retrieval systems to answer various information needs. Since this impacts society in many ways, there has been a great deal of work to ensure the fairness of these systems, and to prevent societal harms. There is a prevalent risk of failing to model the entire system, where nefarious actors can produce harm outside the scope of fairness metrics. We demonstrate the practical possibility of this risk through UNFair, a ranking system that achieves performance and measured fairness competitive with current state-of-the-art, while simultaneously being manipulative in setup. UNFair demonstrates how adhering to a fairness metric, Amortized Equity, can be insufficient to prevent Search Engine Manipulation. This possibility of manipulation bypassing a fairness metric discourages imposing a fairness metric ahead of time, and motivates instead a more holistic approach to fairness assessments.
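For intuition, amortized fairness metrics in this family compare the exposure a document accumulates over a whole sequence of rankings with its accumulated relevance. The sketch below is our own illustration with an assumed logarithmic position-bias model, not the paper's code or its exact metric definition:

```python
import math

def exposure(rank):
    """Assumed position-bias model: log-discounted exposure."""
    return 1.0 / math.log2(rank + 1)

def amortized_inequity(rankings, relevance):
    """L1 distance between each document's share of accumulated
    exposure and its share of accumulated relevance, measured over
    a sequence of rankings (hence 'amortized')."""
    acc_exp = {d: 0.0 for d in relevance}
    acc_rel = {d: 0.0 for d in relevance}
    for ranking in rankings:
        for pos, doc in enumerate(ranking, start=1):
            acc_exp[doc] += exposure(pos)
            acc_rel[doc] += relevance[doc]
    total_exp = sum(acc_exp.values())
    total_rel = sum(acc_rel.values())
    return sum(abs(acc_exp[d] / total_exp - acc_rel[d] / total_rel)
               for d in relevance)

# Alternating two equally relevant documents equalizes exposure over time:
rel = {"a": 1.0, "b": 1.0}
print(amortized_inequity([["a", "b"], ["b", "a"]], rel))  # 0.0
```

UNFair's point is precisely that a ranker can drive such a score to zero while still being manipulative in ways the metric does not capture.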

To be presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023) on 12-15 June in Chicago, USA.

[download pdf]

Cross-Market Product-Related Question Answering

by Negin Ghasemi, Mohammad Aliannejadi, Hamed Bonab, Evangelos Kanoulas, Arjen de Vries, James Allan, and Djoerd Hiemstra

Online shops such as Amazon, eBay, and Etsy continue to expand their presence in multiple countries, creating new resource-scarce marketplaces with thousands of items. We consider a marketplace to be resource-scarce when only limited user-generated data is available about the products (e.g., ratings, reviews, and product-related questions). In such a marketplace, an information retrieval system is less likely to help users find answers to their questions about the products. As a result, questions posted online may go unanswered for extended periods. This study investigates the impact of using available data in a resource-rich marketplace to answer new questions in a resource-scarce marketplace, a new problem we call cross-market question answering. To study this problem’s potential impact, we collect and annotate a new dataset, XMarket-QA, from Amazon’s UK (resource-scarce) and US (resource-rich) local marketplaces. We conduct a data analysis to understand the scope of the cross-market question-answering task. This analysis shows a temporal gap of almost one year between the first question answered in the UK marketplace and the US marketplace. Also, it shows that the first question about a product is posted in the UK marketplace only when 28 questions, on average, have already been answered about the same product in the US marketplace. Human annotations demonstrate that, on average, 65% of the questions in the UK marketplace can be answered within the US marketplace, supporting the concept of cross-market question answering. Inspired by these findings, we develop a new method, CMJim, which utilizes product similarities across marketplaces in the training phase for retrieving answers from the resource-rich marketplace that can be used to answer a question in the resource-scarce marketplace. Our evaluations show CMJim’s significant improvement compared to competitive baselines.
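To make the task concrete, a naive baseline (our own illustration; CMJim itself is more sophisticated and exploits product similarities across marketplaces during training) would retrieve the US-market answer whose question is lexically most similar to the new UK question:

```python
def tokenize(text):
    """Lowercased bag of words."""
    return set(text.lower().split())

def best_cross_market_answer(uk_question, us_qa_pairs):
    """Return the answer of the US question with the highest
    Jaccard word overlap with the UK question (naive baseline)."""
    q = tokenize(uk_question)
    def score(pair):
        us_q = tokenize(pair[0])
        return len(q & us_q) / len(q | us_q)
    return max(us_qa_pairs, key=score)[1]

# Hypothetical resource-rich (US) question-answer pairs:
us_data = [
    ("does this kettle work on 110 volts", "Yes, it is dual voltage."),
    ("what colour is the lid", "The lid is black."),
]
print(best_cross_market_answer("will the kettle work with 110 volts", us_data))
# -> "Yes, it is dual voltage."
```

The paper's finding that 65% of UK questions are answerable from the US marketplace is what makes even this kind of cross-market transfer worth attempting.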

To be presented at the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023) on July 23-27 in Taipei, Taiwan.

[download pdf]

Towards a Generic Model for Classifying Software into Correctness Levels and its Application to SQL

by Benard Wanjiru, Patrick van Bommel, and Djoerd Hiemstra

Automated grading systems can save a lot of time when carrying out the grading of software exercises. In this paper, we present our ongoing work on a generic model for generating software correctness levels. These correctness levels enable partial grading of students' software exercises. The generic model can be used as a foundation for the correctness of SQL queries and can be generalized to different programming languages.
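As a toy illustration of what correctness levels for SQL might look like (our own sketch using SQLite; the paper's generic model is more elaborate), a student query can be graded against a reference solution at decreasing levels of strictness:

```python
import sqlite3

def correctness_level(student_sql, reference_sql, setup_sql):
    """Grade a student's SQL query by comparing its result to a
    reference solution at decreasing levels of strictness."""
    con = sqlite3.connect(":memory:")
    con.executescript(setup_sql)
    try:
        student = con.execute(student_sql).fetchall()
    except sqlite3.Error:
        return 0                      # level 0: query does not run
    reference = con.execute(reference_sql).fetchall()
    if student == reference:
        return 3                      # level 3: exact rows, exact order
    if sorted(student) == sorted(reference):
        return 2                      # level 2: same rows, wrong order
    if set(student) == set(reference):
        return 1                      # level 1: same set, duplicates differ
    return 0                          # level 0: wrong result

setup = "CREATE TABLE t(x); INSERT INTO t VALUES (1),(2),(3);"
reference = "SELECT x FROM t ORDER BY x DESC"
print(correctness_level("SELECT x FROM t", reference, setup))  # 2
```

Each level earns a partial grade, so a student whose query returns the right rows in the wrong order is distinguished from one whose query fails outright.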

To be presented at the SEENG 2023 Workshop on Software Engineering for the Next Generation of the 45th International Conference on Software Engineering on Tuesday 16 May in Melbourne, Australia.

[download pdf]