Teaching

Current courses

Open projects in OpenWebSearch.eu

Please contact me for open Research Internships, BSc thesis and MSc thesis projects.

  • Word pieces for information retrieval: Study optimal word-piece (subword) algorithms for information retrieval: can we improve search by moving beyond word boundaries?
  • Create a query frequency list from (Google) autocompletions: Assuming that autocompletions are user queries ranked by their frequencies, we may combine many such small sorted lists. Challenge: integrate these lists into one global order.
  • Federated crawling for the Open Search Foundation: There has been a lot of work on distributed crawling, where there is full cooperation between (geographically) distributed crawlers and all nodes use the same central crawling policy. In federated crawling there is no such central policy, each participant decides what to crawl. A node may reject crawling a url; there may be overlap in pages crawled by different nodes; a node may follow some nodes or block other nodes. What would be the effects of such requirements? In this project, you implement a federated crawler or simulate a crawler using a large existing web crawl such as CommonCrawl.
  • “Fat head”, “best of the web” web index: Create a web index that contains the “essential” web, the part of the web that answers the most common queries, using query statistics from SEO companies like Ahrefs. What is the trade-off between index size and query coverage? What is the smallest web index that answers most queries? What is the smallest index that answers the top 10% most frequent queries? And so on.
  • The golden web evaluation set: Using the results from the frequency-list project or the query statistics from the project above, download search engine result pages from popular search engines to create a dataset for evaluating new search engines, by checking whether they retrieve the same top 10 as Google/Bing/Yandex/Baidu.
  • Web page quality ranker: Inspired by the Waterloo Spam Rankings, create a classifier/ranker that assigns a quality score to any web page.
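The autocompletion project above hinges on merging many small ranked lists into one global order. Since autocompletion APIs return ranks but not absolute frequencies, one simple baseline is Borda-style rank aggregation. The sketch below is only illustrative: the function name and the positional scoring scheme are my own choices, not part of the project description.

```python
from collections import defaultdict

def aggregate_rankings(ranked_lists):
    """Borda-style rank aggregation: combine many small ranked lists
    (e.g. autocompletion suggestions, best suggestion first) into one
    global order. Queries ranked high in many lists score highest."""
    scores = defaultdict(float)
    for lst in ranked_lists:
        n = len(lst)
        for pos, query in enumerate(lst):
            scores[query] += n - pos  # higher rank -> larger score
    return sorted(scores, key=scores.get, reverse=True)

merged = aggregate_rankings([
    ["weather today", "weather tomorrow", "weather radar"],
    ["weather today", "weather radar"],
])
```

Here `merged` starts with "weather today", which tops both input lists; a real project would compare such simple schemes against proper rank-aggregation methods.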
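For the golden-evaluation-set project, the core measurement is how much of a reference engine's top 10 a new engine reproduces. A minimal, order-insensitive overlap@k could look like the following; the metric choice and all names are assumptions (rank-aware measures such as RBO would also fit).

```python
def overlap_at_k(results_a, results_b, k=10):
    """Fraction of the top-k results of engine A that also appear
    in the top-k of engine B (order-insensitive overlap@k)."""
    top_a, top_b = set(results_a[:k]), set(results_b[:k])
    return len(top_a & top_b) / k

# Hypothetical result lists: the candidate engine reproduces
# half of the reference engine's top 10.
reference = [f"url{i}" for i in range(10)]
candidate = [f"url{i}" for i in range(5)] + [f"other{i}" for i in range(5)]
score = overlap_at_k(reference, candidate)  # -> 0.5
```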

Other projects (some of these are already taken)

  • Comparison of fairness measures for search: Learning-to-rank methods for search engines (machine learning for search) optimize for clicks and may therefore produce biased results, or results that unfairly amplify click-bait and hate speech. In this research, you develop methods for measuring the fairness of results and compare existing methods, for instance on simulated data.
  • A BitTorrent-based distributed file system: Design and evaluate a file system (inspired by, for instance, the Google File System) that uses BitTorrent/WebTorrent to distribute blocks of data over multiple machines, keeping the replication level high enough that no data is lost and the file system remains (eventually) consistent.
  • Federated Search (Data Science/Software Science):
    Research approaches that combine the results from multiple independent, non-cooperative search engines (non-cooperative in the sense that they do not share their index).

    • Ranked federated search for the Clarin Virtual Language Observatory (VLO): Clarin is the European Research Infrastructure for Language Resources and Technology. The project should answer the question: “How to model ranking, and how does it improve the quality (and efficiency) of Clarin’s content search engine?”
    • Federated crawling for the Open Search Foundation: see the description under the OpenWebSearch.eu projects above.
    • Can we use techniques from information retrieval or machine learning to improve Stanford’s open-source federated virtual assistant Almond?
  • Federated Learning (Data Science):
    Research approaches that divide machine learning over multiple independent, private data sources.

    • TAKEN: Federated learning for the Personal Health Train: Develop and evaluate machine learning approaches using data lakes of health care providers. The Personal Health Train provides FAIR data layers in which structured data is provided in a standard way. The data is accessible through federated queries and analyses. Goal: develop a federated machine learning approach using unstructured data, such as clinical notes entered by health practitioners. This project is done at the RUMC.
    • Federated Learning-to-Rank: Learn a (personalized) search engine (re-)ranker that never leaves the user’s device.
  • Federated Social networks (Data Science/Software Science):
    • Ephemeral Social networking (Software Science):
      Based on the W3C standard ActivityPub, design an ephemeral social network (in which most posts are removed after some time) and compare its network/storage/memory/CPU load to that of durable solutions like Mastodon.
    • Secure federated communication (Digital Security):
      Design/adapt an end-to-end encrypted solution for ActivityPub-based social networking: How to handle multiple devices and heterogeneous networks?
    • Transitioning the RU to self-hosted, federated solutions (Information Sciences): Consider, for instance, self-hosted web analytics, social networking, or video streaming: What are the user requirements? What solutions meet these requirements? What are additional benefits (for instance, more autonomy for employees and students)? How to show this with a proof-of-concept (more info).
  • With Nedap Healthcare (Groenlo), Machine Learning and Natural Language Processing:
    Clinical Natural Language Processing / De-identification of medical records.
  • With RUMC, Nedap Healthcare and Leiden University: MSc thesis project on Generating synthetic clinical data for shared Machine Learning tasks.
  • Bias in Machine Learning: evaluating spam filters for bias (see: “Spam filters are efficient and uncontroversial. Until you look at them”)
  • Ad blocker detection detection: How many sites detect ad blockers and ask users to remove them? (per country/category)
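For the fairness-measures project above, a common building block is the exposure each group of documents receives in a ranking under a position-based discount. A minimal sketch follows; the discount 1/log2(rank + 1) follows the usual DCG convention, and the function and grouping interface are my own illustration, not a prescribed method.

```python
import math

def group_exposure(ranking, group_of):
    """Share of exposure each group receives in a ranking, using the
    position-based discount 1 / log2(rank + 1), rank starting at 1.
    A fairness measure can compare these shares to, e.g., the groups'
    shares in the collection or in the relevant documents."""
    exposure = {}
    for rank, item in enumerate(ranking, start=1):
        g = group_of(item)
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

# Hypothetical documents labeled by their first character as the group.
shares = group_exposure(["a1", "b1", "a2"], group_of=lambda doc: doc[0])
```

In this toy ranking, group "a" holds positions 1 and 3 and so receives a larger exposure share than group "b", even though the groups differ by only one document.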
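The central trade-off in the “fat head” index project (index size versus query coverage) can be probed directly from a query-frequency table. The sketch below is hypothetical, assuming per-query volumes are available from an SEO statistics source; the function name and data are my own.

```python
def smallest_index_coverage(query_volumes, target=0.9):
    """Given per-query volumes (query -> frequency), return how many of
    the most frequent queries are needed to cover `target` of the total
    query volume -- the 'fat head' size/coverage trade-off."""
    volumes = sorted(query_volumes.values(), reverse=True)
    total = sum(volumes)
    covered, needed = 0.0, 0
    for v in volumes:
        if covered >= target * total:
            break
        covered += v
        needed += 1
    return needed

# Toy volumes: the two most frequent queries cover 80% of all traffic.
stats = {"facebook": 50, "weather": 30, "python heapq": 15, "rare query": 5}
needed = smallest_index_coverage(stats, target=0.8)  # -> 2
```

Run over a real frequency list, sweeping `target` from 0 to 1 traces the whole coverage curve and shows how quickly the required index shrinks as one gives up the long tail.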

Past courses

Teaching information