DOI: 10.1145/3460231.3474607

FINN.no Slates Dataset: A new Sequential Dataset Logging Interactions, all Viewed Items and Click Responses/No-Click for Recommender Systems Research

Published: 13 September 2021

Supplemental Material

video_recsys.mp4 (MP4, 4.7 MB)


  • Published in

    RecSys '21: Proceedings of the 15th ACM Conference on Recommender Systems
    September 2021
    883 pages
    ISBN: 9781450384582
    DOI: 10.1145/3460231

    Copyright © 2021 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 13 September 2021

    Qualifiers

    • abstract
    • Research
    • Refereed limited

    Acceptance Rates

    Overall acceptance rate: 254 of 1,295 submissions (20%)

