

IIIT Hyderabad hosts BDA 2022, an international conference on Big Data Analytics

Hyderabad (Telangana) [India], December 20 (ANI/PRNewswire): IIIT Hyderabad is hosting the 10th International Conference on Big Data Analytics (BDA 2022) at its campus in Gachibowli from 19-22 December 2022. The conference is an international forum for researchers and industry practitioners to share original research results, practical experiences, and perspectives on big data, covering storage models, data access, computing paradigms, analytics, information sharing and privacy, the redesign of mining algorithms, open issues, and future research trends. It includes four workshops (on Data Challenges in Assessing (Urban & Regional) Air Quality, Big Data Analytics using HPCC Systems, Data Science for Justice Delivery in India, and Universal Acceptance and Email Address Internationalization) and four keynote talks, by Y Narahari, Indian Institute of Science, Bangalore; Sanjay Madria, Missouri University of Science and Technology, USA; Raj Sharman, University at Buffalo ...

Contrastive Representation Learning

The goal of contrastive representation learning is to learn an embedding space in which similar sample pairs stay close to each other while dissimilar ones are far apart. Contrastive learning can be applied in both supervised and unsupervised settings. When working with unsupervised data, contrastive learning is one of the most powerful approaches in self-supervised learning.

Contrastive Training Objectives

In early versions of loss functions for contrastive learning, only one positive and one negative sample are involved.
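As a minimal sketch of that early, pairwise formulation (not taken from the excerpted post; the function name, margin value, and tensor shapes are illustrative assumptions), the loss below pulls a positive pair together and pushes a negative pair apart up to a margin:

import torch
import torch.nn.functional as F

def contrastive_loss(x1: torch.Tensor,
                     x2: torch.Tensor,
                     is_positive: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """Classic pairwise contrastive loss.

    x1, x2      : (batch, dim) embeddings of the two samples in each pair
    is_positive : (batch,) 1.0 if the pair is similar, 0.0 if dissimilar
    """
    d = F.pairwise_distance(x1, x2)                            # Euclidean distance per pair
    pos_term = is_positive * d.pow(2)                          # pull similar pairs together
    neg_term = (1.0 - is_positive) * F.relu(margin - d).pow(2) # push dissimilar pairs beyond the margin
    return (pos_term + neg_term).mean()

# Toy usage: first pair labelled positive, second negative
emb_a = torch.randn(2, 128)
emb_b = torch.randn(2, 128)
labels = torch.tensor([1.0, 0.0])
print(contrastive_loss(emb_a, emb_b, labels))

Later objectives (e.g. InfoNCE-style losses) generalize this by contrasting each anchor against many negatives at once, but the one-positive/one-negative form above is the version the excerpt refers to.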

Fuzzy contrastive learning for online behavior analysis by Jie Yang, Gang Huang et al

With the prevalence of smart devices, billions of people access digital resources in their daily lives, and online user-behavior modeling has accordingly been actively researched in recent years. However, due to data uncertainty (sparseness and skewness), traditional techniques suffer from drawbacks such as reliance on labor-intensive expertise or prior knowledge, a lack of interpretability and transparency, and high computational cost. As a step toward bridging this gap, this paper proposes a fuzzy-set-based contrastive learning algorithm. The general idea is an end-to-end learning framework that optimizes representations from contrastive samples. The proposed algorithm is characterized by three main modules: data augmentation, a fuzzy encoder, and semi-supervised optimization. More precisely, data augmentation is used to produce contrastive (positive and negative) samples based on anchor ones. The fuzzy encoder is introduced to fuzzify (or encode) lat...
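A hedged sketch of the pipeline shape the abstract describes (the abstract is truncated, so the jitter-style augmentation, Gaussian membership fuzzification, and cosine scoring below are illustrative assumptions, not the paper's actual design):

import numpy as np

def augment(anchor: np.ndarray, noise_scale: float = 0.05) -> np.ndarray:
    """Produce a positive view of the anchor by light perturbation (assumed augmentation)."""
    return anchor + np.random.normal(0.0, noise_scale, size=anchor.shape)

def fuzzy_encode(x: np.ndarray, centers: np.ndarray, width: float = 1.0) -> np.ndarray:
    """Map a behaviour sequence to fuzzy membership degrees via Gaussian
    membership functions centred at `centers`, pooled over the sequence."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)   # distance to each fuzzy-set centre
    return np.exp(-(d ** 2) / (2 * width ** 2)).mean(axis=0)           # pooled membership vector

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy behaviour sequences: 10 events with 4 features each; 3 fuzzy-set centres
rng = np.random.default_rng(0)
anchor   = rng.normal(size=(10, 4))
negative = rng.normal(size=(10, 4))   # e.g. another user's behaviour sequence
centers  = rng.normal(size=(3, 4))

z_anchor = fuzzy_encode(anchor, centers)
z_pos    = fuzzy_encode(augment(anchor), centers)
z_neg    = fuzzy_encode(negative, centers)

# A contrastive objective would train the encoder so the anchor scores
# closer to its augmented positive view than to the negative sample
print("sim(anchor, positive):", cosine(z_anchor, z_pos))
print("sim(anchor, negative):", cosine(z_anchor, z_neg))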
