Recent studies have demonstrated that backdoor attacks pose a significant security threat to federated learning. Existing defense methods mainly focus on detecting or eliminating backdoor patterns after the model has been backdoored. However, these methods either degrade model performance or rely heavily on impractical assumptions, such as the availability of labeled clean data, and thus exhibit limited effectiveness in federated learning. To this end, we propose FLPurifier, a novel backdoor defense method for federated learning that purifies possible backdoor attributes before federated aggregation. Specifically, FLPurifier splits a complete model into a feature extractor and a classifier, where the extractor is trained in a decoupled contrastive manner to break the strong correlation between trigger features and the target label. Compared with existing backdoor mitigation methods, FLPurifier does not rely on impractical assumptions, since it purifies the backdoor attributes before federated aggregation.
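To make the split-and-decouple idea concrete, here is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the authors' released code: the names SplitModel and supervised_contrastive_loss are hypothetical, and the loss shown is a standard supervised contrastive formulation standing in for the paper's decoupled objective. The model is divided into an extractor and a classifier, and the extractor is fitted so that features cluster by semantic class, weakening any shortcut correlation between a trigger pattern and the target label.

```python
# Hypothetical sketch of the split-model, decoupled-contrastive idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitModel(nn.Module):
    """A model split into a feature extractor and a classifier head."""
    def __init__(self, extractor: nn.Module, classifier: nn.Module):
        super().__init__()
        self.extractor = extractor    # e.g., a CNN backbone
        self.classifier = classifier  # e.g., a linear layer

    def forward(self, x):
        return self.classifier(self.extractor(x))

def supervised_contrastive_loss(features, labels, temperature=0.5):
    """Pull same-label features together and push different-label features
    apart, so a localized trigger cannot dominate its target class."""
    z = F.normalize(features, dim=1)                   # (N, d) unit vectors
    sim = z @ z.t() / temperature                      # pairwise similarity
    sim.fill_diagonal_(float('-inf'))                  # ignore self-pairs
    pos = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-label mask
    pos.fill_diagonal_(False)
    log_prob = F.log_softmax(sim, dim=1)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

# Decoupled local training: fit the extractor with the contrastive loss
# first, then fit the classifier on detached features with cross-entropy,
# before sending the purified update to the server for aggregation.
```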
Researchers from the UK demonstrate the efficacy of self-supervised learning for human activity recognition using a large wearable-sensor dataset. By addressing convergence issues and evaluating multi-task self-supervision, they produce robust models with superior representation quality across diverse populations and activity types.
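Multi-task self-supervision of this kind is typically built from signal-level pretext tasks. Below is a hedged sketch in PyTorch/NumPy; pretext tasks such as arrow-of-time and segment permutation are common in the wearables literature, but the exact tasks and all names here are assumptions rather than the study's actual code. A shared encoder feeds one binary head per task, and training minimizes the summed cross-entropy of "was this transform applied?" predictions.

```python
# Hypothetical sketch: multi-task self-supervision on accelerometer windows.
import numpy as np
import torch
import torch.nn as nn

def reverse_time(window):
    """Arrow-of-time pretext: flip the window along the time axis."""
    return np.ascontiguousarray(window[::-1])

def permute_segments(window, n_segments=4):
    """Permutation pretext: split the window into segments and shuffle them."""
    segments = np.array_split(window, n_segments)
    np.random.shuffle(segments)
    return np.concatenate(segments)

class MultiTaskSSL(nn.Module):
    """Shared encoder with one binary classification head per pretext task."""
    def __init__(self, encoder: nn.Module, feat_dim: int, n_tasks: int = 2):
        super().__init__()
        self.encoder = encoder
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 2) for _ in range(n_tasks))

    def forward(self, x):
        h = self.encoder(x)                      # (N, feat_dim)
        return [head(h) for head in self.heads]  # one logit pair per task

# Training sketch: for each batch, randomly apply each transform, record the
# 0/1 labels, and sum nn.functional.cross_entropy over the task heads.
```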
A Budding Computer Scientist, Shreyas Fadnavis, Making Substantial Contributions in the Field of Medical Imaging (thehindubusinessline.com)
New MIT studies support the idea that the brain uses a process similar to a machine-learning approach known as “self-supervised learning.” This type of machine learning allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.
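The "similarities and differences" objective can be made concrete with a SimCLR-style contrastive loss, a standard label-free formulation used here purely for illustration (it is an assumption, not the specific model from the MIT studies): two augmented views of the same scene should land close together in embedding space, while views of different scenes are pushed apart, with no labels anywhere.

```python
# Hypothetical sketch of a label-free contrastive objective (SimCLR-style).
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, d) embeddings of two augmented views of the same N scenes.
    Each sample's positive is its other view; all other samples are negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2N, d) unit vectors
    sim = z @ z.t() / temperature                # (2N, 2N) similarity matrix
    sim.fill_diagonal_(float('-inf'))            # never match a view to itself
    n = z1.shape[0]
    # Row i's correct "class" is the index of its paired augmented view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)
```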