Published in 工人智慧 · Review: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Introduction to Vision Transformer) · Aug 29, 2022
Published in 工人智慧 · Notes of Clean Code from Uncle Bob (an incomplete set of quick notes on Clean Code by Uncle Bob; meant to be read alongside the videos) · Dec 31, 2021
Published in 工人智慧 · Speed up Python numerical computation 660,000 times with Numba (Python plays a central role in modern scientific computing, especially in Machine Learning, whose core is a long chain of matrix operations and optimization theory; but constrained by its interpreter's GIL and dynamic typing, its performance on medium-to-large computations has long been criticized, which is why NumPy emerged, mostly using C…) · Aug 30, 2021
Published in 工人智慧 · Review: Meta Pseudo Labels — Series 3 of 3 (the last article in Google's self-training series; the first article proposed a new teacher-student training framework: after adding appropriate noise, iterative training keeps pushing accuracy higher) · Jun 23, 2021
Published in 工人智慧 · Review: Rethinking Pre-training and Self-training — Series 2 of 3 (Pre-training has long been a favorite technique in Computer Vision; it is generally believed to adapt very quickly to different datasets and tasks. Early on it was even thought to quickly match the accuracy of train-from-scratch under any conditions, and even that under the same iteration…) · Feb 27, 2021
Published in 工人智慧 · Review: Self-training with Noisy Student improves ImageNet classification — Series 1 of 3 (Introduction to noisy student) · Feb 15, 2021
Published in 工人智慧 · How does Batch Normalization REALLY Work? (It's not about Internal Covariate Shift) · How Does Batch Normalization Help Optimization? · Nov 16, 2020
Published in 工人智慧 · Review: Attention is all you need (Introduction to Self-Attention and Transformer) · Oct 9, 2020