Link, password: x1xO
Link, password: BlZa
Report 1: Google GFS
Report 2: HDFS
Report 3: NoSQL: IBM, Oracle, Memcached
Report 4: BigTable ("Bigtable: A Distributed Storage System for Structured Data")
Report 5: Survey of Distributed File System Design Choices; Ceph
Report 6: RAID
Report 7: File I/O APIs
Report 8: Zero copy: "Efficient Data Transfer Through Zero Copy"; "Design and Implementation of Zero-Copy for Linux"
Report 9: MapReduce ("MapReduce: Simplified Data Processing on Large Clusters"), lab exercise
Report 10: Perfect hashing
Report 11: Linear-time sorting: primary material, backup material
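Report 11's topic, linear-time sorting, can be illustrated with counting sort, a standard example of sorting below the O(n log n) comparison bound. This is a minimal sketch for seminar discussion; the function name and interface are illustrative, not taken from the course materials.

```python
def counting_sort(arr, max_value):
    """Sort non-negative integers in O(n + k) time, where k = max_value.

    Comparison-based sorts are bounded below by O(n log n); counting
    sort beats that bound by exploiting a known, bounded key range
    instead of comparing elements.
    """
    counts = [0] * (max_value + 1)
    for x in arr:
        counts[x] += 1          # tally occurrences of each key
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)  # emit each key, in order, c times
    return out
```

Radix sort and bucket sort achieve linear time under similar assumptions (bounded digit count, uniformly distributed keys) and are the usual companions to this topic.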
- Advanced Big Data Technologies
Report 12: Machine learning techniques; course: Introduction to Machine Learning
Report 13: ML for Big Data
Report 14: Papers, course, introduction to deep learning, runtime environment setup
Report 15: "Deep Residual Learning for Image Recognition"
Report 16: "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation"
Report 17: Transfer learning: "A Comprehensive Survey on Transfer Learning"
Report 18: GAN slides, introduction to GANs, "Generative Adversarial Networks"
Report 19: Foundations of large deep models
Report 20: Transformer: "Attention Is All You Need"
Report 21: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
Report 22: Diffusion models: "Diffusion Models Beat GANs on Image Synthesis"; "Diffusion Models: A Comprehensive Survey of Methods and Applications"
Report 23: Origins: "Language Models are Few-Shot Learners"
Report 24: Evolution (question-answering systems): "LaMDA: Language Models for Dialog Applications"; "WebGPT: Browser-assisted question-answering with human feedback"; "Improving alignment of dialogue agents via targeted human judgements"; "Improving Language Models by Retrieving from Trillions of Tokens"
Report 25: Current state: "Scaling Language Models: Methods, Analysis & Insights from Training Gopher"; "PaLM: Scaling Language Modeling with Pathways"
SenseTime open course
Report 26: Origins: ViT (Vision Transformer), Swin Transformer, VMODE
Report 27: Current state: CoAtNet, CoCa
Report 28: General-purpose model architectures: SenseTime INTERN, Baidu Wenxin UFO 2.0, Huawei Pangu CV large model
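The papers in Reports 20, 21, and 26 all build on the same core operation, scaled dot-product attention from "Attention Is All You Need". A minimal NumPy sketch of that operation (function name and shapes are illustrative, not from the course materials):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k) queries, K: (n_k, d_k) keys, V: (n_k, d_v) values.
    Returns (n_q, d_v): each output row is a weighted sum of value rows.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V
```

The sqrt(d_k) scaling keeps the dot products from saturating the softmax as the key dimension grows; BERT, ViT, and Swin differ mainly in what the tokens are (WordPieces vs. image patches vs. shifted windows), not in this computation.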
Big Data Fundamentals
Massive Cross-Media Analysis
LyX: http://www.lyx.org