Zhu Han, Professor, AAAS Fellow, IEEE Fellow
University of Houston, USA
Zhu Han received the B.S. degree in electronic engineering from Tsinghua University in 1997, and the M.S. and Ph.D. degrees in electrical engineering from the University of Maryland, College Park, in 1999 and 2003, respectively. From 2000 to 2002, he was an R&D Engineer at JDSU, Germantown, Maryland. From 2003 to 2006, he was a Research Associate at the University of Maryland. From 2006 to 2008, he was an assistant professor at Boise State University, Idaho. Currently, he is a John and Rebecca Moores Professor in the Electrical and Computer Engineering Department as well as the Computer Science Department at the University of Houston, Texas. His research interests include security, wireless resource allocation and management, wireless communication and networking, game theory, and wireless multimedia. Dr. Han received an NSF CAREER Award in 2010. He has won several IEEE conference best paper awards, the 2011 IEEE Fred W. Ellersick Prize, the 2015 EURASIP Best Paper Award for the Journal on Advances in Signal Processing, the 2016 IEEE Leonard G. Abraham Prize in the field of Communication Systems (Best Paper Award for the IEEE Journal on Selected Areas in Communications), and the 2021 IEEE Kiyo Tomiyasu Award. He has been an IEEE Fellow since 2014 and an AAAS Fellow since 2020, and was an IEEE Distinguished Lecturer from 2015 to 2018. Dr. Han has been among the top 1% of highly cited researchers according to Web of Science since 2017.
Speech Title: Federated Learning in Mobile Edge Computing
Abstract: In recent years, mobile devices have been equipped with increasingly advanced computing capabilities, which opens up countless possibilities for meaningful applications, e.g., in augmented reality, the Internet of Things, and vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues of unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is originally generated. However, conventional edge ML technologies still require personal data to be shared with edge servers. Recently, in light of increasing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train a local ML model required by the server. The end devices then send the local model updates, instead of raw data, to the server for aggregation. FL can serve as an enabling technology in mobile edge networks, since it enables both the collaborative training of an ML model and the use of ML for mobile edge network optimization. However, in a large-scale and complex mobile edge network, FL still faces implementation challenges with regard to communication costs and resource allocation. In this talk, we begin with an introduction to the background and fundamentals of FL. Then, we discuss several potential challenges for FL implementation. In addition, we provide extensive simulation results to showcase the discussed issues and possible solutions.
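The local-train / aggregate loop described in the abstract can be sketched as a minimal FedAvg-style simulation. This is an illustrative sketch only, not the speaker's implementation: the linear model, synthetic client data, and hyperparameters are assumptions chosen to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: each of 5 clients holds private samples of the same
# underlying linear model y = X @ true_w + noise.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """Client-side gradient descent on the local least-squares loss.
    Only the updated model w leaves the device, never (X, y)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Server loop: broadcast the global model, collect local models,
# and average them weighted by local dataset size (FedAvg).
w_global = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # converges toward true_w = [2, -1]
```

The communication cost per round is one model vector per client, independent of the local dataset sizes, which is exactly the trade-off the talk examines at mobile-edge scale.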
Ching-Yung Lin, Professor, IEEE Fellow, IBM Chief Scientist (Graph Computing), IBM Distinguished Researcher
Columbia University, USA
Abstract: Artificial Intelligence has recently shown its great potential, especially in games and in multimedia, e.g., vision, speech, and language recognition and creation. Beyond sensing, AI with other brain functions, such as reasoning, feeling, strategy, and knowledge learning, is still in early-stage exploration. Since the human brain is a graph of 100 billion nodes and 700 trillion edges, we have been building graph computing foundations for more than a decade to explore achieving full-brain functions through graph computing. Our latest version is the Graphen Ardi AI platform.
How can Full-Brain AI technologies such as Deep Learning, Machine Reasoning, and Knowledge Exploration be used in the medical domain? For instance, you might have heard inspiring stories, such as how, in December 2020, Google's AlphaFold2 successfully 'solved' the protein structure prediction challenge for static samples in the PDB through Deep Learning. A US consulting firm published a white paper in May 2020 listing Google, Graphen, Intel, and Nvidia as potential companies whose foundations will power the advance of future drug development. Graphen's AI Tools for Medicine (Atom) provides tools for four stages: (1) Druggable Candidate Finding, by analyzing omics data and pathways empowered by large-scale knowledge learning; (2) Candidate Preprocessing, including protein structure prediction, protein function prediction, epitope & paratope site prediction, and protein binding site prediction; (3) Small-Molecule and Biosimilar Drug Development, including drug-target affinity prediction, antibody selection models, drug/peptide generative models, and ADME prediction tools; and (4) Precision Therapy Development, including explainable drug selection. We shall also introduce our work in Whole Genome Analysis.
Dr. Yuantong Ding (丁远彤), Scientist at China National GeneBank
Yuantong Ding received her B.S. degree in biology from Fudan University in 2012, and the M.S. degree in computer science and the Ph.D. degree in biology from Duke University in 2018. She joined BGI as a research scientist in 2019. She is an expert in evolutionary analysis and cancer genomics. Her current interests include biological big data mining and management, and security and privacy protection with new technologies such as blockchain, secure multi-party computation, and federated learning. She has published 16 academic papers and international conference reports, and holds 4 invention patents. Her team won an international award in the iDASH competition organized by the NIH in 2019, and was recognized as a Benchmark Case of Privacy-Preserving Computation by the China Academy of Information and Communications Technology in 2020. She has participated in the development of multiple international, national, and regional standards, including the IEEE international standards Guide for Architectural Framework and Application of Federated Machine Learning and Guide for an Architectural Framework for Explainable Artificial Intelligence.
She holds a Ph.D. in biology and an M.S. in computer science from Duke University, USA, and is recognized as a Shenzhen Overseas High-Level Talent. Her main research directions are biological big data mining and management, the design and development of new deep-learning-based algorithms, and the integration and application of biological big data with frontier technologies such as blockchain, cryptography, and artificial intelligence. She has published 4 high-level papers, holds 4 invention patents, participated in drafting the IEEE international standard "Guide for Architectural Framework and Application of Federated Machine Learning", and led the drafting and publication of two group standards: a guide for blockchain services in genomic data applications and a guide for blockchain-based evidence preservation in genomic data circulation.
Speech Title: CNGBdb-CODEPLOT: an integrated cloud platform for multi-omics data sharing and analysis in life science
Abstract: The dramatic advancement of DNA sequencing technology has revolutionized research approaches and benefited human health, agricultural science, and pandemic control. CODEPLOT provides a comprehensive solution for data sharing, workflow management, elastic cloud computing resources, and a trusted collaboration environment for research and industry in life science. At this stage, three unique datasets, including the oneKP dataset, COVID-19 genome sequence datasets, and single-cell project datasets, together with 23 corresponding automatic analysis workflows, are available and ready for use on the CODEPLOT platform. Compared with other relevant resources, CODEPLOT stands out in three aspects. Firstly, Docker/container technology, the Workflow Description Language (WDL), and the Cromwell workflow engine are incorporated to provide a highly efficient and user-friendly framework for creating workflows and performing bioinformatics analysis on the platform. Meanwhile, CODEPLOT provides enterprise-level elastic cloud computing resources to run parallel jobs in batch mode, based on highly scalable, high-performance, enterprise-class Kubernetes clusters and Docker container technology. Furthermore, cutting-edge technologies such as data encryption, blockchain, and secure multi-party computation are employed to provide a highly secure and reliable environment for data sharing and collaboration. In short, CODEPLOT is a reliable and flexible bioinformatics computing platform for life science, which aims to promote the efficient sharing, cooperation, and utilization of omics data in research and industry.
Availability and Implementation: https://db.cngb.org/codeplot
Hongjun Wang, Associate Professor, Key Lab of Cloud Computing & Intelligent Technology, Southwest Jiaotong University, China
Hongjun Wang, Ph.D., is an associate professor in the School of Computing and Artificial Intelligence, Southwest Jiaotong University. He is a senior member of the China Computer Federation (CCF), a member of the CCF Artificial Intelligence and Pattern Recognition Technical Committee, a member of the Machine Learning Technical Committee of the Chinese Association for Artificial Intelligence, and a member of the CCF Collaborative Computing Technical Committee. He has been engaged in machine learning research for more than 10 years and has published more than 80 papers in renowned international and domestic academic journals and conferences (such as IEEE TC, IEEE TKDE, ACM TKDD, Information Sciences, DMKD, KBS, Science China, and SDM), most of which are indexed by SCI or EI. He serves as a reviewer for more than 30 journals, including IEEE TC, IEEE TNNLS, IEEE TKDE, ACM TKDD, DMKD, KBS, Information Sciences, Journal of Software, and Chinese Journal of Computers.
Speech Title: Discriminant representation learning
Abstract: Representation learning is an important part of deep learning. Traditional representation learning aims to make the difference between the input and its decoded reconstruction as small as possible. The theme of this talk is to learn representations that not only yield a small reconstruction error but also capture the discriminability between data categories. At the same time, the feature learning process is guided by ensemble learning, which makes the resulting representation more discriminative. In this lecture, the restricted Boltzmann machine is taken as an example to explain the model construction, inference, and algorithm design of discriminant representation learning.
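As a concrete reference point for the base model the talk builds on, here is a minimal Bernoulli restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) in NumPy. The toy two-category data, layer sizes, and hyperparameters are illustrative assumptions; the discriminant and ensemble-guided objectives discussed in the talk are not reproduced here, only the standard reconstruction-driven training they extend.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy binary data: two "categories" with different active-pixel patterns.
data = np.vstack([
    rng.random((50, 6)) < np.array([.9, .9, .9, .1, .1, .1]),
    rng.random((50, 6)) < np.array([.1, .1, .1, .9, .9, .9]),
]).astype(float)

n_vis, n_hid = 6, 4
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v = np.zeros(n_vis)   # visible bias
b_h = np.zeros(n_hid)   # hidden bias
lr = 0.1

# CD-1: one Gibbs step per parameter update (full-batch for simplicity).
for epoch in range(200):
    v0 = data
    p_h0 = sigmoid(v0 @ W + b_h)                     # hidden probabilities
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sampled hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                   # reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# The hidden probabilities serve as the learned representation.
codes = sigmoid(data @ W + b_h)
recon_err = np.mean((data - sigmoid(codes @ W.T + b_v)) ** 2)
print(recon_err)
```

The discriminant variant described in the talk would add a category-dependent term to this objective so that the hidden codes separate classes, rather than only minimizing `recon_err`.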