Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
This is a page not in the main menu.
Published:
Announcing the official release of the Metropolis Multi-Camera Tracking AI Workflow, featured in NVIDIA GTC’24 Keynote by Jensen Huang. This innovative solution accelerates the development of vision AI applications for large spaces, enhancing safety, efficiency, and management across various industries. Leveraging NVIDIA’s cutting-edge tools, this workflow offers a validated path to production, customizable AI models, and comprehensive support, enabling seamless development from simulation to deployment. Join us in transforming infrastructure and operations with advanced AI technology.
Published:
As the lead organizer of the AI City Challenge at CVPR, I’m excited to highlight our progress with NVIDIA Omniverse, which provided the largest indoor synthetic dataset for over 700 teams from nearly 50 countries. This dataset, essential for developing AI models to improve efficiency in retail, warehouse management, and traffic systems, included 212 hours of video across 90 virtual environments. Our global collaboration with ten prestigious institutions underscores the effort to advance AI for smart cities and automation. NVIDIA’s innovations, like Omniverse Cloud Sensor RTX, will further accelerate autonomous system development. Join the Omniverse community to stay updated and connected.
Published:
In the heart of the industrial automation revolution, the Metropolis multi-camera tracking system emerges as a beacon of innovation, seamlessly integrating with NVIDIA’s AI suite to redefine efficiency and safety in complex industrial settings. Developed by a pioneering software engineer, Metropolis creates a real-time, comprehensive map from hundreds of camera feeds, guiding autonomous mobile robots through intricate environments with unparalleled precision. This fusion of real-time AI and digital twin technology not only showcases the potential to drastically reduce operational downtime but also marks a significant leap forward in the quest for smarter, more responsive industrial ecosystems. Through this lens, we glimpse the future of automation, where digital precision and human ingenuity converge to create harmonious, highly optimized workplaces.
Published:
The Metropolis AI Workflows & Microservices 1.0 is officially live and is set to revolutionize the way enterprises and our ecosystem approach centralized perception across an array of matrixed sensors. One of the most exciting features of this release is the Multi-Camera Tracking app, which I had the pleasure of developing. The app is a reference architecture for video analytics applications that tracks people across multiple cameras and provides the counts of unique people seen over time. This is also known as Multi-Target Multi-Camera (MTMC) tracking.
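MTMC tracking fuses per-camera tracklets into global identities and then reports unique-people counts over time. As a toy sketch of just that final counting step (assuming upstream re-identification has already matched each local track to a global ID — the matching itself is the hard part and is not shown; all names below are illustrative, not the app's actual API):

```python
# Toy sketch of the MTMC "unique people over time" counting step.
# Assumes re-identification has already assigned each per-camera
# track a global person ID; identifiers here are hypothetical.
from collections import defaultdict

def unique_counts_over_time(events):
    """events: iterable of (time_bucket, camera_id, global_person_id)."""
    seen_per_bucket = defaultdict(set)
    for bucket, _camera, gid in events:
        # A person seen on several cameras in the same bucket counts once.
        seen_per_bucket[bucket].add(gid)
    return {bucket: len(gids) for bucket, gids in sorted(seen_per_bucket.items())}

events = [
    (0, "cam1", "p1"), (0, "cam2", "p1"),  # same person on two cameras
    (0, "cam2", "p2"),
    (1, "cam1", "p2"), (1, "cam3", "p3"),
]
print(unique_counts_over_time(events))  # {0: 2, 1: 2}
```

The set-per-bucket design is what distinguishes MTMC counting from naive per-camera counting: the same global ID appearing on multiple cameras contributes one, not many.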
Published:
On a beautiful day in Seremban, Malaysia, Macy Lee and I had our dream wedding. It was a day filled with love, joy, and the presence of God that we will cherish forever.
Published:
NVIDIA has recently announced the release of the TAO Toolkit 4.0, which includes several exciting new features and enhancements. As a developer who has contributed to the toolkit, I’m thrilled to share my experience working on the people re-identification and pose-based action recognition networks, as well as the end-to-end video analytics pipelines on the Triton Inference Server.
Published:
Starting a new church can be a daunting task. However, when a group of Christ followers feel called by God to spread the Gospel, they know they must answer. That’s exactly what happened with the team at Waymaker Church.
Published:
Love is a beautiful thing, and nothing symbolizes that more than a wedding ceremony. Macy Lee and I, two lovebirds who have been together for a while, decided to take our relationship to the next level by tying the knot in an intimate vow ceremony. We chose the beautiful rooftop lounge of Augusta Apartments in Seattle, WA, to exchange our vows in front of our church and lab friends.
Published:
Love is in the air, and when it’s time to take that big step, nothing is more exciting than the perfect proposal. For me, that moment came on Thanksgiving Day in 2021, at my friend’s house in Bothell, WA. I proposed to my girlfriend, Macy Lee, and I’m thrilled to say that she said yes!
Published:
I am excited to share my experience working on the Amazon One project, an innovative identity service that uses people’s palm for payment, entry, and more. As a member of the research team that developed and launched Amazon One, I had the opportunity to contribute to this groundbreaking technology in significant ways.
Published:
I am pleased to announce that I have successfully passed my Ph.D. Dissertation Defense in the Department of Electrical and Computer Engineering (ECE) at the University of Washington (UW). This has been a long and challenging journey, and I am grateful for the support of my supervisory committee, colleagues, sponsors, family, friends, and my faith.
Published:
As a software developer, I love to participate in hackathons to test my skills and knowledge, as well as to collaborate with fellow tech enthusiasts. One of the most exciting hackathons I have participated in is Code for the Kingdom (C4TK) - Seattle 2019, where my team won the People’s Choice Award.
Published:
I still remember the feeling of excitement and awe when I received the news that I had been elected to be a Session Elder of the University Presbyterian Church (UPC). It was an honor that came with great responsibility and a strong sense of duty to serve my fellow members and the church community as a whole.
Published in 2014 International Society for Music Information Retrieval Conference, 2014
Recommended citation: Zheng Tang and Dawn A. A. Black. "Melody Extraction from Polyphonic Audio of Western Opera: A Method Based on Detection of the Singer’s Formant". 2014 International Society for Music Information Retrieval Conference (ISMIR 2014). pp. 161-166. 2014. http://www.terasoft.com.tw/conf/ismir2014/proceedings/T029_329_Paper.pdf
Published in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, 2016
Recommended citation: Zheng Tang, Jenq-Neng Hwang, Yen-Shuo Lin and Jen-Hui Chuang. "Multiple-Kernel Adaptive Segmentation and Tracking (MAST) for Robust Object Tracking". Proceedings of 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016). pp. 3064-3068. 2016. http://ieeexplore.ieee.org/document/7471849
Published in 2016 International Conference on Pattern Recognition, 2016
Recommended citation: Zheng Tang, Yen-Shuo Lin, Kuan-Hui Lee, Jenq-Neng Hwang, Jen-Hui Chuang and Zhijun Fang. "Camera Self-Calibration from Tracking of Moving Persons". Proceedings of 2016 International Conference on Pattern Recognition (ICPR 2016). pp. 260-265. 2016. https://ieeexplore.ieee.org/document/7899644
Published in IEEE Transactions on Circuits and Systems for Video Technology, 2017
Recommended citation: Young-Gun Lee, Zheng Tang and Jenq-Neng Hwang. "Online-Learning-Based Human Tracking Across Non-Overlapping Cameras". IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT). vol. 28, no. 10, pp. 2870-2883. 2018. http://ieeexplore.ieee.org/document/7932896
Published in 2017 IEEE Smart World Congress - 1st AI City Challenge Workshop, 2017
Recommended citation: Zheng Tang, Gaoang Wang, Tao Liu, Young-Gun Lee, Adwin Jahn, Xu Liu, Xiaodong He and Jenq-Neng Hwang. "Multiple-Kernel Based Vehicle Tracking Using 3D Deformable Model and Camera Self-Calibration". arXiv:1708.06831. 2017. https://arxiv.org/abs/1708.06831
Published in 2017 IEEE International Conference on Image Processing, 2017
Recommended citation: Young-Gun Lee, Zheng Tang, Jenq-Neng Hwang and Zhijun Fang. "Inter-Camera Tracking Based on Fully Unsupervised Online Learning". Proceedings of 2017 IEEE International Conference on Image Processing (ICIP 2017). pp. 2607-2611. 2017. https://ieeexplore.ieee.org/document/8296754
Published in 2017 International Workshop on Multimedia Signal Processing, 2017
Recommended citation: Tao Liu, Yong Liu, Zheng Tang and Jenq-Neng Hwang. "Adaptive Ground Plane Estimation for Moving Camera-Based 3D Object Tracking". Proceedings of 2017 International Workshop on Multimedia Signal Processing (MMSP 2017). 2017. https://ieeexplore.ieee.org/document/8122256
Published in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 2nd AI City Challenge Workshop, 2018
Recommended citation: Zheng Tang, Gaoang Wang, Hao Xiao, Aotian Zheng and Jenq-Neng Hwang. "Single-Camera and Inter-Camera Vehicle Tracking and 3D Speed Estimation Based on Fusion of Visual and Semantic Features". Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2018). pp. 108-115. 2018. http://openaccess.thecvf.com/content_cvpr_2018_workshops/w3/html/Tang_Single-Camera_and_Inter-Camera_CVPR_2018_paper.html
Published in 2018 IEEE International Conference on Multimedia and Expo, 2018
Recommended citation: Zheng Tang, Renshu Gu and Jenq-Neng Hwang. "Joint Multi-View People Tracking and Pose Estimation for 3D Scene Reconstruction". 2018 IEEE International Conference on Multimedia and Expo (ICME 2018). 2018. https://ieeexplore.ieee.org/document/8486576
Published in 2018 IEEE International Conference on Image Processing, 2018
Recommended citation: Na Wang, Haiqing Du, Yong Liu, Zheng Tang and Jenq-Neng Hwang. "Self-Calibration of Traffic Surveillance Cameras Based on Moving Vehicle Appearance and 3-D Vehicle Modeling". Proceedings of 2018 IEEE International Conference on Image Processing (ICIP 2018). pp. 3064-3068. 2018. https://ieeexplore.ieee.org/document/8451478
Published in IEEE Access, 2019
Recommended citation: Zheng Tang, Yen-Shuo Lin, Kuan-Hui Lee, Jenq-Neng Hwang and Jen-Hui Chuang. "ESTHER: Joint Camera Self-Calibration and Automatic Radial Distortion Correction from Tracking of Walking Humans". IEEE Access. vol. 7, pp. 10754-10766. 2019. https://ieeexplore.ieee.org/document/8605504
Published in IEEE Access, 2019
Recommended citation: Zheng Tang and Jenq-Neng Hwang. "MOANA: An Online Learned Adaptive Appearance Model for Robust Multiple Object Tracking in 3D". IEEE Access. vol. 7, pp. 31934-31945. 2019. https://ieeexplore.ieee.org/document/8660675
Published in Ph.D. dissertation. Department of Electrical & Computer Engineering, University of Washington, Seattle, WA, 2019
Recommended citation: Zheng Tang. "Robust Video Object Tracking via Camera Self-Calibration". Ph.D. dissertation. University of Washington, Seattle, WA. 2019. http://hdl.handle.net/1773/43951
Published in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 3rd AI City Challenge Workshop, 2019
Recommended citation: Milind Naphade, Zheng Tang, Ming-Ching Chang, David C Anastasiu, Anuj Sharma, Rama Chellappa, Shuo Wang, Pranamesh Chakraborty, Tingting Huang, Jenq-Neng Hwang and Siwei Lyu. "The 2019 AI City Challenge". Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2019). pp. 452-460. 2019. http://openaccess.thecvf.com/content_CVPRW_2019/html/AI_City/Naphade_The_2019_AI_City_Challenge_CVPRW_2019_paper.html
Published in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019
Recommended citation: Zheng Tang, Milind Naphade, Ming-Yu Liu, Xiaodong Yang, Stan Birchfield, Shuo Wang, Ratnesh Kumar, David Anastasiu and Jenq-Neng Hwang. "CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification". Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019). pp. 8797-8806. 2019. https://arxiv.org/abs/1903.09254
Published in 2019 IEEE/CVF International Conference on Computer Vision, 2019
Recommended citation: Zheng Tang, Milind Naphade, Stan Birchfield, Jonathan Tremblay, William Hodge, Ratnesh Kumar, Shuo Wang and Xiaodong Yang. "PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification Using Highly Randomized Synthetic Data". Proceedings of 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019). pp. 211-220. 2019. http://arxiv.org/abs/2005.00673
Published in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 4th AI City Challenge Workshop, 2020
Recommended citation: Milind Naphade, Shuo Wang, David Anastasiu, Zheng Tang, Ming-Ching Chang, Xiaodong Yang, Liang Zheng, Anuj Sharma, Rama Chellappa and Pranamesh Chakraborty. "The 4th AI City Challenge". Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2020). 2020. https://arxiv.org/abs/2004.14619
Published in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 5th AI City Challenge Workshop, 2021
Recommended citation: Milind Naphade, Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching Chang, Xiaodong Yang, Yue Yao, Liang Zheng, Pranamesh Chakraborty, Christian E. Lopez, Anuj Sharma, Qi Feng, Vitaly Ablavsky and Stan Sclaroff. "The 5th AI City Challenge". Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021). 2021. https://arxiv.org/abs/2104.12233
Published in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 6th AI City Challenge Workshop, 2022
Recommended citation: Milind Naphade, Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching Chang, Yue Yao, Liang Zheng, Mohammed Shaiqur Rahman, Archana Venkatachalapathy, Anuj Sharma, Qi Feng, Vitaly Ablavsky, Stan Sclaroff, Pranamesh Chakraborty, Alice Li, Shangru Li and Rama Chellappa. "The 6th AI City Challenge". Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022). 2022. https://arxiv.org/abs/2204.10380
Published in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 7th AI City Challenge Workshop, 2023
Recommended citation: Milind Naphade, Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching Chang, Yue Yao, Liang Zheng, Mohammed Shaiqur Rahman, Meenakshi S. Arya, Anuj Sharma, Qi Feng, Vitaly Ablavsky, Stan Sclaroff, Pranamesh Chakraborty, Sanjita Prajapati, Alice Li, Shangru Li, Krishna Kunadharaju, Shenxin Jiang and Rama Chellappa. "The 7th AI City Challenge". Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2023). 2023. https://arxiv.org/abs/2304.07500
Published in IEEE Transactions on Circuits and Systems for Video Technology, 2023
Recommended citation: Chao Wang and Zheng Tang. "The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework". IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT). 2024. http://ieeexplore.ieee.org/document/10182291
Published in arXiv, 2023
Recommended citation: Yue Yao, Xinyu Tian, Zheng Tang, Sujit Biswas, Huan Lei, Tom Gedeon and Liang Zheng. "Training with Product Digital Twins for AutoRetail Checkout". arXiv:2308.09708. 2023. https://arxiv.org/abs/2308.09708
Published in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition - 8th AI City Challenge Workshop, 2024
Recommended citation: Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching Chang, Yue Yao, Liang Zheng, Mohammed Shaiqur Rahman, Meenakshi S. Arya, Anuj Sharma, Pranamesh Chakraborty, Sanjita Prajapati, Quan Kong, Norimasa Kobori, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Ganzorig Batnasan, Fady Alnajjar, Ping-Yang Chen, Jun-Wei Hsieh, Xunlei Wu, Sameer Satish Pusegaonkar, Yizhou Wang, Sujit Biswas and Rama Chellappa. "The 8th AI City Challenge". Proceedings of 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2024). 2024. https://arxiv.org/abs/2404.09432
Published in Neurocomputing, 2024
Graph Convolutional Networks (GCNs) have emerged as a potent tool for learning graph representations, finding applications in a plethora of real-world scenarios. Nevertheless, a significant portion of deep learning research has predominantly concentrated on enhancing model performance via the construction of deeper GCNs. Regrettably, the efficacy of training deep GCNs is marred by two fundamental weaknesses: the inadequacy of conventional methodologies in handling heterogeneous networks, and the exponential surge in model complexity as network depth increases. This, in turn, imposes constraints on their practical utility. To surmount these inherent limitations, we propose an innovative approach named the Wide Sub-stage Graph Convolutional Network (WSSGCN). Our method is an outcome of meticulous observations drawn from classical and graph convolutional networks, aimed at rectifying the constraints associated with traditional GCNs. Our strategy involves the conception of a staged convolutional network framework that mirrors the fundamental tenets of the step-by-step learning process akin to human cognition. This framework prioritizes three distinct forms of consistency: response-based, feature-based, and relationship-based. Our approach involves three tailored convolutional networks capturing node/edge, subgraph, and global features. Additionally, we introduce a novel method to expand graph width for efficient GCN training. Empirical validation on benchmarks highlights WSSGCN’s superior accuracy and faster training versus conventional GCNs. WSSGCN triumphs over traditional GCN constraints, significantly enhancing graph representation learning.
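For readers unfamiliar with the base operation WSSGCN builds on, here is a minimal sketch of one standard graph-convolution step (the Kipf–Welling formulation, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)); WSSGCN's staged training, widened sub-stages, and consistency losses are not reproduced here — this only illustrates the underlying GCN layer:

```python
# Minimal sketch of a single GCN layer with symmetric normalization.
# This illustrates the baseline operation only; WSSGCN's staged and
# widened architecture is not implemented here.
import numpy as np

def gcn_layer(A, H, W):
    """A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

# Toy graph: 3 nodes in a path, 2 input features, identity weights.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
print(gcn_layer(A, H, W).shape)  # (3, 2)
```

Stacking many such layers is exactly what the abstract flags as costly; WSSGCN's alternative is to widen shallow sub-stages rather than deepen the stack.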
Recommended citation: Chao Wang, Zheng Tang and Hailu Xu. "WSSGCN: Wide Sub-stage Graph Convolutional Networks". Neurocomputing. vol. 602, p. 128273. 2024. https://www.sciencedirect.com/science/article/pii/S0925231224010440
Published in 2024 European Conference on Computer Vision, 2024
Recommended citation: Liqi Yan, Qifan Wang, Junhan Zhao, Qiang Guan, Zheng Tang, Jianhui Zhang and Dongfang Liu. "Radiance Field Learners As UAV First-Person Viewers". Proceedings of 2024 European Conference on Computer Vision (ECCV 2024). 2024. https://arxiv.org/abs/2408.05533