CVPR 2021, The 1st Workshop on

Sketch-Oriented Deep Learning (SketchDL)

Online, June 19th, 2021

15:00 to 19:00 (EDT)

This page will be updated continuously.

Videos, slides, and other related documents will be uploaded soon.

Please email Peng Xu with any feedback or questions.

(Link to the 2nd SketchDL workshop @ CVPR 2022)


Speakers


Organizers


Overview

Drawing is a universal mode of communication: it has been used from ancient times to the present day, comes naturally to children before they learn to write, and transcends language barriers to link human societies. The recent prevalence of touchscreen devices has made sketch creation far easier than ever before and has consequently made sketch-oriented applications increasingly popular, e.g., the Quick, Draw! online game. Progress in deep learning and machine learning, e.g., GANs (generative adversarial networks), GNNs (graph neural networks), meta-learning, and self-supervised learning, has immensely benefited sketch research and applications. Moreover, large-scale (even million-scale) sketch datasets have emerged in recent years. The proliferation of mobile computing devices with touchscreen interfaces has also sparked interest in machine learning methods that can process human sketches, whether as an interface to our devices or as a means of facilitating content production and the communication of ideas. All of this is bringing new opportunities and challenges to the field of sketch-oriented research.

This workshop aims to bring together researchers from a diverse range of research areas (e.g., computer vision, computer graphics, human-computer interaction, deep learning, machine learning, cognitive science) to explore directions and topics for future sketch-oriented machine learning.


Program

15:00 - 15:05 . opening remarks

15:05 - 15:45 . keynote by Dr. Jun-Yan Zhu, "Sketch Your Own Models"

15:45 - 16:25 . keynote by Dr. Petar Veličković, "Neural Algorithmic Sketching"

16:25 - 16:35 . break & demo session

16:35 - 17:15 . keynote by Dr. Judith Fan, "Cognitive Tools for Making the Invisible Visible"

17:15 - 17:25 . oral paper presentation: "Im2Vec: Synthesizing Vector Graphics Without Vector Supervision" [paper]

17:25 - 17:35 . oral paper presentation: "On Training Sketch Recognizers for New Domains" [paper]

17:35 - 17:45 . oral paper presentation: "Compact and Effective Representations for Sketch-Based Image Retrieval" [paper]

17:45 - 17:55 . oral paper presentation: "Sketch-QNet: A Quadruplet ConvNet for Color Sketch-Based Image Retrieval" [paper]

17:55 - 18:05 . oral paper presentation: "Engineering Sketch Generation for Computer-Aided Design" [paper]

18:05 - 18:15 . oral paper presentation: "Creative Sketch Generation" [paper]

18:15 - 18:55 . panel discussion on open problems

18:55 - 19:00 . closing remarks


Call for Papers

(The CFP poster can be downloaded via this link.)

Topics of Interest

This workshop encourages novel and creative deep learning work on all forms of drawings, including free-hand sketches, professional (forensic) facial sketches, professional pencil sketches, professional landscape sketches, cartoons/manga, well-drawn 3D sketches, etc.

Topics of interest for this workshop include, but are not limited to:

  • uni-modal tasks
    • global-level understanding and interpretation
      • sketch object/scene/face recognition
      • sketch online/offline recognition
      • sketch retrieval and hashing
      • sketch generation
      • sketch-oriented neural representations, e.g., CNN- or RNN-based network design for sketch
    • partial-level understanding and interpretation
      • sketch grouping
      • sketch segmentation (pixel-level parsing and stroke-level segmentation)
      • sketch abstraction
  • multi-modal tasks
    • sketch-related cross-modal visual retrieval/hashing, e.g., sketch-based photo/video/3D retrieval
    • sketch-related cross-modal generation
    • text-to-sketch generation
    • applications with other modalities, e.g., spatial text, clip art, cartoons
  • emerging and potential theory and applications
    • sketch-oriented GNN/GCN (graph neural/convolutional network) and TCN (temporal/textual convolutional network) design and representation
    • self-supervised/unsupervised/weakly supervised learning and representations for sketch
    • transfer learning, meta-learning, and zero-/few-shot learning for sketch
    • adversarial learning for sketch
    • sketch-related security and surveillance, e.g., sketch-based person Re-ID, sketch-based forensic applications
    • sketch-related AR/VR (Augmented Reality and Virtual Reality) and HCI applications
    • sketch-related RL (Reinforcement Learning)
    • sketch for computer graphics, robotics, art/industrial design, business, and education

Important Dates

Paper Submission Deadline . Apr. 6, 2021

Notification of Acceptance . Apr. 15, 2021

Camera-ready Due . Apr. 20, 2021

Submission and Review

All submissions will be handled electronically via the workshop’s CMT website:
https://cmt3.research.microsoft.com/SketchDL2021.

All submissions will undergo standard double-blind peer review.

Length, format, and template should follow the CVPR 2021 Submission Guidelines.

The best paper award will be sponsored by Google.


Technical Program Committee Members

Bria Long, Stanford University

Cusuh Ham, Georgia Institute of Technology

Kun Liu, Beijing University of Posts and Telecommunications

Leo Sampaio Ferraz Ribeiro, Universidade de São Paulo

Manfred Lau, City University of Hong Kong

Mengqiu Xu, Beijing University of Posts and Telecommunications

Moacir Antonelli Ponti, Universidade de São Paulo

Patsorn Sangkloy, Georgia Institute of Technology

Pengkai Zhu, Boston University

Qingyuan Zheng, University of Maryland

Tongtong Yuan, Beijing University of Technology

Tu Bui, University of Surrey

Xiaoguang Han, The Chinese University of Hong Kong (Shenzhen)

Xiaoying Feng, Avar Consulting, Inc. & American Institutes for Research

Xiatian Zhu, Samsung AI Centre, UK

Yongye Huang, ByteDance

Youyi Zheng, Zhejiang University


Sponsor


The webpage template is from here.