Ke-Sen Huang's Home Page

  • I received my Ph.D. from the Department of Computer Science of National Tsing-Hua University, Taiwan.
  • My research interests include: animation synthesis, animation summarization, and motion retrieval.
  • My Web Changelog

Paper Collection / Resources

  • Open Access to ACM SIGGRAPH-Sponsored Content: For both SIGGRAPH and SIGGRAPH Asia, conference content is freely accessible in the ACM Digital Library for a one-month period that begins two weeks before each conference and ends a week after it concludes.
  • Journal of Computer Graphics Techniques
  • Point-based Graphics Papers

Computer Graphics Conference and Special Issue Calendar

  • CFP - The Springer Encyclopedia of Computer Graphics and Games (ECGG) (PDF)
  • 2013, 2012, 2011, 2010, 2009, 2008

SIGGRAPH

  • Page maintained by Ke-Sen Huang
  • SIGGRAPH 2022 (Journal Track Submissions: 257     Dual-track Submissions: 353     Journal Acceptances: 133     Conference Acceptances: 61     Journal Acceptance Rate: 21.80%; see the rate-check sketch after this list)
  • SIGGRAPH 2021 (Submitted: 444     Accepted:   149     Acceptance Rate: 34%)
  • SIGGRAPH 2020 (Submitted: 443     Accepted:   124     Acceptance Rate: 28%)
  • SIGGRAPH 2019 (Submitted: 385     Accepted:   111     Acceptance Rate: 29%)
  • SIGGRAPH 2018 (Submitted: 464     Accepted:   128     Acceptance Rate: 28%)
  • SIGGRAPH 2017 (Submitted: 439     Accepted:   126     Acceptance Rate: 28%)
  • SIGGRAPH 2016 (Submitted: 467     Accepted:   119     Acceptance Rate: 25%)
  • SIGGRAPH 2015 (Submitted: 462     Accepted:   118     Acceptance Rate: 25%)
  • SIGGRAPH 2014 (Submitted: 505     Accepted:   127     Acceptance Rate: 25%)
  • SIGGRAPH 2013 (Submitted: 480     Accepted:   115     Acceptance Rate: 24%)
  • SIGGRAPH 2012 (Submitted: 449     Accepted:   94     Acceptance Rate: 21%)
  • SIGGRAPH 2011 (Submitted: 432     Accepted:   82     Acceptance Rate: 19%)
  • SIGGRAPH 2010 (Submitted: 390     Accepted:   103     Acceptance Rate: 26%)
  • SIGGRAPH 2009 (Submitted: 439     Accepted:    78     Acceptance Rate: 18%)
  • SIGGRAPH 2008 (Submitted: 518     Accepted:    90     Acceptance Rate: 17%)
  • Page maintained by Ke-Sen Huang and Tim Rowley
  • SIGGRAPH 2007 (Submitted: 455     Accepted: 108     Acceptance Rate: 24%)
  • SIGGRAPH 2006 (Submitted: 474     Accepted:   86     Acceptance Rate: 18%)
  • SIGGRAPH 2005 (Submitted: 461     Accepted:   98     Acceptance Rate: 21%)
  • Page maintained by Tim Rowley
  • SIGGRAPH 2004 (Submitted: 478     Accepted:   83     Acceptance Rate: 17%) 
  • SIGGRAPH 2003 (Submitted: 424     Accepted:   81     Acceptance Rate: 19%)
  • SIGGRAPH 2002 (Submitted: 358     Accepted:   67     Acceptance Rate: 19%)
  • SIGGRAPH 2001 (Submitted: 300     Accepted:   65     Acceptance Rate: 22%)
  • SIGGRAPH 2000 (Submitted: 304     Accepted:   59     Acceptance Rate: 19%)
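
The percentages above are simple accepted/submitted ratios; for the dual-track years such as SIGGRAPH 2022, the listed journal rate appears to be taken against the combined journal-track and dual-track submission pool. A minimal Python sketch of that arithmetic, using the 2021 and 2022 figures from the list above (the helper name is illustrative):

    def acceptance_rate(accepted, submitted):
        """Return the acceptance rate as a percentage."""
        return 100.0 * accepted / submitted

    # Single-track year: SIGGRAPH 2021.
    print(round(acceptance_rate(149, 444)))           # -> 34

    # Dual-track year: SIGGRAPH 2022. The listed 21.80% journal rate matches
    # journal acceptances divided by the combined submission pool
    # (257 journal-track + 353 dual-track submissions), assuming that convention.
    print(round(acceptance_rate(133, 257 + 353), 2))  # -> 21.8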

SIGGRAPH Asia

  • SIGGRAPH Asia 2022 (Submitted:  ???     Accepted:   ??     Acceptance Rate:  ??%)
  • SIGGRAPH Asia 2021 (Submitted:  270     Accepted:   92     Acceptance Rate:  34%)
  • SIGGRAPH Asia 2020 (Submitted:  305     Accepted:  109     Acceptance Rate:  36%)
  • SIGGRAPH Asia 2019 (Submitted:  309     Accepted:   93     Acceptance Rate:  30%)
  • SIGGRAPH Asia 2018 (Submitted:  353     Accepted:  106     Acceptance Rate:  30%)
  • SIGGRAPH Asia 2017 (Submitted:  312     Accepted:   78     Acceptance Rate:  25%)
  • SIGGRAPH Asia 2016 (Submitted:  300     Accepted:   89     Acceptance Rate:  30%)
  • SIGGRAPH Asia 2015 (Submitted:  302     Accepted:   84     Acceptance Rate:  28%)
  • SIGGRAPH Asia 2014 (Submitted:  352     Accepted:   63     Acceptance Rate:  18%)
  • SIGGRAPH Asia 2013 (Submitted:  317     Accepted:   66     Acceptance Rate:  21%)
  • SIGGRAPH Asia 2012 (Submitted:  326     Accepted:   77     Acceptance Rate:  24%)
  • SIGGRAPH Asia 2011 (Submitted:  330     Accepted:   68     Acceptance Rate:  21%)
  • SIGGRAPH Asia 2010 (Submitted:  274     Accepted:   49     Acceptance Rate:  18%)
  • SIGGRAPH Asia 2009 (Submitted:  275     Accepted:   70     Acceptance Rate:  25%)
  • SIGGRAPH Asia 2008 (Submitted:  320     Accepted:   59     Acceptance Rate:  18%)

SIGGRAPH Resource

  • Page maintained by Stephen Hill
  • SIGGRAPH 2022 Resource (Courses, Posters, ...)
  • SIGGRAPH 2021 Resource (Courses, Posters, ...)
  • SIGGRAPH 2020 Resource (Courses, Posters, ...)
  • SIGGRAPH 2019 Resource (Courses, Posters, ...)
  • SIGGRAPH 2018 Resource (Courses, Posters, ...)
  • SIGGRAPH 2017 Resource (Courses, Posters, ...)
  • SIGGRAPH 2016 Resource (Courses, Posters, ...)
  • SIGGRAPH 2015 Resource (Courses, Posters, ...)
  • SIGGRAPH 2014 Resource (Courses, Posters, ...)
  • SIGGRAPH 2013 Resource (Courses, Posters, ...)
  • SIGGRAPH 2012 Resource (Courses, Posters, ...)
  • SIGGRAPH 2011 Resource (Courses, Posters, ...)

Eurographics

  • EG 2022 (Submitted:   ???     Accepted:   ??     Acceptance Rate: ??%)
  • EG 2021 (Submitted:   ???     Accepted:   ??     Acceptance Rate: ??%)
  • EG 2020 (Submitted:   ???     Accepted:   ??     Acceptance Rate: ??%)
  • EG 2019 (Submitted:   121     Accepted:   37     Acceptance Rate: 31%)
  • EG 2018 (Submitted:   ???     Accepted:   ??     Acceptance Rate: ??%)
  • EG 2017 (Submitted:   ???     Accepted:   ??     Acceptance Rate: ??%)
  • EG 2016 (Submitted:   ???     Accepted:   ??     Acceptance Rate: ??%)
  • EG 2015 (Submitted:   207     Accepted:   55     Acceptance Rate: 27%)
  • EG 2014 (Submitted:   209     Accepted:   52     Acceptance Rate: 25%)
  • EG 2013 (Submitted:   205     Accepted:   52     Acceptance Rate: 25%)
  • EG 2012 (Submitted:   260     Accepted:   66     Acceptance Rate: 25%)
  • EG 2011 (Submitted:   236     Accepted:   40     Acceptance Rate: 17%)
  • EG 2010 (Submitted:   261     Accepted:   53     Acceptance Rate: 20%)
  • EG 2009 (Submitted:   243     Accepted:   56     Acceptance Rate: 23%)
  • EG 2008 (Submitted:   300     Accepted:   58     Acceptance Rate: 19%)
  • EG 2007 (Submitted:   212     Accepted:   50     Acceptance Rate: 24%)
  • EG 2006 (Submitted:   250     Accepted:   42     Acceptance Rate: 17%)
  • EG 2005 (Submitted:   303     Accepted:   47     Acceptance Rate: 16%)
  • EG 2004 (Submitted:   243     Accepted:   44     Acceptance Rate: 18%)
  • EG 2003 (Submitted:   221     Accepted:   45     Acceptance Rate: 20%)

Symposium on Interactive 3D Graphics and Games

  • I3D 2022 (Submitted:   34     Accepted:   16     Acceptance Rate: 47%)
  • I3D 2021 (Submitted:   34     Accepted:   16     Acceptance Rate: 47%)
  • I3D 2020 (Submitted:   60     Accepted PACM: 9 Accepted Conference: 17    Acceptance Rate: 15% PACM plus 43% Conference)
  • I3D 2019 (Submitted:   49     Accepted PACM: 8 Accepted Conference: 16    Acceptance Rate: 16% PACM plus 33% Conference)
  • I3D 2018 (Submitted:   70     Accepted PACM: 22 Accepted Conference: 14    Acceptance Rate: 31% PACM plus 20% Conference)
  • I3D 2017 (Submitted:   45     Accepted:   16     Acceptance Rate: 36%)
  • I3D 2016 (Submitted:   48     Accepted:   20     Acceptance Rate: 42%)
  • I3D 2015 (Submitted:   39     Accepted:   15     Acceptance Rate: 38%)
  • I3D 2014 (Submitted:   47     Accepted:   19     Acceptance Rate: 40%)
  • I3D 2013 (Submitted:   68     Accepted:   20     Acceptance Rate: 29%)
  • I3D 2012 (Submitted:   63     Accepted:   25     Acceptance Rate: 40%)
  • I3D 2011 (Submitted:   64     Accepted:   24     Acceptance Rate: 38%)
  • I3D 2010 (Submitted:   72     Accepted:   24     Acceptance Rate: 33%)
  • I3D 2009 (Submitted:   87     Accepted:   28     Acceptance Rate: 32%)
  • I3D 2008 (Submitted:   57     Accepted:   24     Acceptance Rate: 42%)
  • I3D 2007 (Submitted:   72     Accepted:   25     Acceptance Rate: 35%)
  • I3D 2006 (Submitted:   73     Accepted:   28     Acceptance Rate: 38%)
  • I3D 2005 (Submitted:   95     Accepted:   26     Acceptance Rate: 27%)
  • I3D 2003 (Submitted: 102    Accepted:   27     Acceptance Rate: 26%)

Eurographics Symposium on Rendering

  • EGSR 2022 (Submitted:   47     Accepted (Journal/Conference):   15/12     Acceptance Rate (Journal/Conference):  31.9%/57.4%)
  • EGSR 2021 (Submitted:   52     Accepted (Journal/Conference):   14/20     Acceptance Rate (Journal/Conference):  26.9%/65.4%)
  • EGSR 2020 (Submitted:   50     Accepted (Journal/Conference):   14/5     Acceptance Rate (Journal/Conference):  28.0%/38.0%)
  • EGSR 2019 (Submitted:   65     Accepted (Journal/Conference):   19/6     Acceptance Rate (Journal/Conference):  29.2%/38.5%)
  • EGSR 2018 (Submitted:   39     Accepted:   15     Acceptance Rate:  38.5%)
  • EGSR 2017 (Submitted:   41     Accepted:   16     Acceptance Rate:  39.0%)
  • EGSR 2016 (Submitted:   41     Accepted:   13     Acceptance Rate:  31.7%)
  • EGSR 2015 (Submitted:   51     Accepted:   17     Acceptance Rate:  33.3%)
  • EGSR 2014 (Submitted:   41     Accepted:   15     Acceptance Rate:  36.6%)
  • EGSR 2013 (Submitted:   46     Accepted:   17     Acceptance Rate:  37.0%)
  • EGSR 2012 (Submitted:   69     Accepted:   21     Acceptance Rate:  30.4%)
  • EGSR 2011 (Submitted:   62     Accepted:   24     Acceptance Rate:  38.7%)
  • EGSR 2010 (Submitted:   72     Accepted:   28     Acceptance Rate:  38.9%)
  • EGSR 2009 (Submitted:   72     Accepted:   21     Acceptance Rate:  29.2%)
  • EGSR 2008 (Submitted:   71     Accepted:   26     Acceptance Rate:  36.6%)
  • EGSR 2007 (Submitted:   94     Accepted:   33     Acceptance Rate:  35.1%)
  • EGSR 2006 (Submitted: 109     Accepted:   39     Acceptance Rate:  35.8%)
  • EGSR 2005 (Submitted:   93     Accepted:   31     Acceptance Rate:  33.3%)
  • EGSR 2004 (Submitted:   98     Accepted:   39     Acceptance Rate:  39.8%)
  • EGSR 2003 (Submitted:   81     Accepted:   30     Acceptance Rate:  37.0%)

ACM SIGGRAPH / Eurographics Symposium on Computer Animation

  • SCA 2022 (Submitted:   78     Accepted:   30     Acceptance Rate: 38%)
  • SCA 2021 (Submitted:   ??     Accepted:   ??     Acceptance Rate: ??%)
  • SCA 2020 (Submitted:   59     Accepted:   22     Acceptance Rate: 37%)
  • SCA 2019 (Submitted:   ??     Accepted:   ??     Acceptance Rate: ??%)
  • SCA 2018 (Submitted:   ??     Accepted:   ??     Acceptance Rate: ??%)
  • SCA 2017 (Submitted:   ??     Accepted:   ??     Acceptance Rate: ??%)
  • SCA 2016 (Submitted:   47     Accepted:   24     Acceptance Rate: 51%)
  • SCA 2015 (Submitted:   ??     Accepted:   ??     Acceptance Rate: ??%)
  • SCA 2014 (Submitted:   48     Accepted:   18     Acceptance Rate: 38%)
  • SCA 2013 (Submitted:   57     Accepted:   20     Acceptance Rate: 35%)
  • SCA 2012 (Submitted:   80     Accepted:   27     Acceptance Rate: 34%)
  • SCA 2011 (Submitted:   77     Accepted:   30     Acceptance Rate: 39%)
  • SCA 2010 (Submitted:   56     Accepted:   24     Acceptance Rate: 43%)
  • SCA 2009 (Submitted:   67     Accepted:   26     Acceptance Rate: 39%)
  • SCA 2008 (Submitted:   60     Accepted:   24     Acceptance Rate: 40%)
  • SCA 2007 (Submitted:   81     Accepted:   28     Acceptance Rate: 35%)
  • SCA 2006 (Submitted: 126     Accepted:   37     Acceptance Rate: 29%)
  • SCA 2005 (Submitted: 100     Accepted:   35     Acceptance Rate: 35%)
  • SCA 2004 (Submitted: 120     Accepted:   37     Acceptance Rate: 31%)
  • SCA 2003 (Submitted: 100     Accepted:   38     Acceptance Rate: 38%)

Eurographics Symposium on Geometry Processing

  • SGP 2019 (Submitted:  ??     Accepted:   ??     Acceptance Rate:    ??%)
  • SGP 2018 (Submitted:  ??     Accepted:   ??     Acceptance Rate:    ??%)
  • SGP 2017 (Submitted:  ??     Accepted:   ??     Acceptance Rate:    ??%)
  • SGP 2016 (Submitted:  81     Accepted:   26     Acceptance Rate:    32%)
  • SGP 2015 (Submitted:  72     Accepted:   22     Acceptance Rate:    31%)
  • SGP 2014 (Submitted:  89     Accepted:   28     Acceptance Rate:    31%)
  • SGP 2013 (Submitted:  56     Accepted:   23     Acceptance Rate:    41%)
  • SGP 2012 (Submitted:  72     Accepted:   25     Acceptance Rate:    35%)
  • SGP 2011 (Submitted:  77     Accepted:   23     Acceptance Rate:    30%)
  • SGP 2010 (Submitted:  70     Accepted:   24     Acceptance Rate:    34%)
  • SGP 2009 (Submitted:  75     Accepted:   26     Acceptance Rate:    35%)
  • SGP 2008 (Submitted:  96     Accepted:   23     Acceptance Rate:    24%)
  • SGP 2007 (Submitted:  74     Accepted:   21     Acceptance Rate:    28%)
  • SGP 2006 (Submitted:  79     Accepted:   21     Acceptance Rate:    27%)
  • SGP 2005 (Submitted:  87     Accepted:   22     Acceptance Rate:    25%)
  • SGP 2004 (Submitted:   ?     Accepted:   25     Acceptance Rate:    29%)
  • SGP 2003 (Submitted:  72     Accepted:   25     Acceptance Rate:   35%)

High Performance Graphics

  • HPG 2022 (Submitted: 28     Accepted: 12     Acceptance Rate: 42.8%)
  • HPG 2021 (Submitted: 28     Accepted: 12 (6 conference + 6 journal)     Acceptance Rate: 42.8%)
  • HPG 2020 (Submitted: 22     Accepted: 12     Acceptance Rate: 54.5%)
  • HPG 2019 (Submitted: 20     Accepted: 6     Acceptance Rate: 30%)
  • HPG 2018 (Long Papers: Submitted: 39     Accepted: 12     Acceptance Rate: 30.7%; Short Papers: Submitted: 31     Accepted: 8     Acceptance Rate: 25.8%)
  • HPG 2017 (Submitted: 40     Accepted: 18     Acceptance Rate: 45%)
  • HPG 2016 (Submitted: 32     Accepted: 17     Acceptance Rate: 53.1%)
  • HPG 2015 (Submitted: 41     Accepted: 12     Acceptance Rate: 29%)
  • HPG 2014 (Submitted: 40     Accepted: 14     Acceptance Rate: 35%)
  • HPG 2013 (Submitted: 44     Accepted: 15     Acceptance Rate: 34%)
  • HPG 2012 (Submitted: 47     Accepted: 14     Acceptance Rate: 30%)
  • HPG 2011 (Submitted: 64     Accepted: 21     Acceptance Rate: 33%)
  • HPG 2010 (Submitted: 60     Accepted: 19     Acceptance Rate: 32%)
  • HPG 2009 (Submitted: 72     Accepted: 21     Acceptance Rate: 29%)
  • This conference is the synthesis of two highly successful conference series: Graphics Hardware and Interactive Ray Tracing.

The ACM SIGGRAPH Conference on Motion, Interaction and Games

The ACM/EG Expressive Symposium

IEEE/Eurographics Symposium on Point-Based Graphics

  • PBG 2007 (Submitted:  26     Accepted:   14     Acceptance Rate:  54%)
  • PBG 2006 (Submitted:  33     Accepted:   16     Acceptance Rate: 48%)
  • PBG 2005 (Submitted:  30     Accepted:   15     Acceptance Rate: 50%)
  • PBG 2004 (Submitted:  44     Accepted:   24     Acceptance Rate: 55%)

Digital Production Symposium

  • DigiPro 2013 (Submitted: ??     Accepted: 5     Acceptance Rate: ??%)

IEEE/EG Symposium on Interactive Ray Tracing

  • RT 2008 (Submitted: 46     Accepted: 24     Acceptance Rate: 52%)
  • RT 2007 (Submitted: 43     Accepted: 23     Acceptance Rate: 53%)
  • RT 2006 (Submitted: 41     Accepted: 22     Acceptance Rate: 54%)

Eurographics Workshop on 3D Object Retrieval

  • 3DOR 2008 (Submitted:  19     Accepted:  10     Acceptance Rate: 53%)

Eurographics Workshop on Sketch-Based Interfaces and Modeling

  • Page maintained by Xiaomao Wu
  • SBIM 2008 (Submitted:  ?     Accepted:  ?     Acceptance Rate: ?%)

Symposium on Applied Perception in Graphics and Visualization

  • Page maintained by Roy Walmsley
  • APGV 2008   (Submitted: ?     Accepted: ?     Acceptance Rate: ?%)
  • Page maintained by Ke-Sen Huang and Roy Walmsley
  • APGV 2007 (Submitted: 39     Accepted: 14     Acceptance Rate: 36%)
  • APGV 2006 (Submitted: 45     Accepted: 20     Acceptance Rate: 44%)
  • APGV 2005 (Submitted: 43     Accepted: 21     Acceptance Rate: 49%)
  • APGV 2004 (Submitted: 38     Accepted: 21     Acceptance Rate: 55%)

Non-Photorealistic Animation and Rendering

  • NPAR 2011 (Submitted: 37     Accepted:  18     Acceptance Rate: 49%)
  • NPAR 2010 (Submitted: 45     Accepted:  19     Acceptance Rate: 42%)
  • NPAR 2009 (Submitted: 21     Accepted:   7     Acceptance Rate: 33%)
  • NPAR 2008 (Submitted: 27     Accepted:  11     Acceptance Rate: 41%)
  • NPAR 2007 (Submitted: 34     Accepted: 16     Acceptance Rate: 47%)
  • NPAR 2006 (Submitted: 43     Accepted: 17     Acceptance Rate: 40%)
  • NPAR 2004 (Submitted: 63     Accepted: 14     Acceptance Rate: 22%)

Pacific Graphics

  • PG 2012 (Submitted:  153    Accepted:  30     Acceptance Rate:  20%)
  • PG 2011 (Submitted:  168    Accepted:  27     Acceptance Rate:  16%)
  • PG 2010 (Submitted:  180    Accepted:  31     Acceptance Rate:  17%)
  • PG 2009 (Submitted:  177     Accepted:  31     Acceptance Rate:  18%)
  • PG 2008 (Submitted:  186     Accepted:  34     Acceptance Rate:  18%)
  • PG 2007 (Submitted:  179     Accepted:  39     Acceptance Rate:  22%)
  • PG 2006 (Submitted:  206     Accepted:  35    Acceptance Rate:  17%)
  • PG 2005 (Submitted:  267     Accepted:  37     Acceptance Rate:  14%)
  • PG 2004 (Submitted:  164    Accepted:  41     Acceptance Rate:  25%)
  • PG 2003 (Submitted:  182     Accepted:  35     Acceptance Rate:  19%)
  • PG 2001 (Submitted:  112     Accepted:  41     Acceptance Rate:  37%)
  • PG 2000 (Submitted: 121     Accepted:  40     Acceptance Rate:  33%)

International Conference on Computational Photography

  • ICCP 2009 (Submitted: ?     Accepted: ?     Acceptance Rate: ?%)

Shape Modeling International

  • SMI 2008 (Submitted: ?     Accepted: 24     Acceptance Rate: ?%)
  • SMI 2007 (Submitted: 62     Accepted: 23     Acceptance Rate: 37%)
  • SMI 2006 (Submitted: 58     Accepted: 20     Acceptance Rate: 34%)
  • SMI 2005 (Submitted: 80     Accepted: 30     Acceptance Rate: 38%)
  • SMI 2004 (Submitted: 79     Accepted: 29     Acceptance Rate: 37%)

Computer Animation and Social Agents

Computer Graphics International

  • CGI 2008 (Submitted: ?     Accepted: ?     Acceptance Rate: ?%) 
  • CGI 2007 (Submitted: ?     Accepted: ?     Acceptance Rate: ?%)
  • CGI 2006 (Submitted: ?     Accepted: ?     Acceptance Rate: ?%)  
  • CGI 2005 (Submitted: 111     Accepted: 36     Acceptance Rate: 32%)
  • CGI 2004 (Submitted: 172     Accepted: 60     Acceptance Rate: 35%)
  • CGI 2003 (Submitted: ?     Accepted: ?     Acceptance Rate: ?%)

Eurographics/SIGGRAPH Graphics Hardware

  • GH 2008 (Submitted: 32     Accepted: 11    Acceptance Rate: 32%)
  •  Page maintained by Tim Rowley
  • GH 2007 (Submitted: 30     Accepted: 12     Acceptance Rate: 40%)
  • GH 2006 (Submitted: 45     Accepted: 14     Acceptance Rate: 31%)
  • GH 2005 (Submitted: 32     Accepted: 13     Acceptance Rate: 41%)
  • GH 2004 (Submitted: 43     Accepted: 14     Acceptance Rate: 33%)
  • GH 2003 (Submitted: 39     Accepted: 13     Acceptance Rate: 33%)
  • GH 2002 (Submitted: 32     Accepted: 14     Acceptance Rate: 44%)
  • GH 2001 (Submitted:   ?     Accepted:   ?     Acceptance Rate:    ?%)
  • GH 2000 (Submitted: 31     Accepted: 14     Acceptance Rate: 45%)

IEEE Visualization

  • Page maintained by Yingcai Wu

2008, 2007, 2006, 2005

  • Page maintained by Markus Hadwiger
  • 2004, 2003

IEEE Information Visualization

  • IEEE InfoVis 2008 (Submitted: ? Accepted: ? Acceptance Rate: ?%)
  • IEEE InfoVis 2007 (Submitted: 116 Accepted: 27 Acceptance Rate: 23%)
  • IEEE InfoVis 2006 (Submitted: 104 Accepted: 24 Acceptance Rate: 23%)

Eurographics/IEEE Symposium on Visualization (EuroVis) (Page maintained by Wing-Yi Chan)

ACM Symposium on Solid and Physical Modeling

  • SPM 2008 (Submitted: 79     Accepted: 25     Acceptance Rate: 32%)
  • SPM 2007 (Submitted: 94     Accepted: 25     Acceptance Rate: 27%)
  • SPM 2006 (Submitted: 56     Accepted: 21     Acceptance Rate: 38%)
  • SPM 2005 (Submitted: 60     Accepted: 22     Acceptance Rate: 37%) 

Graphics Interface

  • GI 2011 (Submitted:   74     Accepted: 29     Acceptance Rate: 39%)
  • GI 2010 (Submitted:   88     Accepted: 33     Acceptance Rate: 38%)
  • GI 2009 (Submitted:   77     Accepted: 28     Acceptance Rate: 36%)
  • GI 2008 (Submitted:   85     Accepted: 34     Acceptance Rate: 40%)
  • GI 2007 (Submitted:   89     Accepted: 43     Acceptance Rate: 48%)
  • GI 2006 (Submitted:   94     Accepted: 31     Acceptance Rate: 33%)
  • GI 2005 (Submitted: 104     Accepted: 30     Acceptance Rate: 29%)

Winter School of Computer Graphics (WSCG)

Conference on Graphics, Patterns and Images (SIBGRAPI)

ACM Multimedia

  • Page maintained by Yi-Hsuan Yang and Chia-Kai Liang
  • MM 2008 (Submitted: ?    Accepted: ?     Acceptance Rate: ?%)

IASTED Computer Graphics and Imaging (Page maintained by Roy Walmsley)

  • 2008 , 2007

IEEE VGTC Pacific Visualization Symposium (PacificVis) (Page maintained by Roy Walmsley)

  • Asian-Pacific Symposium on Information Visualization (APVIS) 2007

Computer Vision Resource (Page maintained by Gilles Mazars)

  • European Conference on Computer Vision (ECCV)
  • ECCV 2008 (RSS)
  • Asian Conference on Computer Vision (ACCV)
  • ACCV 2007 (RSS)
  • IEEE International Conference on Computer Vision (ICCV)
  • IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • CVPR 2009 (RSS)
  • CVPR 2008 (RSS)
  • CVPR 2007 (RSS)

Acceptance Rates

  • Acceptance Rates for Publications in Virtual Reality / Graphics / HCI / Visualization / Vision
  • Networking Conferences Statistics
  • Software Engineering Conferences Statistics
  • Database Conferences Statistics

Central European Seminar on Computer Graphics for students

Computer Graphics

University of California, Berkeley

  • Publications

Truth in Motion: The Unprecedented Risks and Opportunities of Extended Reality Motion Data

Vivek Nair, Louis Rosenberg, James F. O'Brien, Dawn Song. IEEE S&P

Motion tracking “telemetry” data lies at the core of nearly all modern extended reality (XR) and metaverse experiences. While generally presumed innocuous, recent studies have demonstrated that motion data actually has the potential to profile and deanonymize XR users, posing a significant threat to security and privacy in the metaverse.

Berkeley Open Extended Reality Recordings 2023 (BOXRR-23): 4.7 Million Motion Capture Recordings from 105,000 XR Users

Vivek Nair, Wenbo Guo, Rui Wang, James F. O'Brien, Louis Rosenberg, Dawn Song. IEEE VR 2024

Extended reality (XR) devices such as the Meta Quest and Apple Vision Pro have seen a recent surge in attention, with motion tracking "telemetry" data lying at the core of nearly all XR and metaverse experiences. Researchers are just beginning to understand the implications of this data for security, privacy, usability, and more, but currently lack large-scale human motion datasets to study. The BOXRR-23 dataset contains 4,717,215 motion capture recordings, voluntarily submitted by 105,852 XR device users from over 50 countries. BOXRR-23 is over 200 times larger than the largest existing motion capture research dataset and uses a new, highly efficient and purpose-built XR Open Recording (XROR) file format.

Unique Identification of 50,000+ Virtual Reality Users from Head and Hand Motion Data

Vivek Nair, Wenbo Guo, Justus Mattern, Rui Wang, James F. O'Brien, Louis Rosenberg, Dawn Song. USENIX Security 23

With the recent explosive growth of interest and investment in virtual reality (VR) and the “metaverse,” public attention has rightly shifted toward the unique security and privacy threats that these platforms may pose. While it has long been known that people reveal information about themselves via their motion, the extent to which this makes an individual globally identifiable within virtual reality has not yet been widely understood. In this study, we show that a large number of real VR users (N=55,541) can be uniquely and reliably identified across multiple sessions using just their head and hand motion relative to virtual objects. After training a classification model on 5 minutes of data per person, a user can be uniquely identified amongst the entire pool of 55,541 with 94.33% accuracy from 100 seconds of motion, and with 73.20% accuracy from just 10 seconds of motion. This work is the first to truly demonstrate the extent to which biomechanics may serve as a unique identifier in VR, on par with widely used strong biometrics like facial or fingerprint recognition.
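
The identification pipeline summarized above is, at its core, per-user classification over head and hand motion. A minimal, hypothetical sketch of that idea in Python (the feature summary and the RandomForest model are illustrative stand-ins, not the paper's exact method):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def motion_features(session):
        """Summarize a (frames x channels) array of head/hand pose values
        into a fixed-length vector (per-channel mean, std, and range)."""
        return np.concatenate([session.mean(axis=0),
                               session.std(axis=0),
                               session.max(axis=0) - session.min(axis=0)])

    def fit_identifier(train_sessions):
        """train_sessions: list of (motion_array, user_id) enrollment pairs."""
        X = np.stack([motion_features(m) for m, uid in train_sessions])
        y = np.array([uid for m, uid in train_sessions])
        return RandomForestClassifier(n_estimators=200).fit(X, y)

    def identify(model, session):
        """Predict which enrolled user produced an unseen motion session."""
        return model.predict(motion_features(session)[None, :])[0]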

Exploring the Unprecedented Privacy Risks of the Metaverse

Vivek Nair, Gonzalo Munilla Garrido, Dawn Song, James F. O'Brien. PoPETS 2023

Thirty study participants playtested an innocent-looking "escape room" game in virtual reality (VR). Behind the scenes, an adversarial program had accurately inferred over 25 personal data attributes, from anthropometrics like height and wingspan to demographics like age and gender, within just a few minutes of gameplay. As notoriously data-hungry companies become increasingly involved in VR development, this experimental scenario may soon represent a typical VR user experience. While virtual telepresence applications (and the so-called "metaverse") have recently received increased attention and investment from major tech firms, these environments remain relatively under-studied from a security and privacy standpoint. In this work, we illustrate how VR attackers can covertly ascertain dozens of personal data attributes from seemingly-anonymous users of popular metaverse applications like VRChat. These attackers can be as simple as other VR users without special privilege, and the potential scale and scope of this data collection far exceed what is feasible within traditional mobile and web applications. We aim to shed light on the unique privacy risks of the metaverse, and provide the first holistic framework for understanding intrusive data harvesting attacks in these emerging VR ecosystems.

KBody: Balanced monocular whole-body estimation

Nikolaos Zioulis, James F. O'Brien. CVFAD 2023

KBody is a method for fitting a low-dimensional body model to an image. It follows a predict-and-optimize approach, relying on data-driven model estimates for the constraints that will be used to solve for the body's parameters. Compared to other approaches, it introduces virtual joints to identify higher quality correspondences and disentangles the optimization between the pose and shape parameters to achieve a more balanced result in terms of pose and shape capturing capacity, as well as pixel alignment.

KBody: Towards general, robust, and aligned monocular whole-body estimation

Nikolaos Zioulis, James F. O'Brien. RHOBIN 2023

KBody is a method for fitting a low-dimensional body model to an image. It follows a predict-and-optimize approach, relying on data-driven model estimates for the constraints that will be used to solve for the body's parameters. Acknowledging the importance of high quality correspondences, it leverages "virtual joints" to improve fitting performance, disentangles the optimization between the pose and shape parameters, and integrates asymmetric distance fields to strike a balance in terms of pose and shape capturing capacity, as well as pixel alignment. We also show that generative model inversion offers a strong appearance prior that can be used to complete partial human images and used as a building block for generalized and robust monocular body fitting. Project page: https://klothed.github.io/KBody.

Monocular Facial Performance Capture Via Deep Expression Matching

Stephen Bailey, Jérémy Riviere, Morten Mikkelsen, James F. O'Brien. SCA 2022

Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive head-scanning equipment and camera rigs. These methods produce impressive animations that accurately capture subtle details in an actor’s performance. However, these methods are accessible only to content creators with relatively large budgets. Current methods using inexpensive recording equipment generally produce lower quality output that is unsuitable for many applications. In this paper, we present a facial performance capture method that does not require facial scans and instead animates an artist-created model using standard blend-shapes. Furthermore, our method gives artists high-level control over animations through a workflow similar to existing commercial solutions. Given a recording, our approach matches keyframes of the video with corresponding expressions from an animated library of poses. A Gaussian process model then computes the full animation by interpolating from the set of matched keyframes. Our expression-matching method computes a low-dimensional latent code from an image that represents a facial expression while factoring out the facial identity. Images depicting similar facial expressions are identified by their proximity in the latent space. In our results, we demonstrate the fidelity of our expression-matching method. We also compare animations generated with our approach to animations generated with commercially available software.

This photograph has been altered: Testing the effectiveness of image forensic labeling on news image credibility

Cuihua Shen, Mona Kasra, James F. O'Brien. Misinformation Review

Despite the ubiquity of images and videos in online news environments, much of the existing research on misinformation and its correction is solely focused on textual misinformation, and little is known about how ordinary users evaluate fake or manipulated images and the most effective ways to label and correct such falsities. We designed a visual forensic label of image authenticity, Picture-O-Meter, and tested the label’s efficacy in relation to its source and placement in an experiment with 2440 participants. Our findings demonstrate that, despite human beings’ general inability to detect manipulated images on their own, image forensic labels are an effective tool for counteracting visual misinformation.

Fast and Deep Facial Deformations

Stephen Bailey, Dalton Omens, Paul Dilorenzo, James F. O'Brien. SIGGRAPH 2020

Film-quality characters typically display highly complex and expressive facial deformation. The underlying rigs used to animate the deformations of a character’s face are often computationally expensive, requiring high-end hardware to deform the mesh at interactive rates. In this paper, we present a method using convolutional neural networks for approximating the mesh deformations of characters’ faces. For the models we tested, our approximation runs up to 17 times faster than the original facial rig while still maintaining a high level of fidelity to the original rig. We also propose an extension to the approximation for handling high-frequency deformations such as fine skin wrinkles. While the implementation of the original animation rig depends on an extensive set of proprietary libraries making it difficult to install outside of an in-house development environment, our fast approximation relies on the widely available and easily deployed TensorFlow libraries. In addition to allowing high frame rate evaluation on modest hardware and in a wide range of computing environments, the large speed increase also enables interactive inverse kinematics on the animation rig. We demonstrate our approach and its applicability through interactive character posing and real-time facial performance capture.
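
The abstract notes that the approximation is built on the TensorFlow libraries and maps rig inputs to mesh deformations. A minimal sketch of that style of approximator (the layer sizes, names, and plain dense architecture are placeholder assumptions, not the production model):

    import tensorflow as tf

    def build_rig_approximator(num_rig_controls, num_vertices):
        """Map rig control values to per-vertex offsets from the rest mesh."""
        inputs = tf.keras.Input(shape=(num_rig_controls,))
        x = tf.keras.layers.Dense(512, activation="tanh")(inputs)
        x = tf.keras.layers.Dense(512, activation="tanh")(x)
        x = tf.keras.layers.Dense(num_vertices * 3)(x)            # x, y, z offset per vertex
        outputs = tf.keras.layers.Reshape((num_vertices, 3))(x)
        return tf.keras.Model(inputs, outputs)

    # Trained on (rig controls, deformed mesh) pairs exported from the original rig,
    # then evaluated in place of the slow rig during posing or performance capture.
    model = build_rig_approximator(num_rig_controls=200, num_vertices=5000)
    model.compile(optimizer="adam", loss="mse")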

Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online

Cuihua Shen, Mona Kasra, Wenjing Pan, Grace A. Bassett, Yining Malloch, James F. O'Brien. New Media and Society

Fake or manipulated images propagated through the Web and social media have the capacity to deceive, emotionally distress, and influence public opinions and actions. Yet few studies have examined how individuals evaluate the authenticity of images that accompany online stories. This article details a 6-batch large-scale online experiment using Amazon Mechanical Turk that probes how people evaluate image credibility across online platforms. In each batch, participants were randomly assigned to 1 of 28 news-source mockups featuring a forged image, and they evaluated the credibility of the images based on several features. We found that participants’ Internet skills, photo-editing experience, and social media use were significant predictors of image credibility evaluation, while most social and heuristic cues of online credibility (e.g. source trustworthiness, bandwagon, intermediary trustworthiness) had no significant impact. Viewers’ attitude toward a depicted issue also positively influenced their credibility evaluation.

Fast and Deep Deformation Approximations

Stephen Bailey, Dave Otte, Paul Dilorenzo, James F. O'Brien. SIGGRAPH 2018

Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5x-10x. This significant savings allows us to run the complex, film-quality rigs in real-time even when using a CPU-only implementation on a mobile device.
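
The split described above, a linear part computed directly from the skeleton plus a learned nonlinear correction, can be sketched as follows (the skinning convention and the residual model interface are illustrative assumptions, not the paper's exact formulation):

    import numpy as np

    def linear_blend_skinning(rest_verts, bone_transforms, skin_weights):
        """Linear portion: blend each rest vertex by its weighted bone transforms.
        rest_verts: (V, 3), bone_transforms: (B, 4, 4), skin_weights: (V, B)."""
        homog = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)
        per_bone = np.einsum("bij,vj->bvi", bone_transforms, homog)[..., :3]
        return np.einsum("vb,bvi->vi", skin_weights, per_bone)

    def approximate_deformation(rest_verts, bone_transforms, skin_weights, residual_model):
        """Full approximation: skinning plus a learned nonlinear correction.
        residual_model is any trained regressor mapping flattened bone transforms
        to a (V, 3) array of per-vertex offsets (a placeholder callable here)."""
        linear = linear_blend_skinning(rest_verts, bone_transforms, skin_weights)
        return linear + residual_model(bone_transforms.reshape(1, -1))[0]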

Approximate svBRDF Estimation From Mobile Phone Video

Rachel A. Albert, Dorian Yao Chan, Dan B Goldman, James F. O'Brien. EGSR 2018

We describe a new technique for obtaining a spatially varying BRDF (svBRDF) of a flat object using printed fiducial markers and a cell phone capable of continuous flash video. Our homography-based video frame alignment method does not require the fiducial markers to be visible in every frame, thereby enabling us to capture larger areas at a closer distance and higher resolution than in previous work. Pixels in the resulting panorama are fit with a BRDF based on a recursive subdivision algorithm, utilizing all the light and view positions obtained from the video. We show the versatility of our method by capturing a variety of materials with both one and two camera input streams and rendering our results on 3D objects under complex illumination.
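
The homography-based alignment mentioned above can be illustrated with standard OpenCV calls; this is a generic sketch of registering flash-video frames against a reference view, with the marker detection and BRDF fitting stages omitted:

    import cv2
    import numpy as np

    def align_frame(reference_gray, frame_gray):
        """Estimate a homography that maps frame_gray into the reference image."""
        orb = cv2.ORB_create(2000)
        kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
        kp_frm, des_frm = orb.detectAndCompute(frame_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:500]
        dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        src = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H  # warp the frame into place with cv2.warpPerspective(frame, H, size)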

Seeing Is Believing: How People Fail to Identify Fake Images on the Web

Mona Kasra, Cuihua Shen, James F. O'Brien. CHI 2018

The growing ease with which digital images can be convincingly manipulated and widely distributed on the Internet makes viewers increasingly susceptible to visual misinformation and deception. In situations where ill-intentioned individuals seek to deliberately mislead and influence viewers through fake online images, the harmful consequences could be substantial. We describe an exploratory study of how individuals react, respond to, and evaluate the authenticity of images that accompany online stories in Internet-enabled communications channels. Our preliminary findings support the assertion that people perform poorly at detecting skillful image manipulation, and that they often fail to question the authenticity of images even when primed regarding image forgery through discussion. We found that viewers make credibility evaluation based mainly on non-image cues rather than the content depicted. Moreover, our study revealed that in cases where context leads to suspicion, viewers apply post-hoc analysis to support their suspicions regarding the authenticity of the image.

Simulation of Subseismic Joint and Fault Networks Using a Heuristic Mechanical Model

Paul Gillespie, Giulio Casini, Hayley Iben, James F. O'Brien. SSRD 2017

Flow simulations of fractured and faulted reservoirs require representation of subseismic structures about which subsurface data are limited. We describe a method for simulating fracture growth that is mechanically based but heuristic, allowing for realistic modelling of fracture networks with reasonable run times. The method takes a triangulated meshed surface as input, together with an initial stress field. Fractures initiate and grow based on the stress field, and the growing fractures relieve the stress in the mesh. We show that a wide range of bedding-plane joint networks can be modelled simply by varying the distribution and anisotropy of the initial stress field. The results are in good qualitative agreement with natural joint patterns. We then apply the method to a set of parallel veins and demonstrate how the variations in thickness of the veins can be represented. Lastly, we apply the method to the simulation of normal fault patterns on salt domes. We derive the stress field on the bedding surface using the horizon curvature. The modelled fault network shows both radial and concentric faults. The new method provides an effective means of modelling joint and fault networks that can be imported to the flow simulator.

Seeing Is Believing: Do People Fail to Identify Fake Images on the Web?

Mona Kasra, Cuihua Shen, James F. O'Brien. AoIR 2016

Images have historically been perceived as photographic proof of the depicted events. However, the growing ease with which digital images can be convincingly manipulated and then widely distributed on the Internet makes viewers increasingly susceptible to visual misinformation and deception. In situations where ill-intentioned individuals seek to deliberately mislead and influence viewers through forged online images, the harmful consequences could be substantial on both personal and social levels. This short paper describes preliminary work on an exploratory study of how individuals react, respond to, and evaluate the authenticity of images that accompany online stories in Internet-enabled communications channels (social networking sites, blogs, email). Our preliminary findings support the assertion that people perform poorly at detecting skillful image manipulation, and that they often fail to question the authenticity of images even when primed regarding image forgery through discussion. We found that viewers make credibility evaluation based mainly on non-image cues rather than the content depicted. Moreover, our study revealed that in cases where context leads to suspicion, viewers apply post hoc analysis to support their suspicions regarding the authenticity of the image.

Repurposing Hand Animation for Interactive Applications

Stephen Bailey, Martin Watt, James F. O'Brien. SCA 2016

In this paper we describe a method for automatically animating interactive characters based on an existing corpus of key-framed hand-animation. The method learns separate low-dimensional embeddings for subsets of the hand-animation corresponding to different semantic labels. These embeddings use the Gaussian Process Latent Variable Model to map high-dimensional rig control parameters to a three-dimensional latent space. By using a particle model to move within one of these latent spaces, the method can generate novel animations corresponding to the space's semantic label. Bridges link each pose in one latent space that is similar to a pose in another space. Animations corresponding to transitions between semantic labels are generated by creating animation paths that move through one latent space and traverse a bridge into another. We demonstrate this method by using it to interactively animate a character as it plays a simple game with the user. The character is from a previously produced animated film and the data we use for training is the data that was used to animate the character in the film. The animated motion from the film represents an enormous investment of skillful work. Our method allows this work to be repurposed and reused for interactively animating the familiar character from the film.

Interactive Detailed Cutting of Thin Sheets

Pierre-Luc Manteaux, Wei-Lun Sun, Francois Faure, Marie-Paule Cani, James F. O'Brien. MIG 2015

In this paper we propose a method for the interactive detailed cutting of deformable thin sheets. Our method builds on the ability of frame-based simulation to solve for dynamics using very few control frames while embedding highly detailed geometry - here an adaptive mesh that accurately represents the cut boundaries. Our solution relies on a non-manifold grid to compute shape functions that faithfully adapt to the topological changes occurring while cutting. New frames are dynamically inserted to describe new regions. We provide incremental mechanisms for updating simulation data, enabling us to achieve interactive rates. We illustrate our method with examples inspired by the traditional Kirigami artform.

View-Dependent Adaptive Cloth Simulation with Buckling Compensation

Woojong Koh, Rahul Narain, James F. O'Brien. TVCG 2015

This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening, while locking in extremely coarsened regions is inhibited by modifying the material model to compensate for unresolved sub-element buckling. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows. The approach produces a 2x speed-up for a single character and more than 4x for a small group as compared to view-independent adaptive simulations, and respectively 5x and 9x speed-ups as compared to non-adaptive simulations.
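
The refinement criterion described above keys the mesh resolution to visibility and apparent size in the camera's view. A small illustrative sizing function in that spirit (the visibility test and scaling constants are assumptions, not the paper's actual remeshing criteria):

    import numpy as np

    def target_edge_length(face_center, face_normal, camera_pos, camera_dir,
                           base_length, coarse_length):
        """Request fine edges where a face is visible and large on screen,
        and coarse edges where it is behind the camera or back-facing."""
        to_face = face_center - camera_pos
        distance = np.linalg.norm(to_face)
        view_dir = to_face / distance
        if np.dot(view_dir, camera_dir) <= 0.0:       # behind the camera: coarsen fully
            return coarse_length
        length = base_length * distance                # apparent size shrinks with distance
        if np.dot(face_normal, -view_dir) <= 0.0:      # back-facing: allow coarser detail
            length *= 2.0
        return min(length, coarse_length)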

Optimal Presentation of Imagery with Focus Cues on Multi-Plane Displays

Rahul Narain, Rachel A. Albert, Abdullah Bulbul, Gregory J. Ward, Marty Banks, James F. O'Brien. SIGGRAPH 2015

We present a technique for displaying three-dimensional imagery of general scenes with nearly correct focus cues on multi-plane displays. These displays present an additive combination of images at a discrete set of optical distances, allowing the viewer to focus at different distances in the simulated scene. Our proposed technique extends the capabilities of multi-plane displays to general scenes with occlusions and non-Lambertian effects by using a model of defocus in the eye of the viewer. Requiring no explicit knowledge of the scene geometry, our technique uses an optimization algorithm to compute the images to be displayed on the presentation planes so that the retinal images when accommodating to different distances match the corresponding retinal images of the input scene as closely as possible. We demonstrate the utility of the technique using imagery acquired from both synthetic and real-world scenes, and analyze the system's characteristics including bounds on achievable resolution.

Resampling Adaptive Cloth Simulations onto Fixed-Topology Meshes

George Brown, Armin Samii, James F. O'Brien, Rahul Narain. SCA 2015 Poster

We describe a method for converting an adaptively remeshed simulation of cloth into an animated mesh with fixed topology. The topology of the mesh may be specified by the user or computed automatically. In the latter case, we present a method for computing the optimal output mesh, that is, a mesh with spatially varying resolution which is fine enough to resolve all the detail present in the animation. This technique allows adaptive simulations to be easily used in applications that expect fixed-topology animated meshes.

Mirror Mirror: Crowdsourcing Better Portraits

Jun-Yan Zhu, Aseem Agarwala, Alexei A. Efros, Eli Shechtman, Jue Wang. SIGGRAPH Asia 2014

We describe a method for providing feedback on portrait expressions, and for selecting the most attractive expressions from large video/photo collections. We capture a video of a subject’s face while they are engaged in a task designed to elicit a range of positive emotions. We then use crowdsourcing to score the captured expressions for their attractiveness. We use these scores to train a model that can automatically predict attractiveness of different expressions of a given person. We also train a cross-subject model that evaluates portrait attractiveness of novel subjects and show how it can be used to automatically mine attractive photos from personal photo collections. Furthermore, we show how, with a little bit ($5-worth) of extra crowdsourcing, we can substantially improve the cross-subject model by “fine-tuning” it to a new individual using active learning. Finally, we demonstrate a training app that helps people learn how to mimic their best expressions.

Exposing Photo Manipulation from Shading and Shadows

Eric Kee, James F. O'Brien, Hany Farid. TOG 2014

We describe a method for detecting physical inconsistencies in lighting from the shading and shadows in an image. This method imposes a multitude of shading- and shadow-based constraints on the projected location of a distant point light source. The consistency of a collection of such constraints is posed as a linear programming problem. A feasible solution indicates that the combination of shading and shadows is physically consistent, while a failure to find a solution provides evidence of photo tampering.
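
The consistency test above amounts to asking whether a set of linear (half-plane) constraints on the projected light position admits any feasible point. A minimal sketch with SciPy's LP solver; the example constraints are stand-ins for the shading- and shadow-derived constraints described in the paper:

    import numpy as np
    from scipy.optimize import linprog

    def lighting_is_consistent(A, b):
        """Each row of A, b encodes one constraint A @ x <= b on the projected
        light position x = (lx, ly). The image passes the test if some light
        position satisfies every constraint simultaneously."""
        result = linprog(c=np.zeros(2), A_ub=A, b_ub=b,
                         bounds=[(-1e6, 1e6)] * 2, method="highs")
        return result.status == 0   # 0 means a feasible point was found

    # Example: three constraints that do admit a common light position.
    A = np.array([[ 1.0,  0.0],    # lx <= 10
                  [-1.0,  0.0],    # lx >= 0
                  [ 0.0, -1.0]])   # ly >= 2
    b = np.array([10.0, 0.0, -2.0])
    print(lighting_is_consistent(A, b))   # True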

Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays

Fu-Chung Huang, Gordon Wetzstein, Brian A. Barsky, Ramesh Raskar. ACM SIGGRAPH 2014

Millions of people worldwide need glasses or contact lenses to see or read properly. We introduce a computational display technology that predistorts the presented content for an observer, so that the target image is perceived without the need for eyewear. By designing optics in concert with prefiltering algorithms, the proposed display architecture achieves significantly higher resolution and contrast than prior approaches to vision-correcting image display. We demonstrate that inexpensive light field displays driven by efficient implementations of 4D prefiltering algorithms can produce the desired vision-corrected imagery, even for higher-order aberrations that are difficult to be corrected with glasses. The proposed computational display architecture is evaluated in simulation and with a low-cost prototype device.

Adaptive Tearing and Cracking of Thin Sheets

Tobias Pfaff, Rahul Narain, Juan Miguel de Joya, James F. O'Brien. SIGGRAPH 2014

This paper presents a method for adaptive fracture propagation in thin sheets. A high-quality triangle mesh is dynamically restructured to adaptively maintain detail wherever it is required by the simulation. These requirements include refining where cracks are likely to either start or advance. Refinement ensures that the stress distribution around the crack tip is well resolved, which is vital for creating highly detailed, realistic crack paths. The dynamic meshing framework allows subsequent coarsening once areas are no longer likely to produce cracking. This coarsening allows efficient simulation by reducing the total number of active nodes and by preventing the formation of thin slivers around the crack path. A local reprojection scheme and a substepping fracture process help to ensure stability and prevent a loss of plasticity during remeshing. By including bending and stretching plasticity models, the method is able to simulate a large range of materials with very different fracture behaviors.

Self-Refining Games using Player Analytics

Matt Stanton, Ben Humberston, Brandon Kase, James F. O'Brien, Kayvon Fatahalian, Adrien Treuille. SIGGRAPH 2014

Data-driven simulation demands good training data drawn from a vast space of possible simulations. While fully sampling these large spaces is infeasible, we observe that in practical applications, such as gameplay, users explore only a vanishingly small subset of the dynamical state space. In this paper we present a sampling approach that takes advantage of this observation by concentrating precomputation around the states that users are most likely to encounter. We demonstrate our technique in a prototype self-refining game whose dynamics improve with play, ultimately providing realistically rendered, rich fluid dynamics in real time on a mobile device. Our results show that our analytics-driven training approach yields lower model error and fewer visual artifacts than a heuristic training strategy.

View-Dependent Adaptive Cloth Simulation

Woojong Koh, Rahul Narain, James F. O'Brien. SCA 2014

This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows, producing a 2x speed-up for a single character and more than 4x for a small group.

AverageExplorer: Interactive Exploration and Alignment of Visual Data Collections

Jun-Yan Zhu, Yong Jae Lee, Alexei A. Efros. SIGGRAPH 2014

This paper proposes an interactive framework that allows a user to rapidly explore and visualize a large image collection using the medium of average images. Average images have been gaining popularity as means of artistic expression and data visualization, but the creation of compelling examples is a surprisingly laborious and manual process. Our interactive, real-time system provides a way to summarize large amounts of visual data by weighted average(s) of an image collection, with the weights reflecting user-indicated importance. The aim is to capture not just the mean of the distribution, but a set of modes discovered via interactive exploration. We pose this exploration in terms of a user interactively “editing” the average image using various types of strokes, brushes and warps, similar to a normal image editor, with each user interaction providing a new constraint to update the average. New weighted averages can be spawned and edited either individually or jointly. Together, these tools allow the user to simultaneously perform two fundamental operations on visual data: user-guided clustering and user-guided alignment, within the same framework. We show that our system is useful for various computer vision and graphics applications.
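
The core operation above, a weighted average over an aligned image collection with weights reflecting user-indicated importance, is easy to sketch in NumPy; the interactive editing, warping, and alignment machinery that the paper contributes is not shown:

    import numpy as np

    def weighted_average_image(images, weights):
        """images: (N, H, W, C) array of aligned images; weights: N importances.
        Returns the importance-weighted mean image."""
        w = np.asarray(weights, dtype=np.float64)
        w = w / w.sum()                                  # normalize user-assigned weights
        return np.tensordot(w, np.asarray(images, dtype=np.float64), axes=1)

    # Example: three tiny 2x2 grayscale "images", the second one emphasized by the user.
    imgs = np.zeros((3, 2, 2, 1))
    imgs[1] += 1.0
    print(weighted_average_image(imgs, [1.0, 2.0, 1.0])[..., 0])   # 0.5 everywhere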

User-Assisted Video Stabilization

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi EGSR 2014

We present a user-assisted video stabilization algorithm that is able to stabilize challenging videos when state-of-the-art automatic algorithms fail to generate a satisfactory result. Current methods do not give the user any control over the look of the final result. Users either have to accept the stabilized result as is, or discard it should the stabilization fail to generate a smooth output. Our system introduces two new modes of interaction that allow the user to improve the unsatisfactory stabilized video. First, we cluster tracks and visualize them on the warped video. The user ensures that appropriate tracks are selected by clicking on track clusters to include or exclude them. Second, the user can directly specify how regions in the output video should look by drawing quadrilaterals to select and deform parts of the frame. These user-provided deformations reduce undesirable distortions in the video. Our algorithm then computes a stabilized video using the user-selected tracks, while respecting the user-modified regions. The process of interactively removing user-identified artifacts can sometimes introduce new ones, though in most cases there is a net improvement. We demonstrate the effectiveness of our system with a variety of challenging hand-held videos.

Can 3D Shape be Estimated from Focus Cues Alone?

Rachel A. Albert, Abdullah Bulbul, Rahul Narain, James F. O'Brien, Martin S. Banks VSS 2014

Focus cues—blur and accommodation—have generally been regarded as very coarse, ordinal cues to depth. This assessment has been largely determined by the inability to display these cues correctly with conventional displays. For example, when a 3D shape is displayed with sharp rendering (i.e., pinhole camera), the expected blur variation is not present and accommodation does not have an appropriate effect on the retinal image. When a 3D shape with rendered blur (i.e., camera with non-pinhole aperture) is displayed, the viewer's accommodation does not have the appropriate retinal effect. We asked whether the information provided by correct blur and accommodation can be used to determine shape. We conducted a shape-discrimination experiment in which subjects indicated whether a hinge stimulus was concave or convex. The stimuli were presented monocularly in a unique volumetric display that allows us to present correct or nearly correct focus cues. The hinge was textured using a back-projection technique, so the stimuli contained no useful shape cues except blur and accommodation. We used four rendering methods that vary in the validity of focus information. Two single-plane methods mimicked a conventional display and two volumetric methods mimicked natural viewing. A pinhole camera model was used in one single-plane condition, so image sharpness was independent of depth. In the other single-plane condition, natural blur was rendered, thereby creating an appropriate blur gradient. In one volumetric condition, a linear blending rule was used to assign intensity to image planes. In the other volumetric condition, an optimized blending rule was used that creates a closer approximation to real-world viewing. Subject performance was at chance in the single-plane conditions. Performance improved substantially in the volumetric conditions, and was slightly better in the optimized-blending condition. This is direct evidence that 3D shape judgments can be made from the information contained in blur and accommodation alone.

Correct blur and accommodation information is a reliable cue to depth ordering.

Marina Zannoli, Rachel A. Albert, Abdullah Bulbul, Rahul Narain, James F. O'Brien, Martin Banks VSS 2014

Marshall et al. (1996) showed that blur could in principle also be used to determine depth ordering of two surfaces across an occlusion boundary from the correlation between the boundary’s blur and the blur of the two surfaces. They tested this experimentally by presenting stimuli on a conventional display and manipulating rendered blur. This approximates the retinal image formed by surfaces at different depths and an occlusion boundary, but only when the viewer accommodates to the display screen. Accommodation to other distances creates incorrect blur. Viewers' judgments of depth ordering were inconsistent: they generally judged the sharper surface as nearer than the blurrier one regardless of boundary blur. We asked if more consistent performance occurs when accommodation has the appropriate effect on the retinal image. We used a volumetric display to present nearly correct focus cues. Images were displayed on four image planes at focal distances from 1.4-3.2 diopters. Viewers indicated the nearer of two textured surfaces separated by a sinusoidal boundary. The stimuli were presented either on one plane as in previous experiments or on two planes (separated either by 0.6 or by 1.2 diopters) such that focus cues are nearly correct. Viewers first fixated and accommodated to a cross on one of the planes. The stimulus was then presented either for 200ms, too short for accommodative change, or for 4s, allowing accommodative change. Responses were much more accurate in the two-plane condition than in the single-plane condition, which shows that appropriate blur can be used to determine depth ordering across an occlusion boundary. Responses were also more accurate with the longer presentations, which shows that accommodation aids depth-order determination. Thus, correct blur and accommodation information across an occlusion boundary yields more accurate depth-ordering judgments than indicated by previous work.

The Perception of Surface Material from Disparity and Focus Cues

Martin Banks, Abdullah Bulbul, Rachel Albert, Rahul Narain, James F. O'Brien, Gregory Ward VSS 2014

The visual properties of surfaces reveal many things including a floor's cleanliness and a car's age. These judgments of material are based on the spread of light reflected from a surface. The bidirectional reflectance distribution function (BRDF) quantifies the pattern of spread and how it depends on the direction of incident light, surface shape, and surface material. Two extremes are Lambertian and mirrored surfaces, which respectively have uniform and delta-function BRDFs. Most surfaces have more complicated BRDFs and we examined many of them using the Ward model as an approximation for real surfaces. Reflections are generally view dependent. This dependence creates a difference between the binocular disparities of a reflection and the surface itself. It also creates focus differences between the reflection and physical surface. In simulations we examined how material type affects retinal images. We calculated point-spread functions (PSFs) for reflections off different materials as a function of the eye's focus state. When surface roughness is zero, the reflection PSF changes dramatically with focus state. With greater roughness, the PSF change is reduced until there is no effect of focus state with sufficiently rough surfaces. The reflection PSF also has a dramatic effect on the ability to estimate disparity. We next examined people's ability to distinguish surface markings from reflections and to identify different types of material. We used a unique volumetric display that allows us to present nearly correct focus cues along with more traditional depth cues such as disparity. With binocular viewing, we observed a clear effect of the disparity of reflections on these judgments. We also found that disparity provided less useful information with rougher materials. With monocular viewing, we observed a small but consistent effect of the reflection's focal distance on judgments of markings vs. reflections and on identification of material.

External mask based depth and light field camera

Dikpal Reddy, Jiamin Bai, Ravi Ramamoorthi ICCV 2013 Workshop

We present a method to convert a digital single-lens reflex (DSLR) camera into a high resolution consumer depth and light field camera by affixing an external aperture mask to the main lens. Compared to the existing consumer depth and light field cameras, our camera is easy to construct with minimal additional costs and our design is camera and lens agnostic. The main advantage of our design is the ease of switching between an SLR camera and a native resolution depth/light field camera. Using an external mask is an important advantage over current light field camera designs since we do not need to modify the internals of the camera or the lens. Our camera sequentially acquires the angular components of the light field of a static scene by changing the location of the aperture in the mask. A consequence of our design is that the external aperture causes heavy vignetting in the acquired images. We calibrate the mask parameters and estimate multi-view scene depth under vignetting. In addition to depth, we show light field applications such as refocusing and defocus blur at the sensor resolution.

Depth from Combining Defocus and Correspondence Using Light-Field Cameras

Michael W. Tao, Sunil Hadap, Jitendra Malik, Ravi Ramamoorthi ICCV 2013

Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available simultaneously in a single capture. Previously, defocus could be achieved only through multiple image exposures focused at different depths, while correspondence cues needed multiple exposures at different viewpoints or multiple cameras; moreover, both cues could not easily be obtained together. In this paper, we present a novel simple and principled algorithm that computes dense depth estimation by combining both defocus and correspondence depth cues. We analyze the x-u 2D epipolar image (EPI), where by convention we assume the spatial x coordinate is horizontal and the angular u coordinate is vertical (our final algorithm uses the full 4D EPI). We show that defocus depth cues are obtained by computing the horizontal (spatial) variance after vertical (angular) integration, and correspondence depth cues by computing the vertical (angular) variance. We then show how to combine the two cues into a high quality depth map, suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction.
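
The two cue responses the abstract names are easy to state in code. The sketch below is a minimal illustration only, assuming a single 2D x-u EPI that has already been sheared for one candidate depth (the full method evaluates these cues per pixel, in local windows, over a range of shears on the 4D EPI); the function name is hypothetical.

```python
import numpy as np

def epi_depth_cues(epi):
    """Cue responses on an x-u epipolar image (rows = angular u, cols = spatial x).

    Assumes `epi` is already sheared for one candidate depth, so scene points
    at that depth trace vertical lines.
    """
    refocused = epi.mean(axis=0)              # vertical (angular) integration
    defocus = refocused.var()                 # horizontal (spatial) variance: sharpness cue
    correspondence = epi.var(axis=0).mean()   # vertical (angular) variance: view-agreement cue
    return defocus, correspondence
```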

Fast Simulation of Mass-Spring Systems

Tiantian Liu, Adam Bargteil, James F. O'Brien, Ladislav Kavan SIGGRAPH Asia 2013

We describe a scheme for time integration of mass-spring systems that makes use of a solver based on block coordinate descent. This scheme provides a fast solution for classical linear (Hookean) springs. We express the widely used implicit Euler method as an energy minimization problem and introduce spring directions as auxiliary unknown variables. The system is globally linear in the node positions, and the non-linear terms involving the directions are strictly local. Because the global linear system does not depend on run-time state, the matrix can be pre-factored, allowing for very fast iterations. Our method converges to the same final result as would be obtained by solving the standard form of implicit Euler using Newton's method. Although the asymptotic convergence of Newton's method is faster than ours, the initial ratio of work to error reduction with our method is much faster than Newton's. For real-time visual applications, where speed and stability are more important than precision, we obtain visually acceptable results at a total cost per timestep that is only a fraction of that required for a single Newton iteration. When higher accuracy is required, our algorithm can be used to compute a good starting point for subsequent Newton's iteration.
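
A minimal dense sketch of the local/global alternation described above, under my own simplifying assumptions (flat 3n-vectors, dense NumPy matrices, no damping or collisions): the local step snaps each spring's auxiliary direction to its rest length, the global step solves a linear system whose matrix is constant and would be prefactored (e.g., with a sparse Cholesky) in a real implementation. Names and structure are illustrative, not the authors' code.

```python
import numpy as np

def mass_spring_step(x, v, masses, springs, f_ext, h, iterations=10):
    """One implicit-Euler step via block coordinate descent (illustrative sketch).

    x, v, f_ext: flat arrays of length 3n; springs: list of (i, j, rest, stiffness).
    """
    n = len(masses)
    M = np.kron(np.diag(masses), np.eye(3))                 # 3n x 3n mass matrix
    L = np.zeros((3 * n, 3 * n))
    J = np.zeros((3 * n, 3 * len(springs)))
    for s, (i, j, r, k) in enumerate(springs):
        Ai = np.zeros(n); Ai[i], Ai[j] = 1.0, -1.0
        Si = np.zeros(len(springs)); Si[s] = 1.0
        L += k * np.kron(np.outer(Ai, Ai), np.eye(3))
        J += k * np.kron(np.outer(Ai, Si), np.eye(3))
    A = M / h**2 + L                                        # constant: prefactor in practice
    y = x + h * v                                           # inertial prediction
    b0 = M @ y / h**2 + f_ext
    x_new = y.copy()
    for _ in range(iterations):
        # Local step: fix node positions, project each spring direction to rest length.
        d = np.zeros(3 * len(springs))
        for s, (i, j, r, k) in enumerate(springs):
            e = x_new[3*i:3*i+3] - x_new[3*j:3*j+3]
            d[3*s:3*s+3] = r * e / np.linalg.norm(e)
        # Global step: fix spring directions, solve the (constant-matrix) linear system.
        x_new = np.linalg.solve(A, b0 + J @ d)
    return x_new, (x_new - x) / h
```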

Exposing Photo Manipulation with Inconsistent Shadows

Eric Kee, James F. O'Brien, Hany Farid TOG 2013

We describe a geometric technique to detect physically inconsistent arrangements of shadows in an image. This technique combines multiple constraints from cast and attached shadows to constrain the projected location of a point light source. The consistency of the shadows is posed as a linear programming problem. A feasible solution indicates that the collection of shadows is physically plausible, while a failure to find a solution provides evidence of photo tampering.
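
The feasibility test itself is a standard linear program. As a hedged sketch only: assume each shadow constraint has already been reduced to a half-plane a · p <= b on the 2D projected light position p (how those half-planes are built is the paper's contribution and is not shown here); then consistency is just "is the intersection non-empty", which an off-the-shelf LP solver answers.

```python
import numpy as np
from scipy.optimize import linprog

def shadows_consistent(A, b):
    """Return True if some projected light position satisfies all constraints.

    A: (m, 2) array, b: (m,) array, one half-plane a . p <= b per shadow
    constraint (assumed precomputed elsewhere). Zero objective => pure
    feasibility test.
    """
    res = linprog(c=np.zeros(2), A_ub=A, b_ub=b,
                  bounds=[(None, None), (None, None)], method="highs")
    return bool(res.success)
```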

Folding and Crumpling Adaptive Sheets

Rahul Narain, Tobias Pfaff, James F. O'Brien SIGGRAPH 2013

We present a technique for simulating plastic deformation in sheets of thin materials, such as crumpled paper, dented metal, and wrinkled cloth. Our simulation uses a framework of adaptive mesh refinement to dynamically align mesh edges with folds and creases. This framework allows efficient modeling of sharp features and avoids bend locking that would be otherwise caused by stiff in-plane behavior. By using an explicit plastic embedding space we prevent remeshing from causing shape diffusion. We include several examples demonstrating that the resulting method realistically simulates the behavior of thin sheets as they fold and crumple.

Near-exhaustive Precomputation of Secondary Cloth Effects

Doyub Kim, Woojong Koh, Rahul Narain, Kayvon Fatahalian, Adrien Treuille, James F. O'Brien SIGGRAPH 2013

The central argument against data-driven methods in computer graphics rests on the curse of dimensionality: it is intractable to precompute "everything" about a complex space. In this paper, we challenge that assumption by using several thousand CPU-hours to perform a massive exploration of the space of secondary clothing effects on a character animated through a large motion graph. Our system continually explores the phase space of cloth dynamics, incrementally constructing a secondary cloth motion graph that captures the dynamics of the system. We find that it is possible to sample the dynamical space to a low visual error tolerance and that secondary motion graphs containing tens of gigabytes of raw mesh data can be compressed down to only tens of megabytes. These results allow us to capture the effect of high-resolution, off-line cloth simulation for a rich space of character motion and deliver it efficiently as part of an interactive application.

Axis-Aligned Filtering for Interactive Physically-Based Diffuse Indirect Lighting

Soham Uday Mehta, Brandon Wang, Ravi Ramamoorthi, Fredo Durand SIGGRAPH 2013

We introduce an algorithm for interactive rendering of physically-based global illumination, based on a novel frequency analysis of indirect lighting. Our method combines adaptive sampling by Monte Carlo ray or path tracing, using a standard GPU-accelerated raytracer, with real-time reconstruction of the resulting noisy images. Our theoretical analysis assumes diffuse indirect lighting, with general Lambertian and specular receivers. In practice, we demonstrate accurate interactive global illumination with diffuse and moderately glossy objects, at 1-3 fps. We show mathematically that indirect illumination is a structured signal in the Fourier domain, with inherent band-limiting due to the BRDF and geometry terms. We extend previous work on sheared and axis-aligned filtering for motion blur and shadows, to develop an image-space filtering method for interreflections. Our method enables 5-8 times reduced sampling rates and wall clock times, and converges to ground truth as more samples are added. To develop our theory, we overcome important technical challenges - unlike previous work, there is no light source to serve as a band-limit in indirect lighting, and we also consider non-parallel geometry of receiver and reflecting surfaces, without first-order approximations.

Type-Constrained Direct Fitting of Quadric Surfaces

James Andrews, Carlo H. Séquin CAD 2013

We present a catalog of type-specific, direct quadric fitting methods: Given a selection of a point cloud or triangle mesh, and a desired quadric type (e.g. cone, ellipsoid, paraboloid, etc.), our methods recover a best-fit surface of the given type to the given data. Type-specific quadric fitting methods are scattered throughout the literature; here we present a thorough, practical collection in one place. We add new methods to handle neglected quadric types, such as non-circular cones and general rotationally symmetric quadrics. We improve upon existing methods for ellipsoid- and hyperboloid-specific fitting. Our catalog handles a wide range of quadric types with just two high-level fitting strategies, making it simpler to understand and implement.
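
For context, the generic, type-agnostic baseline that such catalogs improve on is the algebraic least-squares quadric fit: stack the ten quadric monomials per point and take the smallest singular direction. The sketch below shows only that baseline, under my own naming; it does not constrain the result to a particular quadric type, which is precisely what the paper's type-specific methods add.

```python
import numpy as np

def fit_general_quadric(points):
    """Unconstrained algebraic quadric fit (baseline, not the type-specific methods).

    Returns coefficients c of [x^2, y^2, z^2, xy, xz, yz, x, y, z, 1]
    minimizing ||D c|| subject to ||c|| = 1.
    """
    p = np.asarray(points, dtype=np.float64)
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[-1]          # right singular vector with the smallest singular value
```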

Automatic Cinemagraph Portraits

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi EGSR 2013

Cinemagraphs are a popular new type of visual media that lie in-between photos and video; some parts of the frame are animated and loop seamlessly, while other parts of the frame remain completely still. Cinemagraphs are especially effective for portraits because they capture the nuances of our dynamic facial expressions. We present a completely automatic algorithm for generating portrait cinemagraphs from a short video captured with a hand-held camera. Our algorithm uses a combination of face tracking and point tracking to segment face motions into two classes: gross, large-scale motions that should be removed from the video, and dynamic facial expressions that should be preserved. This segmentation informs a spatially-varying warp that removes the large-scale motion, and a graph-cut segmentation of the frame into dynamic and still regions that preserves the finer-scale facial expression motions. We demonstrate the success of our method with a variety of results and a comparison to previous work.

Sharpening Out of Focus Images using High-Frequency Transfer

Michael Tao, Jitendra Malik, Ravi Ramamoorthi EG 2013

Focus misses are common in image capture, such as when the camera or the subject moves rapidly in sports and macro photography. One option to sharpen focus-missed photographs is through single image deconvolution, but high frequency data cannot be fully recovered; therefore, artifacts such as ringing and amplified noise become apparent. We propose a new method that uses assisting, similar but different, sharp image(s) provided by the user (such as multiple images of the same subject in different positions captured using a burst of photographs). Our first contribution is to theoretically analyze the errors in three sources of data—a slightly sharpened original input image that we call the target, single image deconvolution with an aggressive inverse filter, and warped assisting image(s) registered using optical flow. We show that these three sources have different error characteristics, depending on image location and frequency band (for example, aggressive deconvolution is more accurate in high-frequency regions like edges). Next, we describe a practical method to compute these errors, given we have no ground truth and cannot easily work in the Fourier domain. Finally, we select the best source of data for a given pixel and scale in the Laplacian pyramid. We accurately transfer high-frequency data to the input, while minimizing artifacts. We demonstrate sharpened results on out-of-focus images in macro, sports, portrait and wildlife photography.

Simulating Liquids and Solid-Liquid Interactions with Lagrangian Meshes

Pascal Clausen, Martin Wicke, Jonathan Shewchuk, James F. O'Brien TOG 2013

This paper describes a Lagrangian finite element method that simulates the behavior of liquids and solids in a unified framework. Local mesh improvement operations maintain a high-quality tetrahedral discretization even as the mesh is advected by fluid flow. We conserve volume and momentum, locally and globally, by assigning each element an independent rest volume and adjusting it to correct for deviations during remeshing and collisions. Incompressibility is enforced with per-node pressure values, and extra degrees of freedom are selectively inserted to prevent pressure locking. Topological changes in the domain are explicitly treated with local mesh splitting and merging. Our method models surface tension with an implicit formulation based on surface energies computed on the boundary of the volume mesh. With this method we can model elastic, plastic, and liquid materials in a single mesh, with no need for explicit coupling. We also model heat diffusion and thermoelastic effects, which allow us to simulate phase changes. We demonstrate these capabilities in several fluid simulations at scales from millimeters to meters, including simulations of melting caused by external or thermoelastic heating.

Generalized, Basis-Independent Kinematic Surface Fitting

James Andrews, Carlo H. Séquin JCAD 2013

Kinematic surfaces form a general class of surfaces, including surfaces of revolution, helices, spirals, and more. Standard methods for fitting such surfaces are either specialized to a small subset of these surface types (either focusing exclusively on cylinders or exclusively on surfaces of revolution) or otherwise are basis-dependent (leading to scale-dependent results). Previous work has suggested re-scaling data to a fixed size bounding box to avoid the basis-dependence issues. We show that this method fails on some simple, common cases such as a box or a cone with small noise. We propose instead adapting a well-studied approximate maximum-likelihood method to the kinematic surface fitting problem, which solves the basis-dependence issue. Because this technique is not designed for a specific type of kinematic surface, it also opens the door to the possibility of new variants of kinematic surfaces, such as affinely-scaled surfaces of revolution.

Interactive Albedo Editing in Path-Traced Volumetric Materials

Miloš Hašan, Ravi Ramamoorthi TOG 2013

Materials such as clothing or carpets, or complex assemblies of small leaves, flower petals or mosses, do not fit well into either BRDF or BSSRDF models. Their appearance is a complex combination of reflection, transmission, scattering, shadowing and inter-reflection. This complexity can be handled by simulating the full volumetric light transport within these materials by Monte Carlo algorithms, but there is no easy way to construct the necessary distributions of local material properties that would lead to the desired global appearance. In this paper, we consider one way to alleviate the problem: an editing algorithm that enables a material designer to set the local (single-scattering) albedo coefficients interactively, and see an immediate update of the emergent appearance in the image. This is a difficult problem, since the function from materials to pixel values is neither linear nor low-order polynomial. We combine the following two ideas to achieve high-dimensional heterogeneous edits: precomputing the homogeneous mapping of albedo to intensity, and a large Jacobian matrix, which encodes the derivatives of each image pixel with respect to each albedo coefficient. Combining these two datasets leads to an interactive editing algorithm with a very good visual match to a fully path-traced ground truth.

Gloss Perception in Painterly and Cartoon Rendering

Adrien Bousseau, James P. O'Shea, Frédo Durand, Ravi Ramamoorthi, Maneesh Agrawala TOG 2013

Depictions with traditional media such as painting and drawing represent scene content in a stylized manner. It is unclear however how well stylized images depict scene properties like shape, material and lighting. In this paper, we describe the first study of material perception in stylized images (specifically painting and cartoon) and use non-photorealistic rendering algorithms to evaluate how such stylization alters the perception of gloss. Our study reveals a compression of the range of representable gloss in stylized images so that shiny materials appear more diffuse in painterly rendering, while diffuse materials appear shinier in cartoon images. From our measurements we estimate the function that maps realistic gloss parameters to their perception in a stylized rendering. This mapping allows users of NPR algorithms to predict the perception of gloss in their images. The inverse of this function exaggerates gloss properties to make the contrast between materials in a stylized image more faithful. We have conducted our experiment both in a lab and on a crowdsourcing website. While crowdsourcing allows us to quickly design our pilot study, a lab experiment provides more control on how subjects perform the task. We provide a detailed comparison of the results obtained with the two approaches and discuss their advantages and drawbacks for studies like ours.

Axis-Aligned Filtering for Interactive Sampled Soft Shadows

Soham Mehta, Brandon Wang, Ravi Ramamoorthi SIGGRAPH Asia 2012

We develop a simple and efficient method for soft shadows from planar area light sources, based on explicit occlusion calculation by raytracing, followed by adaptive image-space filtering. Since the method is based on Monte Carlo sampling, it is accurate. Since the filtering is in image-space, it adds minimal overhead and can be performed at real-time frame rates. We obtain interactive speeds, using the Optix GPU raytracing framework. Our technical approach derives from recent work on frequency analysis and sheared pixel-light filtering for offline soft shadows. While sample counts can be reduced dramatically, the sheared filtering step is slow, adding minutes of overhead. We develop the theoretical analysis to instead consider axis-aligned filtering, deriving the sampling rates and filter sizes. We also show how the filter size can be reduced as the number of samples increases, ensuring a consistent result that converges to ground truth as in standard Monte Carlo rendering.

Adaptive Anisotropic Remeshing for Cloth Simulation

Rahul Narain, Armin Samii, James F. O'Brien SIGGRAPH Asia 2012

We present a technique for cloth simulation that dynamically refines and coarsens triangle meshes so that they automatically conform to the geometric and dynamic detail of the simulated cloth. Our technique produces anisotropic meshes that adapt to surface curvature and velocity gradients, allowing efficient modeling of wrinkles and waves. By anticipating buckling and wrinkle formation, our technique preserves fine-scale dynamic behavior. Our algorithm for adaptive anisotropic remeshing is simple to implement, takes up only a small fraction of the total simulation time, and provides substantial computational speedup without compromising the fidelity of the simulation. We also introduce a novel technique for strain limiting by posing it as a nonlinear optimization problem. This formulation works for arbitrary non-uniform and anisotropic meshes, and converges more rapidly than existing solvers based on Jacobi or Gauss-Seidel iterations.

Correcting for Optical Aberrations using Multilayer Displays

Fu-Chung Huang, Douglas Lanman, Brian A. Barsky, Ramesh Raskar SIGGRAPH Asia 2012

Optical aberrations of the human eye are currently corrected using eyeglasses, contact lenses, or surgery. We describe a fourth option: modifying the composition of displayed content such that the perceived image appears in focus, after passing through an eye with known optical defects. Prior approaches synthesize pre-filtered images by deconvolving the content by the point spread function of the aberrated eye. Such methods have not led to practical applications, due to severely reduced contrast and ringing artifacts. We address these limitations by introducing multilayer pre-filtering, implemented using stacks of semi-transparent, light-emitting layers. By optimizing the layer positions and the partition of spatial frequencies between layers, contrast is improved and ringing artifacts are eliminated. We assess design constraints for multilayer displays; autostereoscopic light field displays are identified as a preferred, thin form factor architecture, allowing synthetic layers to be displaced in response to viewer movement and refractive errors. We assess the benefits of multilayer pre-filtering versus prior light field pre-distortion methods, showing pre-filtering works within the constraints of current display resolutions. We conclude by analyzing benefits and limitations using a prototype multilayer LCD.

Frequency-Space Decomposition and Acquisition of Light Transport under Spatially Varying Illumination

Dikpal Reddy, Ravi Ramamoorthi, Brian Curless ECCV 2012

We show that, under spatially varying illumination, the light transport of diffuse scenes can be decomposed into direct, near-range (subsurface scattering and local inter-reflections) and far-range transports (diffuse inter-reflections). We show that these three component transports are redundant either in the spatial or the frequency domain and can be separated using appropriate illumination patterns. We propose a novel, efficient method to sequentially separate and acquire the component transports. First, we acquire the direct transport by extending the direct-global separation technique from floodlit images to full transport matrices. Next, we separate and acquire the near-range transport by illuminating patterns sampled uniformly in the frequency domain. Finally, we acquire the far-range transport by illuminating low-frequency patterns. We show that theoretically, our acquisition method achieves the lower bound our model places on the required number of patterns. We quantify the savings in number of patterns over the brute force approach. We validate our observations and acquisition method with rendered and real examples throughout.

On Differential Photometric Reconstruction for Unknown, Isotropic BRDFs

Manmohan Chandraker, Jiamin Bai, Ravi Ramamoorthi PAMI 2012

This paper presents a comprehensive theory of photometric surface reconstruction from image derivatives, in the presence of a general, unknown isotropic BRDF. We derive precise topological classes up to which the surface may be determined and specify exact priors for a full geometric reconstruction. These results are the culmination of a series of fundamental observations. First, we exploit the linearity of chain rule differentiation to discover photometric invariants that relate image derivatives to the surface geometry, regardless of the form of isotropic BRDF. For the problem of shape from shading, we show that a reconstruction may be performed up to isocontours of constant magnitude of the gradient. For the problem of photometric stereo, we show that just two measurements of spatial and temporal image derivatives, from unknown light directions on a circle, suffice to recover surface information from the photometric invariant. Surprisingly, the form of the invariant bears a striking resemblance to optical flow, however, it does not suffer from the aperture problem. This photometric flow is shown to determine the surface up to isocontours of constant magnitude of the surface gradient, as well as isocontours of constant depth. Further, we prove that specification of the surface normal at a single point completely determines the surface depth from these isocontours. In addition, we propose practical algorithms that require additional initial or boundary information, but recover depth from lower order derivatives. Our theoretical results are illustrated with several examples on synthetic and real data.

Selectively De-Animating Video

Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi SIGGRAPH 2012

We present a semi-automated technique for selectively de-animating video to remove the large-scale motions of one or more objects so that other motions are easier to see. The user draws strokes to indicate the regions of the video that should be immobilized, and our algorithm warps the video to remove the large-scale motion of these regions while leaving finer-scale, relative motions intact. However, such warps may introduce unnatural motions in previously motionless areas, such as background regions. We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner. Our technique enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs (photos that include looping motions in some regions), and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation. We demonstrate the success of our technique with a number of motion visualizations, cinemagraphs and video editing examples created from a variety of short input videos, as well as visual and numerical comparison to previous techniques.

Updated Sparse Cholesky Factors for Corotational Elastodynamics

Florian Hecht, Yeon Jin Lee, Jonathan Shewchuk, James F. O'Brien TOG 2012

We present warp-canceling corotation, a nonlinear finite element formulation for elastodynamic simulation that achieves fast performance by making only partial or delayed changes to the simulation’s linearized system matrices. Coupled with an algorithm for incremental updates to a sparse Cholesky factorization, the method realizes the stability and scalability of a sparse direct method without the need for expensive refactorization at each time step. This finite element formulation combines the widely used corotational method with stiffness warping so that changes in the per-element rotations are initially approximated by inexpensive per-node rotations. When the errors of this approximation grow too large, the per-element rotations are selectively corrected by updating parts of the matrix chosen according to locally measured errors. These changes to the system matrix are propagated to its Cholesky factor by incremental updates that are much faster than refactoring the matrix from scratch. A nested dissection ordering of the system matrix gives rise to a hierarchical factorization in which changes to the system matrix cause limited, well-structured changes to the Cholesky factor. We show examples of simulations that demonstrate that the proposed formulation produces results that are visually comparable to those produced by a standard corotational formulation. Because our method requires computing only partial updates of the Cholesky factor, it is substantially faster than full refactorization and outperforms widely used iterative methods such as preconditioned conjugate gradients. Our method supports a controlled trade-off between accuracy and speed, and unlike most iterative methods its performance does not slow for stiffer materials but rather it actually improves.

Compressive Structured Light for Recovering Inhomogeneous Participating Media

Jinwei Gu, Shree K. Nayar, Eitan Grinspun, Peter N. Belhumeur, Ravi Ramamoorthi PAMI 2012

We propose a new method named compressive structured light for recovering inhomogeneous participating media. Whereas conventional structured light methods emit coded light patterns onto the surface of an opaque object to establish correspondence for triangulation, compressive structured light projects patterns into a volume of participating medium to produce images which are integral measurements of the volume density along the line of sight. For a typical participating medium encountered in the real world, the integral nature of the acquired images enables the use of compressive sensing techniques that can recover the entire volume density from only a few measurements. This makes the acquisition process more efficient and enables reconstruction of dynamic volumetric phenomena. Moreover, our method requires the projection of multiplexed coded illumination, which has the added advantage of increasing the signal-to-noise ratio of the acquisition. Finally, we propose an iterative algorithm to correct for the attenuation of the participating medium during the reconstruction process. We show the effectiveness of our method with simulations as well as experiments on the volumetric recovery of multiple translucent layers, 3D point clouds etched in glass, and the dynamic process of milk drops dissolving in water.

Analytic Tangent Irradiance Environment Maps for Anisotropic Surfaces

Soham Mehta, Ravi Ramamoorthi, Mark Meyer, Christophe Hery EGSR 2012

We extend spherical harmonic irradiance maps to anisotropic surfaces, replacing Lambertian reflectance with the diffuse term of the popular Kajiya-Kay model. We show that there is a direct analogy, with the surface normal replaced by the tangent. Our main contribution is an analytic formula for the diffuse Kajiya-Kay BRDF in terms of spherical harmonics; this derivation is more complicated than for the standard diffuse lobe. We show that the terms decay even more rapidly than for Lambertian reflectance, going as l^-3, where l is the spherical harmonic order, and with only 6 terms (l = 0 and l = 2) capturing 99.8% of the energy. Existing code for irradiance environment maps can be trivially adapted for real-time rendering with tangent irradiance maps. We also demonstrate an application to offline rendering of the diffuse component of fibers, using our formula as a control variate for Monte Carlo sampling.

Interactive Inverse 3D Modeling

James Andrews, Hailin Jin, Carlo H. Séquin CAD 2012

“Interactive Inverse 3D Modeling” is a user-guided approach to shape construction and redesign that extracts well-structured, parameterized, procedural descriptions from unstructured, hierarchically flat input data, such as point clouds, boundary representation meshes, or even multiple pictorial views of a given inspirational prototype. This approach combines traditional “forward” 3D modeling tools with a system of user-guided extraction modules and optimization routines. With a few cursor strokes users can express their preferences of the type of modeling primitives to be used in a particular area of the given prototype to be approximated, and they can also select the degree of parameterization associated with each modeling routine. The results are then pliable, structured descriptions that are well suited to implement the particular design modifications intended by the user.

Importance Sampling of Reflection from Hair Fibers

Christophe Hery, Ravi Ramamoorthi JCGT 2012

Hair and fur are increasingly important visual features in production rendering, and physically-based light scattering models are now commonly used. In this paper, we enable efficient Monte Carlo rendering of specular reflections from hair fibers. We describe a simple and practical importance sampling strategy for the reflection term in the Marschner hair model. Our implementation enforces approximate energy conservation, including at grazing angles by modifying the samples appropriately, and includes a Box-Muller transform to effectively sample a Gaussian lobe. These ideas are simple to implement, but have not been commonly reported in standard references. Moreover, we have found them to have broader applicability in sampling surface specular BRDFs. Our method has been widely used in production for more than a year, and complete pseudocode is provided.
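
The Box-Muller piece mentioned in the abstract is easy to illustrate. The sketch below only shows drawing a longitudinal angle from a Gaussian lobe centered on the specular direction, with a uniform azimuth; the function name and parameters are my own, and the paper's grazing-angle energy-conservation adjustments and the rest of the Marschner-model machinery are omitted (the paper itself provides complete pseudocode).

```python
import math, random

def sample_gaussian_lobe(theta_spec, beta):
    """Box-Muller draw of a longitudinal angle around the specular cone (sketch).

    theta_spec: longitudinal angle of the ideal specular reflection.
    beta: longitudinal roughness (standard deviation of the Gaussian lobe).
    """
    u1 = max(random.random(), 1e-6)          # avoid log(0)
    u2 = random.random()
    g = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)  # standard normal
    theta = theta_spec + beta * g            # Gaussian longitudinal lobe
    phi = 2.0 * math.pi * random.random()    # uniform azimuth (simplification)
    return theta, phi
```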

SimpleFlow: A Non-iterative, Sublinear Optical Flow Algorithm

Michael Tao, Jiamin Bai, Pushmeet Kohli, Sylvain Paris EG 2012

Optical flow is a critical component of video editing applications, e.g. for tasks such as object tracking, segmentation, and selection. In this paper, we propose an optical flow algorithm called SimpleFlow whose running times increase sublinearly in the number of pixels. Central to our approach is a probabilistic representation of the motion flow that is computed using only local evidence and without resorting to global optimization. To estimate the flow in image regions where the motion is smooth, we use a sparse set of samples only, thereby avoiding the expensive computation inherent in traditional dense algorithms. We show that our results can be used as is for a variety of video editing tasks. For applications where accuracy is paramount, we use our result to bootstrap a global optimization. This significantly reduces the running times of such methods without sacrificing accuracy. We also demonstrate that the SimpleFlow algorithm can process HD and 4K footage in reasonable times.

Exposing Digital Forgeries in Ballistic Motion

Valentina Conotter, James F. O'Brien, Hany Farid TIFS 2012

We describe a geometric technique to detect physically implausible trajectories of objects in video sequences. This technique explicitly models the three-dimensional ballistic motion of objects in free-flight and the two-dimensional projection of the trajectory into the image plane of a static or moving camera. Deviations from this model provide evidence of manipulation. The technique assumes that the object's trajectory is substantially influenced only by gravity, that the image of the object's center of mass can be determined from the images, and requires that any camera motion can be estimated from background elements. The computational requirements of the algorithm are modest, and any detected inconsistencies can be illustrated in an intuitive, geometric fashion. We demonstrate the efficacy of this analysis on videos of our own creation and on videos obtained from video-sharing websites.
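
As a toy illustration of the idea of testing tracked positions against a free-flight model, the sketch below fits a constant-acceleration path to 2D center-of-mass tracks and reports the residual; this assumes a static camera and an approximately affine projection, which is my simplification, whereas the paper's model fits a full 3D projectile projected through a static or moving perspective camera.

```python
import numpy as np

def ballistic_residual(t, pts):
    """RMS residual of a constant-acceleration fit to tracked image positions.

    t: (N,) times; pts: (N, 2) image positions of the object's center of mass.
    A large residual suggests the observed path is not plausible free flight
    (under the simplified static/affine-camera assumption stated above).
    """
    t = np.asarray(t, dtype=np.float64)
    pts = np.asarray(pts, dtype=np.float64)
    A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])   # p0 + v t + 0.5 a t^2, per axis
    coeffs, *_ = np.linalg.lstsq(A, pts, rcond=None)
    resid = pts - A @ coeffs
    return float(np.sqrt(np.mean(resid**2)))
```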

Real-Time Rendering of Rough Refraction

Charles de Rousiers, Adrien Bousseau, Kartic Subr, Nicolas Holzschuch, Ravi Ramamoorthi TVCG 2012

We present an algorithm to render objects made of transparent materials with rough surfaces in real-time, under all-frequency distant illumination. Rough surfaces cause wide scattering as light enters and exits objects, which significantly complicates the rendering of such materials. We present two contributions to approximate the successive scattering events at interfaces, due to rough refraction: First, an approximation of the Bidirectional Transmittance Distribution Function (BTDF), using spherical Gaussians, suitable for real-time estimation of environment lighting using pre-convolution; second, a combination of cone tracing and macro-geometry filtering to efficiently integrate the scattered rays at the exiting interface of the object. We demonstrate the quality of our approximation by comparison against stochastic ray-tracing. Furthermore we propose two extensions to our method for supporting spatially varying roughness on object surfaces and local lighting for thin objects.

A Theory of Monte Carlo Visibility Sampling

Ravi Ramamoorthi, John Anderson, Mark Meyer, Derek Nowrouzezahrai TOG 2012

Soft shadows from area lights are one of the most crucial effects in high quality and production rendering, but Monte Carlo sampling of visibility is often the main source of noise in rendered images. Indeed, it is common to use deterministic uniform sampling for the smoother shading effects in direct lighting, so that all of the Monte-Carlo noise arises from visibility sampling alone. In this paper, we analyze theoretically and empirically, using both statistical and Fourier methods, the effectiveness of different non-adaptive Monte Carlo sampling patterns for rendering soft shadows. We start with a single image scanline and a linear light source, and gradually consider more complex visibility functions at a pixel. We show analytically that the lowest expected variance is in fact achieved by uniform sampling (albeit at the cost of visual banding artifacts). Surprisingly, we show that for two or more discontinuities in the visibility function, a comparable error to uniform sampling is obtained by “uniform jitter” sampling, where a constant jitter is applied to all samples in a uniform pattern (as opposed to jittering each stratum as in standard stratified sampling). The variance can be reduced by up to a factor of two, compared to stratified or quasi-Monte Carlo techniques, without the banding in uniform sampling. We augment our statistical analysis with a novel 2D Fourier analysis across the pixel-light space. This allows us to characterize the banding frequencies in uniform sampling, and gives insights into the behavior of uniform jitter and stratified sampling. We next extend these results to planar area light sources. We show that the best sampling method can vary, depending on the type of light source (circular, Gaussian or square/rectangular). The correlation of adjacent “light scanlines” in square light sources can reduce the effectiveness of uniform jitter sampling, while the smoother shape of circular and Gaussian-modulated sources preserves its benefits—these findings are also exposed through our frequency analysis. In practical terms, the theory in this paper provides guidelines for selecting visibility sampling strategies, which can reduce the number of shadow samples by 20–40%, with simple modifications to existing rendering code.
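
The three 1D sampling patterns the abstract compares differ only in how the per-sample offset is chosen; the minimal sketch below generates each on the unit interval (a stand-in for a linear light), with function names of my own choosing.

```python
import numpy as np

def uniform(n):
    """Deterministic, evenly spaced samples (lowest variance, but can band)."""
    return (np.arange(n) + 0.5) / n

def uniform_jitter(n, rng):
    """One shared random offset applied to all samples of a uniform pattern."""
    return (np.arange(n) + rng.random()) / n

def stratified(n, rng):
    """Independent jitter within each stratum (standard stratified sampling)."""
    return (np.arange(n) + rng.random(n)) / n

# Example: rng = np.random.default_rng(0); stratified(16, rng)
```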

Exposing Photo Manipulation with Inconsistent Reflections

James F. O'Brien, Hany Farid TOG 2012

The advent of sophisticated photo editing software has made it increasingly easier to manipulate digital images. Often visual inspection cannot definitively distinguish the resulting forgeries from authentic photographs. In response, forensic techniques have emerged to detect geometric or statistical inconsistencies that result from specific forms of photo manipulation. In this paper we describe a new forensic technique that focuses on geometric inconsistencies that arise when fake reflections are inserted into a photograph or when a photograph containing reflections is manipulated. This analysis employs basic rules of reflective geometry and linear perspective projection, makes minimal assumptions about the scene geometry, and only requires the user to identify corresponding points on an object and its reflection. The analysis is also insensitive to common image editing operations such as resampling, color manipulations, and lossy compression. We demonstrate this technique with both visually plausible forgeries of our own creation and commercially produced forgeries.
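
One way to operationalize the kind of constraint the abstract describes: under a planar reflecting surface and linear perspective, the image lines joining each object point to its reflection should converge toward a common vanishing point. The sketch below is an illustrative least-squares consistency check under that assumption, not the paper's exact formulation; the function name and residual threshold policy are mine.

```python
import numpy as np

def common_vanishing_point(pairs):
    """Least-squares intersection of lines joining points and their reflections.

    pairs: list of ((x1, y1), (x2, y2)) image points (object point, reflection).
    Returns the best common point and the RMS distance of the lines from it;
    a large residual is evidence of an inconsistent (possibly faked) reflection.
    Note: nearly parallel lines (vanishing point near infinity) make the solve
    ill-conditioned.
    """
    A, b, terms = np.zeros((2, 2)), np.zeros(2), []
    for p1, p2 in pairs:
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        d = (p2 - p1) / np.linalg.norm(p2 - p1)
        P = np.eye(2) - np.outer(d, d)        # projector onto the line's normal space
        A += P; b += P @ p1
        terms.append((P, p1))
    q = np.linalg.solve(A, b)                 # minimizes sum of squared line distances
    rms = np.sqrt(np.mean([float((q - p) @ P @ (q - p)) for P, p in terms]))
    return q, rms
```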

Practical Filtering for Efficient Ray-Traced Directional Occlusion

Kevin Egan, Fredo Durand, Ravi Ramamoorthi SIGGRAPH Asia 2011

Ambient occlusion and directional (spherical harmonic) occlusion have become a staple of production rendering because they capture many visually important qualities of global illumination while being reusable across multiple artistic lighting iterations. However, ray-traced solutions for hemispherical occlusion require many rays per shading point (typically 256-1024) due to the full hemispherical angular domain. Moreover, each ray can be expensive in scenes with moderate to high geometric complexity. However, many nearby rays sample similar areas, and the final occlusion result is often low frequency. We give a frequency analysis of shadow light fields using distant illumination with a general BRDF and normal mapping, allowing us to share ray information even among complex receivers. We also present a new rotationally-invariant filter that easily handles samples spread over a large angular domain. Our method can deliver a 4x speed-up for scenes that are computationally bound by ray tracing costs.

What An Image Reveals About Material Reflectance

Manmohan Chandraker, Ravi Ramamoorthi ICCV 2011

We derive precise conditions under which material reflectance properties may be estimated from a single image of a homogeneous curved surface (canonically a sphere), lit by a directional source. Based on the observation that light is reflected along certain (a priori unknown) preferred directions such as the half-angle, we propose a semiparametric BRDF abstraction that lies between purely parametric and purely data-driven models. Formulating BRDF estimation as a particular type of semiparametric regression, both the preferred directions and the form of BRDF variation along them can be estimated from data. Our approach has significant theoretical, algorithmic and empirical benefits, lends insights into material behavior and enables novel applications. While it is well-known that fitting multi-lobe BRDFs may be ill-posed under certain conditions, prior to this work, precise results for the well-posedness of BRDF estimation had remained elusive. Since our BRDF representation is derived from physical intuition, but relies on data, we avoid pitfalls of both parametric (low generalizability) and non-parametric regression (low interpretability, curse of dimensionality). Finally, we discuss several applications such as single-image relighting, light source estimation and physically meaningful BRDF editing.

A Linear Variational System for Modeling From Curves

James Andrews, Pushkar P. Joshi, Nathan Carr CGF 2011

We present a linear system for modelling 3D surfaces from curves. Our system offers better performance, stability and precision in control than previous non-linear systems. By exploring the direct relationship between a standard higher-order Laplacian editing framework and Hermite spline curves, we introduce a new form of Cauchy constraint that makes our system easy to both implement and control. We introduce novel workflows that simplify the construction of 3D models from sketches. We show how to convert existing 3D meshes into our curve-based representation for subsequent editing and modelling, allowing our technique to be applied to a wide range of existing 3D content.

Data-Driven Elastic Models for Cloth: Modeling and Measurement

Huamin Wang, Ravi Ramamoorthi, James F. O'Brien SIGGRAPH 2011

Cloth often has complicated nonlinear, anisotropic elastic behavior due to its woven pattern and fiber properties. However, most current cloth simulation techniques simply use linear and isotropic elastic models with manually selected stiffness parameters. Such simple simulations do not allow differentiating the behavior of distinct cloth materials such as silk or denim, and they cannot model most materials with fidelity to their real-world counterparts. In this paper, we present a data-driven technique to more realistically animate cloth. We propose a piecewise linear elastic model that is a good approximation to nonlinear, anisotropic stretching and bending behaviors of various materials. We develop new measurement techniques for studying the elastic deformations for both stretching and bending in real cloth samples. Our setup is easy and inexpensive to construct, and the parameters of our model can be fit to observed data with a well-posed optimization procedure. We have measured a database of ten different cloth materials, each of which exhibits distinctive elastic behaviors. These measurements can be used in most cloth simulation systems to create natural and realistic clothing wrinkles and shapes, for a range of different materials.

Perceptually Based Tone Mapping for Low-Light Conditions

Adam Kirk, James F. O'Brien SIGGRAPH 2011

In this paper we present a perceptually based algorithm for modeling the color shift that occurs for human viewers in low-light scenes. Known as the Purkinje effect, this color shift occurs as the eye transitions from photopic, cone-mediated vision in well-lit scenes to scotopic, rod-mediated vision in dark scenes. At intermediate light levels vision is mesopic with both the rods and cones active. Although the rods have a spectral response distinct from the cones, they still share the same neural pathways. As light levels decrease and the rods become increasingly active they cause a perceived shift in color. We model this process so that we can compute perceived colors for mesopic and scotopic scenes from spectral image data. We also describe how the effect can be approximated from standard high dynamic range RGB images. Once we have determined rod and cone responses, we map them to RGB values that can be displayed on a standard monitor to elicit the intended color perception when viewed photopically. Our method focuses on computing the color shift associated with low-light conditions and leverages current HDR techniques to control the image's dynamic range. We include results generated from both spectral and RGB input images.

Illumination Decomposition for Material Recoloring with Consistent Interreflections

Robert Carroll, Ravi Ramamoorthi, Maneesh Agrawala SIGGRAPH 2011

Changing the color of an object is a basic image editing operation, but a high quality result must also preserve natural shading. A common approach is to first compute reflectance and illumination intrinsic images. Reflectances can then be edited independently, and recomposed with the illumination. However, manipulating only the reflectance color does not account for diffuse interreflections, and can result in inconsistent shading in the edited image. We propose an approach for further decomposing illumination into direct lighting, and indirect diffuse illumination from each material. This decomposition allows us to change indirect illumination from an individual material independently, so it matches the modified reflectance color. To address the underconstrained problem of decomposing illumination into multiple components, we take advantage of its smooth nature, as well as user-provided constraints. We demonstrate our approach on a number of examples, where we consistently edit material colors and the associated interreflections.

Interactive Furniture Layout Using Interior Design Guidelines

Paul Merrell, Eric Schkufza, Zeyang Li, Maneesh Agrawala, Vladlen Koltun SIGGRAPH 2011

We present an interactive furniture layout system that assists users by suggesting furniture arrangements that are based on interior design guidelines. Our system incorporates the layout guidelines as terms in a density function and generates layout suggestions by rapidly sampling the density function using a hardware-accelerated Monte Carlo sampler. Our results demonstrate that the suggestion generation functionality measurably increases the quality of furniture arrangements produced by participants with no prior training in interior design.
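For readers unfamiliar with density-function sampling, the toy Metropolis sampler below illustrates the general idea of drawing layout suggestions in proportion to a density. The two-parameter "layout" and the guideline density are made up for illustration; the paper's hardware-accelerated sampler and its actual guideline terms are not reproduced.

import numpy as np

def metropolis_sample(density, x0, n_steps=5000, step=0.25, rng=None):
    # Toy Metropolis sampler: draw layout vectors x proportional to density(x).
    # `density` stands in for a weighted sum of design-guideline terms.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    p = density(x)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=step, size=x.shape)   # random perturbation
        p_new = density(x_new)
        if rng.random() < min(1.0, p_new / max(p, 1e-12)):  # accept / reject
            x, p = x_new, p_new
        samples.append(x.copy())
    return np.array(samples)

# Hypothetical 2D "layout": one sofa position, preferred to sit near (1, 2).
guideline_density = lambda x: np.exp(-np.sum((x - np.array([1.0, 2.0])) ** 2))
layouts = metropolis_sample(guideline_density, x0=[0.0, 0.0])
print(layouts[-5:])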

Sparse Reconstruction of Visual Appearance for Computer Graphics and Vision

Ravi Ramamoorthi Wavelets and Sparsity 2011

A broad range of problems in computer graphics rendering, appearance acquisition for graphics and vision, and imaging, involve sampling, reconstruction, and integration of high-dimensional (4D-8D) signals. For example, precomputation based real-time rendering of glossy materials and intricate lighting effects like caustics, can involve (pre)-computing the response of the scene to different light and viewing directions, which is often a 6D dataset. Similarly, image-based appearance acquisition of facial details, car paint, or glazed wood, requires us to take images from different light and view directions. Even offline rendering of visual effects like motion blur from a fast-moving car, or depth of field, involves high-dimensional sampling across time and lens aperture. The same problems are also common in computational imaging applications such as light field cameras. In the past few years, computer graphics and computer vision researchers have made significant progress in subsequent analysis and compact factored or multiresolution representations for some of these problems. However, the initial full dataset must almost always still be acquired or computed by brute force. This is often prohibitively expensive, taking hours to days of computation and acquisition time, as well as being a challenge for memory usage and storage. For example, on the order of 10,000 megapixel images are needed for a 1 degree sampling of lights and views for high-frequency materials. We argue that dramatically sparser sampling and reconstruction of these signals is possible, before the full dataset is acquired or simulated. Our key idea is to exploit the structure of the data that often lies in lower-frequency, sparse, or low-dimensional spaces. Our framework will apply to a diverse set of problems such as sparse reconstruction of light transport matrices for relighting, sheared sampling and denoising for offline shadow rendering, time-coherent compressive sampling for appearance acquisition, and new approaches to computational photography and imaging.

Optimizing Environment Maps for Material Depiction

Adrien Bousseau, Emmanuelle Chapoulie, Ravi Ramamoorthi, Maneesh Agrawala EGSR 2011

We present an automated system for optimizing and synthesizing environment maps that enhance the appearance of materials in a scene. We first identify a set of lighting design principles for material depiction. Each principle specifies the distinctive visual features of a material and describes how environment maps can emphasize those features. We express these principles as linear or quadratic image quality metrics, and present a general optimization framework to solve for the environment map that maximizes these metrics. We accelerate metric evaluation using an approach dual to precomputed radiance transfer (PRT). In contrast to standard PRT that integrates light transport over the lighting domain to generate an image, we pre-integrate light transport over the image domain to optimize for lighting. Finally we present two techniques for transforming existing photographic environment maps to better emphasize materials. We demonstrate the effectiveness of our approach by generating environment maps that enhance the depiction of a variety of materials including glass, metal, plastic, marble and velvet.

A Theory of Differential Photometric Stereo for Unknown BRDFs

Manmohan Chandraker, Jiamin Bai, Ravi Ramamoorthi CVPR 2011

We present a comprehensive theory of photometric surface reconstruction from image derivatives. For unknown isotropic BRDFs, we show that two measurements of spatial and temporal image derivatives, under unknown light sources on a circle, suffice to determine the surface. This result is the culmination of a series of fundamental observations. First, we discover a photometric invariant that relates image derivatives to the surface geometry, regardless of the form of isotropic BRDF. Next, we show that just two pairs of differential images from unknown light directions suffice to recover surface information from the photometric invariant. This is shown to be equivalent to determining isocontours of constant magnitude of the surface gradient, as well as isocontours of constant depth. Further, we prove that specification of the surface normal at a single point completely determines the surface depth from these isocontours. In addition, we propose practical algorithms that require additional initial or boundary information, but recover depth from lower order derivatives. Our theoretical results are illustrated with several examples on synthetic and real data.

On the Duality of Forward and Inverse Light Transport

Manmohan Chandraker, Jiamin Bai, Tian-Tsong Ng, Ravi Ramamoorthi PAMI 2011

Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well-known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion -- analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering -- that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation to display images free of global illumination artifacts in real-world environments.

From the Rendering Equation to Stratified Light Transport Inversion

Tian-Tsong Ng, Ramampreet Singh Pahwa, Jiamin Bai, Kar-Han Tan, Ravi Ramamoorthi IJCV 2011

Recent advances in fast light transport acquisition have motivated new applications for forward and inverse light transport. While forward light transport enables image relighting, inverse light transport provides new possibilities for analyzing and cancelling interreflections, to enable applications like projector radiometric compensation and light bounce separation. With known scene geometry and diffuse reflectance, inverse light transport can be easily derived in closed form. However, with unknown scene geometry and reflectance properties, we must acquire and invert the scene's light transport matrix to undo the effects of global illumination. For many photometric setups such as that of a projector-camera system, the light transport matrix often has a size of 10^5 × 10^5 or larger. Direct matrix inversion is accurate but impractical computationally at these resolutions. In this work, we explore a theoretical analysis of inverse light transport, relating it to its forward counterpart, expressed in the form of the rendering equation. It is well known that forward light transport has a Neumann series that corresponds to adding bounces of light. In this paper, we show the existence of a similar inverse series, that zeroes out the corresponding physical bounces of light. We refer to this series solution as stratified light transport inversion, since truncating to a certain number of terms corresponds to cancelling the corresponding interreflection bounces. The framework of stratified inversion is general and may provide insight for other problems in light transport and beyond, that involve large-size matrix inversion. It is also efficient, requiring only sparse matrix-matrix multiplications. Our practical application is to radiometric compensation, where we seek to project patterns onto real-world surfaces, undoing the effects of global illumination. We use stratified light transport inversion to efficiently invert the acquired light transport matrix for a static scene, after which inter-reflection cancellation is a simple matrix-vector multiplication to compensate the input image for projection.
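The series idea can be illustrated with generic linear algebra: a truncated Neumann expansion applies an approximate inverse using only matrix-vector products. The sketch below is only this textbook construction on a small random test matrix, not the stratified formulation derived from the rendering equation.

import numpy as np

def neumann_apply_inverse(T, b, n_terms=10):
    # Approximate x = T^{-1} b with a truncated Neumann series: writing
    # T = I - K, we have T^{-1} = sum_k K^k whenever the spectral radius
    # of K is below one.  Each extra term costs one matrix-vector product,
    # loosely mirroring how each extra term of the stratified series above
    # cancels one more interreflection bounce.
    K = np.eye(T.shape[0]) - T
    x = b.copy()
    term = b.copy()
    for _ in range(n_terms):
        term = K @ term
        x = x + term
    return x

rng = np.random.default_rng(0)
T = np.eye(50) + 0.01 * rng.random((50, 50))   # well-conditioned test matrix
b = rng.random(50)
x = neumann_apply_inverse(T, b, n_terms=20)
print(np.linalg.norm(T @ x - b))               # residual should be small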

Interactive Extraction and Re-Design of Sweep Geometries

James Andrews, Pushkar P. Joshi, Carlo H. Séquin CGI 2011

We introduce two interactive extraction modules that can fit the parameters of generalized sweeps to large, unstructured meshes for immediate, high-level, detail-preserving modification. These modules represent two extremes in a spectrum of parameterized shapes: rotational sweeps defined by a few global parameters, and progressive sweeps forming generalized cylinders with many slowly varying local parameters. Both modules are initialized and controlled by the user drawing a few strokes onto the displayed original model. We demonstrate the system on various shapes, ranging from clean, mechanical geometries to organic forms with intricate surface details.

Bringing Clothing into Desired Configurations with Limited Perception

Marco Cusumano-Towner, Arjun Singh, Stephen Miller, James F. O'Brien, Pieter Abbeel ICRA 2011

We consider the problem of autonomously bringing an article of clothing into a desired configuration using a general-purpose two-armed robot. We propose a hidden Markov model (HMM) for estimating the identity of the article and tracking the article's configuration throughout a specific sequence of manipulations and observations. At the end of this sequence, the article's configuration is known, though not necessarily desired. The estimated identity and configuration of the article are then used to plan a second sequence of manipulations that brings the article into the desired configuration. We propose a relaxation of a strain-limiting finite element model for cloth simulation that can be solved via convex optimization; this serves as the basis of the transition and observation models of the HMM. The observation model uses simple perceptual cues consisting of the height of the article when held by a single gripper and the silhouette of the article when held by two grippers. The model accurately estimates the identity and configuration of clothing articles, enabling our procedure to autonomously bring a variety of articles into desired configurations that are useful for other tasks, such as folding.

Modeling and Perception of Deformable One-Dimensional Objects

Shervin Javdani, Sameep Tandon, Jie Tang, James F. O'Brien, Pieter Abbeel ICRA 2011

Recent advances in the modeling of deformable one-dimensional objects (DOOs) such as surgical suture, rope, and hair show significant promise for improving the simulation, perception, and manipulation of such objects. An important application of these tasks lies in the area of medical robotics, where robotic surgical assistants have the potential to greatly reduce surgeon fatigue and human error by improving the accuracy, speed, and robustness of surgical tasks such as suturing. However, different types of DOOs exhibit a variety of bending and twisting behaviors that are highly dependent on material properties. This paper proposes an approach for fitting simulation models of DOOs to observed data. Our approach learns an energy function such that observed DOO configurations lie in local energy minima. Our experiments on a variety of DOOs show that models fitted to different types of DOOs using our approach enable accurate prediction of future configurations. Additionally, we explore the application of our learned model to the perception of DOOs.

Frequency Analysis and Sheared Filtering for Shadow Light Fields of Complex Occluders

Kevin Egan, Florian Hecht, Frédo Durand, Ravi Ramamoorthi TOG 2011

Monte Carlo ray tracing of soft shadows produced by area lighting and intricate geometries, such as the shadows through plant leaves or arrays of blockers, is a critical challenge. The final image often has relatively smooth shadow patterns, since it integrates over the light source. However, Monte Carlo rendering exhibits considerable noise even at high sample counts because of the large variance of the integrand due to the intricate shadow function. This article develops an efficient diffuse soft shadow technique for mid to far occluders that relies on a new 4D cache and sheared reconstruction filter. For this, we first derive a frequency analysis of shadows for planar area lights and complex occluders. Our analysis subsumes convolution soft shadows for parallel planes as a special case. It allows us to derive 4D sheared filters that enable lower sampling rates for soft shadows. While previous sheared-reconstruction techniques were able primarily to index samples according to screen position, we need to perform reconstruction at surface receiver points that integrate over vastly different shapes in the reconstruction domain. This is why we develop a new light-field-like 4D data structure to store shadowing values and depth information. Any ray tracing system that shoots shadow rays can easily incorporate our method to greatly reduce sampling rates for diffuse soft shadows.

Design Principles for Visual Communication

Maneesh Agrawala, Wilmot Li, Floraine Berthouzoz CACM

Design principles connect the visual design of a visualization with the viewer's perception and cognition of the underlying information the visualization is meant to convey. Identifying and formulating good design principles often requires analyzing the best hand-designed visualizations, examining prior research on the perception and cognition of visualizations, and, when necessary, conducting user studies into how visual techniques affect perception and cognition. Given a set of design rules and quantitative evaluation criteria, we can use procedural techniques and/or energy optimization to build automated visualization-design systems.

Real-Time Rough Refraction

Charles De Rousiers, Adrien Bousseau, Kartic Subr, Nicolas Holzschuch, Ravi Ramamoorthi I3D 2011

We present an algorithm to render objects of transparent materials with rough surfaces in real-time, under distant illumination. Rough surfaces cause wide scattering as light enters and exits objects, which significantly complicates the rendering of such materials. We present two contributions to approximate the successive scattering events at interfaces, due to rough refraction: first, an approximation of the Bidirectional Transmittance Distribution Function (BTDF), using spherical Gaussians, suitable for real-time estimation of environment lighting using pre-convolution; second, a combination of cone tracing and macro-geometry filtering to efficiently integrate the scattered rays at the exiting interface of the object. We demonstrate the quality of our approximation by comparison against stochastic raytracing.
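A single spherical Gaussian lobe, the building block used in the BTDF approximation above, has the standard form G(v) = a · exp(λ(v·μ − 1)). The snippet below evaluates one such lobe with made-up parameters rather than fitted ones.

import numpy as np

def spherical_gaussian(v, mu, amplitude, sharpness):
    # Evaluate one spherical Gaussian lobe G(v) = a * exp(lambda * (v.mu - 1)),
    # the standard lobe shape used with pre-convolved environment lighting.
    # Parameters here are illustrative, not fitted as in the paper.
    v = v / np.linalg.norm(v)
    mu = mu / np.linalg.norm(mu)
    return amplitude * np.exp(sharpness * (np.dot(v, mu) - 1.0))

print(spherical_gaussian(np.array([0.1, 0.0, 1.0]),   # query direction
                         np.array([0.0, 0.0, 1.0]),   # lobe axis
                         amplitude=1.0, sharpness=30.0))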

Eden: A Professional Multitouch Tool for Constructing Virtual Organic Environments

Kenrick Kin, Tom Miller, Björn Bollensdorff, Tony DeRose, Björn Hartmann, Maneesh Agrawala CHI 2011

Set construction is the process of selecting and positioning virtual geometric objects to create a virtual environment used in a computer-animated film. Set construction artists often have a clear mental image of the set composition, but find it tedious to build their intended sets with current mouse and keyboard interfaces. We investigate whether multitouch input can ease the process of set construction. Working with a professional set construction artist at Pixar Animation Studios, we designed and developed Eden, a fully functional multitouch set construction application. In this paper, we describe our design process and how we balanced the advantages and disadvantages of multitouch input to develop usable gestures for set construction. Based on our design process and the user experiences of two set construction artists, we present a general set of lessons we learned regarding the design of a multitouch interface.

FingerGlass: Efficient Multiscale Interaction on Multitouch Screens

Dominik Käser, Maneesh Agrawala, Mark Pauly

Many tasks in graphical user interfaces require users to interact with elements at various levels of precision. We present FingerGlass, a bimanual technique designed to improve the precision of graphical tasks on multitouch screens. It enables users to quickly navigate to different locations and across multiple scales of a scene using a single hand. The other hand can simultaneously interact with objects in the scene. Unlike traditional pan-zoom interfaces, FingerGlass retains contextual information during the interaction. We evaluated our technique in the context of precise object selection and translation and found that FingerGlass significantly outperforms three state-of-the-art baseline techniques in both objective and subjective measurements: users acquired and translated targets more than 50% faster than with the second-best technique in our experiment.

CommentSpace: Structured Support for Collaborative Visual Analysis

Wesley Willett, Jeffrey Heer, Joseph Hellerstein, Maneesh Agrawala

Collaborative visual analysis tools can enhance sensemaking by facilitating social interpretation and parallelization of effort. These systems enable distributed exploration and evidence gathering, allowing many users to pool their effort as they discuss and analyze the data. We explore how adding lightweight tag and link structure to comments can aid this analysis process. We present CommentSpace, a collaborative system in which analysts comment on visualizations and websites and then use tags and links to organize findings and identify others' contributions. In a series of studies comparing CommentSpace to a system without support for tags and links, we find that a small, fixed vocabulary of tags (question, hypothesis, to-do) and links (evidence-for, evidence-against) helps analysts more reliably locate evidence and establish common ground. We also demonstrate that tags and links can help teams complete evidence gathering and synthesis tasks and that organizing comments using tags and links improves analytic results. Finally, we find that managing and incentivizing participation is important for analysts to progress from exploratory analysis to the organization and synthesis tasks where tags and links are most useful.

Computer generation of ribbed sculptures

James Hamlin, Carlo H. Séquin JMA 2010

Charles Perry's monumental sculpture Solstice is analysed and its generative geometrical logic based on a twisted toroidal sweep is captured in a computer programme with interactively adjustable control parameters. This programme is then used to generate other models of ribbed sculptures based on one or more interlinked torus knots. From this family of sculptures related to Perry's Solstice we derive a broader paradigm for the generation of "ribbed" sculptures. It is based on one or two simple, mathematically defined "guide rails", which are then populated with a dense set of thinner "ribs" to create lightweight, transparent surfaces. With this broadened concept and a few suitably modified and parameterized programmes we can emulate many other ribbed sculptures by Charles Perry and also create new sculpture designs and mathematical visualization models that profit from the semi-transparent look of these structures.
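The "guide rails" here are torus knots, which have a standard parametrization. The sketch below samples a (p, q) torus knot and joins two rails with straight "ribs"; the knot parameters and rib spacing are arbitrary stand-ins for the sculpting parameters described in the paper.

import numpy as np

def torus_knot(p, q, R=2.0, r=0.6, n=400):
    # Sample points along a (p, q) torus knot, a typical "guide rail".
    # R and r are the major and minor radii of the supporting torus.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = (R + r * np.cos(q * t)) * np.cos(p * t)
    y = (R + r * np.cos(q * t)) * np.sin(p * t)
    z = r * np.sin(q * t)
    return np.stack([x, y, z], axis=1)

rail_a = torus_knot(2, 3)           # a trefoil-like rail
rail_b = torus_knot(2, 3, r=0.3)    # a second, thinner rail
# A crude rib set: straight segments joining corresponding rail points.
ribs = list(zip(rail_a[::10], rail_b[::10]))
print(len(ribs), "ribs")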

Multi-Resolution Isotropic Strain Limiting

Huamin Wang, James F. O'Brien, Ravi Ramamoorthi SIGGRAPH Asia 2010

In this paper we describe a fast strain-limiting method that allows stiff, incompliant materials to be simulated efficiently. Unlike prior approaches, which act on springs or individual strain components, this method acts on the strain tensors in a coordinate-invariant fashion allowing isotropic behavior. Our method applies to both two- and three-dimensional strains, and only requires computing the singular value decomposition of the deformation gradient, either a small 2x2 or 3x3 matrix, for each element. We demonstrate its use with triangular and tetrahedral linear-basis elements. For triangulated surfaces in three-dimensional space, we also describe a complementary edge-angle-limiting method to limit out-of-plane bending. All of the limits are enforced through an iterative, non-linear, Gauss-Seidel-like constraint procedure. To accelerate convergence, we propose a novel multi-resolution algorithm that enforces fitted limits at each level of a non-conforming hierarchy. Compared with other constraint-based techniques, our isotropic multi-resolution strain-limiting method is straightforward to implement, efficient to use, and applicable to a wide range of shell and solid materials.
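The core per-element operation described above, clamping principal stretches via an SVD of the deformation gradient, can be sketched in a few lines; the limits chosen below are arbitrary, and the iterative multi-resolution Gauss-Seidel enforcement loop is omitted.

import numpy as np

def limit_strain(F, low=0.95, high=1.05):
    # F is a per-element 2x2 or 3x3 deformation gradient.  Clamping its
    # singular values (the principal stretches) keeps the operation
    # coordinate-invariant, i.e. isotropic.  This handles one element once;
    # the full method repeats such limits in a Gauss-Seidel-like pass over
    # a multi-resolution hierarchy.
    U, s, Vt = np.linalg.svd(F)
    s_clamped = np.clip(s, low, high)
    return U @ np.diag(s_clamped) @ Vt

F = np.array([[1.4, 0.1],
              [0.0, 0.8]])
print(limit_strain(F))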

Automatic Generation of Destination Maps

Johannes Kopf, Maneesh Agrawala, David Bargeron, David Salesin, Michael F. Cohen SIGGRAPH Asia 2010

Destination maps are navigational aids designed to show anyone within a region how to reach a location (the destination). Hand-designed destination maps include only the most important roads in the region and are non-uniformly scaled to ensure that all of the important roads from the highways to the residential streets are visible. We present the first automated system for creating such destination maps based on the design principles used by mapmakers. Our system includes novel algorithms for selecting the important roads based on mental representations of road networks, and for laying out the roads based on a non-linear optimization procedure. The final layouts are labeled and rendered in a variety of styles ranging from informal to more formal map styles. The system has been used to generate over 57,000 destination maps by thousands of users. We report feedback from both a formal and informal user study, as well as provide quantitative measures of success.

Symmetrical Embeddings of Regular Maps R5.13 and R5.6

Carlo H. Séquin

This report is a documentation of my trial-and-error design process to find a symmetrical embedding of the regular map R5.13 on a genus-5 2-manifold. It documents the non-linear way in which my mind homed in on a valid solution and then refined that solution to obtain a satisfactory geometrical model. This design-thinking log may serve as a case study for a design approach that switches back and forth between doodling with physical materials, computer-aided template and model construction, and verification of the results on tangible visualization models. Lessons learned on R5.13 were subsequently applied to solve the embedding of the regular map R5.6.

Personalized Photograph Ranking and Selection System

Che-Hua Yeh, Yuan-Chen Ho, Brian A. Barsky, Ming Ouhyoung ACM MM 2010

In this paper, we propose a novel personalized ranking system for amateur photographs. Although some of the features used in our system are similar to previous work, new features, such as texture, RGB color, portrait (through face detection), and black-and-white, are included for individual preferences. Our goal of automatically ranking photographs is not intended for award-winning professional photographs but for photographs taken by amateurs, especially when individual preference is taken into account. The performance of our system in terms of precision-recall diagram and binary classification accuracy (93%) is close to the best results to date for both overall system and individual features. Two personalized ranking user interfaces are provided: one is feature-based and the other is example-based. Although both interfaces are effective in providing personalized preferences, our user study showed that example-based was preferred by twice as many people as feature-based.

A Dual Theory of Inverse and Forward Light Transport

Jiamin Bai, Manmohan Chandraker, Tian-Tsong Ng, Ravi Ramamoorthi ECCV 2010

We present the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, we show that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well-known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large transport matrix, which is impractical for realistic resolutions. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion - analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering - that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation to display images free of global illumination artifacts in real-world environments.

Error-tolerant Image Compositing

Michael W. Tao, Micah K. Johnson, Sylvain Paris ECCV 2010

Gradient-domain compositing is an essential tool in computer vision and its applications, e.g., seamless cloning, panorama stitching, shadow removal, scene completion and reshuffling. While easy to implement, these gradient-domain techniques often generate bleeding artifacts where the composited image regions do not match. One option is to modify the region boundary to minimize such mismatches. However, this option may not always be sufficient or applicable, e.g., the user or algorithm may not allow the selection to be altered. We propose a new approach to gradient-domain compositing that is robust to inaccuracies and prevents color bleeding without changing the boundary location. Our approach improves standard gradient-domain compositing in two ways. First, we define the boundary gradients such that the produced gradient field is nearly integrable. Second, we control the integration process to concentrate residuals where they are less conspicuous. We show that our approach can be formulated as a standard least-squares problem that can be solved with a sparse linear system akin to the classical Poisson equation. We demonstrate results on a variety of scenes. The visual quality and run-time complexity compares favorably to other approaches.
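As background, the baseline least-squares formulation that this paper improves on can be written down directly. The 1D sketch below integrates a gradient field with deliberately mismatched endpoint constraints, showing where residuals appear; the paper's boundary-gradient and residual-weighting refinements are not included, and all names are illustrative.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def integrate_gradients_1d(g, f_left, f_right, boundary_weight=1e3):
    # Solve min_f sum_i (f[i+1] - f[i] - g[i])^2 with soft constraints on
    # both endpoints: the 1D analogue of classical Poisson / gradient-domain
    # compositing, built as a sparse least-squares system.
    m = len(g)                 # number of gradient samples
    n = m + 1                  # number of samples of f
    rows, cols, vals, b = [], [], [], []
    for i in range(m):         # one equation per gradient sample
        rows += [i, i]
        cols += [i, i + 1]
        vals += [-1.0, 1.0]
        b.append(g[i])
    rows += [m, m + 1]         # soft boundary constraints
    cols += [0, n - 1]
    vals += [boundary_weight, boundary_weight]
    b += [boundary_weight * f_left, boundary_weight * f_right]
    A = sp.csr_matrix((vals, (rows, cols)), shape=(m + 2, n))
    return spla.lsqr(A, np.asarray(b))[0]

# Deliberately inconsistent input: the gradients say "rise by 4.5" while the
# boundary values say "rise by 10"; least squares spreads the error evenly.
print(integrate_gradients_1d(np.full(9, 0.5), f_left=0.0, f_right=10.0))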

Example-Based Wrinkle Synthesis for Clothing Animation

Huamin Wang, Florian Hecht, Ravi Ramamoorthi, James F. O'Brien SIGGRAPH 2010

This paper describes a method for animating the appearance of clothing, such as pants or a shirt, that fits closely to a figure's body. Compared to flowing cloth, such as loose dresses or capes, these types of garments involve nearly continuous collision contact and small wrinkles that can be troublesome for traditional cloth simulation methods. Based on the observation that the wrinkles in close-fitting clothing behave in a predominantly kinematic fashion, we have developed an example-based wrinkle synthesis technique. Our method drives wrinkle generation from the pose of the figure's kinematic skeleton. This approach allows high quality clothing wrinkles to be combined with a coarse cloth simulation that computes the global and dynamic aspects of the clothing motion. While the combined results do not exactly match a high-resolution reference simulation, they do capture many of the characteristic fine-scale features and wrinkles. Further, the combined system runs at interactive rates, making it suitable for applications where high-resolution offline simulations would not be a viable option. The wrinkle synthesis method uses a precomputed database built by simulating the high-resolution clothing as the articulated figure is moved over a range of poses. In principle, the space of poses is exponential in the total number of degrees of freedom; however, clothing wrinkles are primarily affected by the nearest joints, allowing each joint to be processed independently. During synthesis, mesh interpolation is used to consider the influence of multiple joints, and combined with a coarse simulation to produce the final results at interactive rates.

Dynamic Local Remeshing for Elastoplastic Simulation

Martin Wicke, Daniel Ritchie, Bryan Klingner, Sebastian Burke, Jonathan Shewchuk, James F. O'Brien SIGGRAPH 2010

We propose a finite element simulation method that addresses the full range of material behavior, from purely elastic to highly plastic, for physical domains that are substantially reshaped by plastic flow, fracture, or large elastic deformations. To mitigate artificial plasticity, we maintain a simulation mesh in both the current state and the rest shape, and store plastic offsets only to represent the non-embeddable portion of the plastic deformation. To maintain high element quality in a tetrahedral mesh undergoing gross changes, we use a dynamic meshing algorithm that attempts to replace as few tetrahedra as possible, and thereby limits the visual artifacts and artificial diffusion that would otherwise be introduced by repeatedly remeshing the domain from scratch. Our dynamic mesher also locally refines and coarsens a mesh, and even creates anisotropic tetrahedra, wherever a simulation requests it. We illustrate these features with animations of elastic and plastic behavior, extreme deformations, and fracture.

My Search for Symmetrical Embeddings of Regular Maps

Carlo H. Séquin Bridges 2010

Various approaches are discussed for obtaining highly symmetrical and aesthetically pleasing space models of regular maps embedded in surfaces of genus 2 to 5. For many cases, geometrical intuition and preliminary visualization models made from paper strips or plastic pipes are quite competitive with exhaustive computer searches. A couple of particularly challenging problems are presented as detailed case studies. The symmetrical patterns discovered could be further modified to create Escher-like tilings on low-genus handle bodies.

Image Warps for Artistic Perspective Manipulation

Robert Carroll, Aseem Agarwala, Maneesh Agrawala

Painters and illustrators commonly sketch vanishing points and lines to guide the construction of perspective images. We present a tool that gives users the ability to manipulate perspective in photographs using image space controls similar to those used by artists. Our approach computes a 2D warp guided by constraints based on projective geometry. A user annotates an image by marking a number of image space constraints including planar regions of the scene, straight lines, and associated vanishing points. The user can then use the lines, vanishing points, and other point constraints as handles to control the warp. Our system optimizes the warp such that straight lines remain straight, planar regions transform according to a homography, and the entire mapping is as shape-preserving as possible. While the result of this warp is not necessarily an accurate perspective projection of the scene, it is often visually plausible. We demonstrate how this approach can be used to produce a variety of effects, such as changing the perspective composition of a scene, exploring artistic perspectives not realizable with a camera, and matching perspectives of objects from different images so that they appear consistent for compositing.

Illustrating How Mechanical Assemblies Work

Niloy J. Mitra, Yong-Liang Yang, Dong-Ming Yan, Wilmot Li, Maneesh Agrawala

How things work visualizations use a variety of visual techniques to depict the operation of complex mechanical assemblies. We present an automated approach for generating such visualizations. Starting with a 3D CAD model of an assembly, we first infer the motions of individual parts and the interactions between parts based on their geometry and a few user specified constraints. We then use this information to generate visualizations that incorporate motion arrows, frame sequences and animation to convey the causal chain of motions and mechanical interactions between parts. We present results for a wide variety of assemblies.

Sparsely Precomputing The Light Transport Matrix for Real-Time Rendering

Fu-Chung Huang, Ravi Ramamoorthi EGSR 2010

Precomputation-based methods have enabled real-time rendering with natural illumination, all-frequency shadows, and global illumination. However, a major bottleneck is the precomputation time, that can take hours to days. While the final real-time data structures are typically heavily compressed with clustered principal component analysis and/or wavelets, a full light transport matrix still needs to be precomputed for a synthetic scene, often by exhaustive sampling and raytracing. This is expensive and makes rapid prototyping of new scenes prohibitive. In this paper, we show that the precomputation can be made much more efficient by adaptive and sparse sampling of light transport. We first select a small subset of “dense vertices”, where we sample the angular dimensions more completely (but still adaptively). The remaining “sparse vertices” require only a few angular samples, isolating features of the light transport. They can then be interpolated from nearby dense vertices using locally low rank approximations. We demonstrate sparse sampling and precomputation 5× faster than previous methods.

Common Sense Community: Scaffolding Mobile Sensing and Analysis for Novice Users

Wesley Willett, Paul Aoki, Neil Kumar, Sushmita Subramanian, Allison Woodruff

As sensing technologies become increasingly distributed and democratized, citizens and novice users are becoming responsible for the kinds of data collection and analysis that have traditionally been the purview of professional scientists and analysts. Leveraging this citizen engagement effectively, however, requires not only tools for sensing and data collection but also mechanisms for understanding and utilizing input from both novice and expert stakeholders. When successful, this process can result in actionable findings that leverage and engage community members and build on their experiences and observations. We explored this process of knowledge production through several dozen interviews with novice community members, scientists, and regulators as part of the design of a mobile air quality monitoring system. From these interviews, we derived design principles and a framework for describing data collection and knowledge generation in citizen science settings, culminating in the user-centered design of a system for community analysis of air quality data. Unlike prior systems, ours breaks analysis tasks into discrete mini-applications designed to facilitate and scaffold novice contributions. An evaluation we conducted with community members in an area with air quality concerns indicates that these mini-applications help participants identify relevant phenomena and generate local knowledge contributions.

Two New Approaches to Depth of Field Post-Processing: Pyramid Spreading and Tensor Filtering

Todd J. Kosloff, Brian A. Barsky VISIGRAPP 2010

Depth of field refers to the swath that is imaged in sharp focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool, which can be used, for example, to emphasize the subject of a photograph. The most efficient algorithms for simulating depth of field are post-processing methods. Post-processing can be made more efficient by making various approximations. We start with the assumption that the point spread function (PSF) is Gaussian. This assumption introduces structure into the problem which we exploit to achieve speed. Two methods will be presented. In our first approach, which we call pyramid spreading, PSFs are spread into a pyramid. By writing larger PSFs to coarser levels of the pyramid, the performance remains constant, independent of the size of the PSFs. After spreading all the PSFs, the pyramid is then collapsed to yield the final blurred image. Our second approach, called the tensor method, exploits the fact that blurring is a linear operator. The operator is treated as a large tensor which is compressed by finding structure in it. The compressed representation is then used to directly blur the image. Both methods present new perspectives on the problem of efficiently blurring an image.
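A crude sketch of the pyramid-spreading idea is given below to show why the cost per point becomes independent of PSF size; it uses single-pixel splats and a box collapse filter, both oversimplifications relative to the Gaussian footprints and collapse described in the paper, and every name and constant in it is illustrative.

import numpy as np

def pyramid_spread(points, h, w, levels=5):
    # Each point carries (x, y, intensity, blur_radius) and is splatted into
    # the pyramid level whose pixel footprint roughly matches its radius, so
    # the work per point does not grow with PSF size.
    pyramid = [np.zeros((h >> l, w >> l)) for l in range(levels)]
    for x, y, intensity, radius in points:
        level = int(np.clip(np.round(np.log2(max(radius, 1.0))), 0, levels - 1))
        pyramid[level][y >> level, x >> level] += intensity
    out = np.zeros((h, w))
    for level, img in enumerate(pyramid):
        s = 1 << level
        out += np.kron(img, np.ones((s, s))) / (s * s)  # spread over footprint
    return out

points = [(10, 12, 1.0, 1.0), (40, 40, 1.0, 8.0)]  # (x, y, intensity, radius)
print(pyramid_spread(points, h=64, w=64).sum())     # total energy preserved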

Simulation of Needle Insertion and Tissue Deformation for Modeling Prostate Brachytherapy

Nuttapong Chentanez, Ron Alterovitz, Daniel Ritchie, Lita Cho, Kris Hauser, Ken Goldberg, Jonathan Shewchuk, James F. O'Brien ABS 2010

Realistic modeling of needle insertion during brachytherapy can be used for training and in automated planning to reduce errors between intended and actual placement of the needle tip. We have developed a three-dimensional tetrahedral finite element simulation that models tissue deformation, needle flexation, and their coupled interaction.

Using Blur to Affect Perceived Distance and Size

Robert (Robin) Held, Emily Cooper, James F. O'Brien, Marty Banks TOG 2010

We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image's contents. From the model, we develop a semi-automated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene's contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model's predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently.
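The geometric-optics relation underlying defocus cues is the thin-lens circle-of-confusion formula; the snippet below computes it for a hypothetical camera and is not the paper's probabilistic estimator.

def blur_circle_diameter(obj_dist, focus_dist, focal_len, aperture_diam):
    # Thin-lens circle-of-confusion diameter on the sensor (same units as
    # the inputs): c = A * |S2 - S1| / S2 * f / (S1 - f), with S1 the
    # focused distance, S2 the object distance, f the focal length and A
    # the aperture diameter.
    return (aperture_diam * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

# Example: 50 mm lens at f/2 (25 mm aperture), focused at 2 m, object at 4 m.
print(blur_circle_diameter(obj_dist=4000.0, focus_dist=2000.0,
                           focal_len=50.0, aperture_diam=25.0), "mm")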

An intuitive explanation of third-order surface behavior

Pushkar P. Joshi, Carlo H. Séquin CAGD

We present a novel parameterization-independent exposition of the third-order geometric behavior of a surface point. Unlike existing algebraic expositions, our work produces an intuitive explanation of third-order shape, analogous to the principal curvatures and directions that describe second-order shape. We extract four parameters that provide a quick and concise understanding of the third-order surface behavior at any given point. Our shape parameters are useful for easily characterizing different third-order surface shapes without having to use tensor algebra. Our approach generalizes to higher orders, allowing us to extract similarly intuitive parameters that fully describe fourth- and higher-order surface behavior.

Perceptual Guidelines for Creating Rectangular Treemaps

Nicholas Kong, Jeffrey Heer, Maneesh Agrawala

Treemaps are space-filling visualizations that make efficient use of limited display space to depict large amounts of hierarchical data. Creating perceptually effective treemaps requires carefully managing a number of design parameters including the aspect ratio and luminance of rectangles. Moreover, treemaps encode values using area, which has been found to be less accurate than judgments of other visual encodings, such as length. We conduct a series of controlled experiments aimed at producing a set of design guidelines for creating effective rectangular treemaps. We find no evidence that luminance affects area judgments, but observe that aspect ratio does have an effect. Specifically, we find that the accuracy of area comparisons suffers when the compared rectangles have extreme aspect ratios or when both are squares. Contrary to common assumptions, the optimal distribution of rectangle aspect ratios within a treemap should include non-squares, but should avoid extreme aspect ratios. We then compare treemaps with hierarchical bar chart displays to identify the data densities at which length-encoded bar charts become less effective than area-encoded treemaps. We report the transition points at which treemaps exhibit judgment accuracy on par with bar charts for both leaf and non-leaf tree nodes. We also find that even at relatively low data densities treemaps result in faster comparisons than bar charts. Based on these results, we present a set of guidelines for the effective use of treemaps.

Exploded View Diagrams of Mathematical Surfaces

Olga Karpenko, Wilmot Li, Niloy J. Mitra, Maneesh Agrawala

We present a technique for visualizing complicated mathematical surfaces that is inspired by hand-designed topological illustrations. Our approach generates exploded views that expose the internal structure of such a surface by partitioning it into parallel slices, which are separated from each other along a single linear explosion axis. Our contributions include a set of simple, prescriptive design rules for choosing an explosion axis and placing cutting planes, as well as automatic algorithms for applying these rules. First we analyze the input shape to select the explosion axis based on the detected rotational and reflective symmetries of the input model. We then partition the shape into slices that are designed to help viewers better understand how the shape of the surface and its cross-sections vary along the explosion axis. Our algorithms work directly on triangle meshes, and do not depend on any specific parameterization of the surface. We generate exploded views for a variety of mathematical surfaces using our system.

Removing Image Artifacts Due to Dirty Camera Lenses and Thin Occluders

Jinwei Gu, Ravi Ramamoorthi, Peter Belhumeur, Shree Nayar SIGGRAPH Asia 2009

Dirt on camera lenses, and occlusions from thin objects such as fences, are two important types of artifacts in digital imaging systems. These artifacts are not only an annoyance for photographers, but also a hindrance to computer vision and digital forensics. In this paper, we show that both effects can be described by a single image formation model, wherein an intermediate layer (of dust, dirt or thin occluders) both attenuates the incoming light and scatters stray light towards the camera. Because of camera defocus, these artifacts are low-frequency and either additive or multiplicative, which gives us the power to recover the original scene radiance pointwise. We develop a number of physics-based methods to remove these effects from digital photographs and videos. For dirty camera lenses, we propose two methods to estimate the attenuation and the scattering of the lens dirt and remove the artifacts either by taking several pictures of a structured calibration pattern beforehand, or by leveraging natural image statistics for post-processing existing images. For artifacts from thin occluders, we propose a simple yet effective iterative method that recovers the original scene from multiple apertures. The method requires two images if the depths of the scene and the occluder layer are known, or three images if the depths are unknown. The effectiveness of our proposed methods is demonstrated by both simulated and real experimental results.
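Under the stated image formation model I = a · I0 + b (attenuation plus scattered stray light), pointwise recovery is a one-liner once a and b are known. The sketch below assumes those low-frequency maps have already been estimated, as the paper does from a calibration pattern or from natural-image statistics; the pixel values are made up.

import numpy as np

def remove_dirt(observed, attenuation, scattering, eps=1e-4):
    # Pointwise recovery under I = a * I0 + b: I0 = (I - b) / a, with a
    # small floor on the attenuation map to avoid division by zero.
    return (observed - scattering) / np.maximum(attenuation, eps)

observed    = np.array([[0.50, 0.62], [0.55, 0.70]])
attenuation = np.array([[0.80, 0.80], [0.90, 0.90]])
scattering  = np.array([[0.05, 0.05], [0.05, 0.05]])
print(remove_dirt(observed, attenuation, scattering))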

Adaptive Wavelet Rendering

Ryan Overbeck, Craig Donner, Ravi Ramamoorthi SIGGRAPH Asia 2009

Effects such as depth of field, area lighting, antialiasing and global illumination require evaluating a complex high-dimensional integral at each pixel of an image. We develop a new adaptive rendering algorithm that greatly reduces the number of samples needed for Monte Carlo integration. Our method renders directly into an image-space wavelet basis. First, we adaptively distribute Monte Carlo samples to reduce the variance of the wavelet basis scale coefficients, while using the wavelet coefficients to find edges. Working in wavelets, rather than pixels, allows us to sample not only image-space edges but also other features that are smooth in the image plane but have high variance in other integral dimensions. In the second stage, we reconstruct the image from these samples by using a suitable wavelet approximation. We achieve this by subtracting an estimate of the error in each wavelet coefficient from its magnitude, effectively producing the smoothest image consistent with the rendering samples. Our algorithm renders scenes with significantly fewer samples than basic Monte Carlo or adaptive techniques. Moreover, the method introduces minimal overhead, and can be efficiently included in an optimized ray-tracing system.
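Subtracting an error estimate from each wavelet coefficient's magnitude is soft thresholding; the sketch below shows only that reconstruction step, using PyWavelets and a single global threshold derived from an assumed noise level in place of the per-coefficient variance estimates an adaptive sampler would provide.

import numpy as np
import pywt  # PyWavelets

def denoise_wavelet(image, noise_sigma, wavelet="haar", level=3):
    # Decompose, soft-threshold the detail bands, and reconstruct.  The
    # global threshold 3*sigma is an illustrative stand-in for estimated
    # per-coefficient error.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    threshold = 3.0 * noise_sigma
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
print(np.abs(denoise_wavelet(noisy, 0.05) - clean).mean())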

User-Assisted Intrinsic Images

Adrien Bousseau, Sylvain Paris, Frédo Durand SIGGRAPH Asia 2009

For many computational photography applications, the lighting and materials in the scene are critical pieces of information. We seek to obtain intrinsic images, which decompose a photo into the product of an illumination component that represents lighting effects and a reflectance component that is the color of the observed material. This is an under-constrained problem and automatic methods are challenged by complex natural images. We describe a new approach that enables users to guide an optimization with simple indications such as regions of constant reflectance or illumination. Based on a simple assumption on local reflectance distributions, we derive a new propagation energy that enables a closed form solution using linear least-squares. We achieve fast performance by introducing a novel downsampling that preserves local color distributions. We demonstrate intrinsic image decomposition on a variety of images and show applications.

Edge-Based Image Coarsening

Raanan Fattal, Robert Carroll, Maneesh Agrawala

This paper presents a new dimensionally-reduced linear image space that allows a number of recent image manipulation techniques to be performed efficiently and robustly. The basis vectors spanning this space are constructed from a scale-adaptive image decomposition, based on kernels of the bilateral filter. Each of these vectors locally binds together pixels in smooth regions and leaves pixels across edges independent. Despite the drastic reduction in the number of degrees of freedom, this representation can be used to perform a number of recent gradient-based tonemapping techniques. In addition to reducing computation time, this space can prevent the bleeding artifacts which are common to Poisson-based integration methods. We also show that this reduced representation is useful for energy-minimization methods in achieving efficient processing and providing better matrix conditioning at a minimal quality sacrifice.

Generating Surface Crack Patterns

Hayley Iben, James F. O'Brien Graphical Models

We present a method for generating surface crack patterns that appear in materials such as mud, ceramic glaze, and glass. To model these phenomena, we build upon existing physically based methods. Our algorithm generates cracks from a stress field defined heuristically over a triangle discretization of the surface. The simulation produces cracks by evolving this field over time. The user can control the characteristics and appearance of the cracks using a set of simple parameters. By changing these parameters, we have generated examples similar to a variety of crack patterns found in the real world. We assess the realism of our results by comparison with photographs of real-world examples. Using a physically based approach also enables us to generate animations similar to time-lapse photography.

Simulating Gaseous Fluids with Low and High Speeds

Yue Gao, Chen-Feng Li, Shi-Min Hu, Brian A. Barsky Pacific Graphics 09

Gaseous fluids may move slowly, as smoke does, or at high speed, such as occurs with explosions. High-speed gas flow is always accompanied by low-speed gas flow, which produces rich visual details in the fluid motion. Realistic visualization involves a complex dynamic flow field with both low and high speed fluid behavior. In computer graphics, algorithms to simulate gaseous fluids address either the low speed case or the high speed case, but no algorithm handles both efficiently. With the aim of providing visually pleasing results, we present a hybrid algorithm that efficiently captures the essential physics of both low- and high-speed gaseous fluids. We model the low speed gaseous fluids by a grid approach and use a particle approach for the high speed gaseous fluids. In addition, we propose a physically sound method to connect the particle model to the grid model. By exploiting complementary strengths and avoiding weaknesses of the grid and particle approaches, we produce some animation examples and analyze their computational performance to demonstrate the effectiveness of the new hybrid method.

Three Techniques for Rendering Generalized Depth of Field Effects

Todd J. Kosloff, Brian A. Barsky MI 09

Depth of field refers to the swath that is imaged in sufficient focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool that can be used to emphasize the subject of a photograph. In a real camera, the control over depth of field is limited by the laws of physics and by physical constraints. Depth of field has been rendered in computer graphics, but usually with the same limited control as found in real camera lenses. In this paper, we generalize depth of field in computer graphics by allowing the user to specify the distribution of blur throughout a scene in a more flexible manner. Generalized depth of field provides a novel tool to emphasize an area of interest within a 3D scene, to select objects from a crowd, and to render a busy, complex picture more understandable by focusing only on relevant details that may be scattered throughout the scene. We present three approaches for rendering generalized depth of field based on nonlinear distributed ray tracing, compositing, and simulated heat diffusion. Each of these methods has a different set of strengths and weaknesses, so it is useful to have all three available. The ray tracing approach allows the amount of blur to vary with depth in an arbitrary way. The compositing method creates a synthetic image with focus and aperture settings that vary per-pixel. The diffusion approach provides full generality by allowing each point in 3D space to have an arbitrary amount of blur.

Radiometric Compensation Using Stratified Inverses

Tian-Tsong Ng, Ramanpreet S. Pahwa, Jiamin Bai, Tony Q. S. Quek, Kar-han Tan ICCV 2009

Through radiometric compensation, a projector-camera system can project a desired image onto a non-flat and non-white surface. This can be achieved by computing the inverse light transport of a scene. A light transport matrix is in general large, on the order of 10^6 × 10^6 elements. Therefore, computing the inverse light transport matrix is computationally and memory intensive. Two prior methods were proposed to simplify matrix inversion by ignoring scene inter-reflection between individual or clusters of camera pixels. However, compromising scene inter-reflection in the spatial domain introduces spatial artifacts, and how to systematically adjust the compensation quality is not obvious. In this work, we show how scene inter-reflection can be systematically approximated by stratifying the light transport of a scene. The stratified light transport enables a similar stratification in the inverse light transport. We can show that the stratified inverse light transport converges to the true inverse. For radiometric compensation, the set of stratified inverse light transports provides a systematic way of quantifying the tradeoff between computational efficiency and accuracy. The framework of stratified matrix inversion is general and can have other applications, especially for applications that involve large sparse matrices.
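To make the inverse-light-transport idea concrete, here is a generic truncated Neumann-series sketch: the transport matrix is split into an easily inverted direct part D and an inter-reflection part F, and the inverse is expanded as a series. The paper's stratified inverses are a more systematic family of approximations with convergence guarantees; the split, the names, and the diagonal D below are assumptions made purely for illustration.

```python
import numpy as np

def truncated_inverse_apply(D_diag, F, p_desired, order=3):
    """Approximate (D + F)^{-1} p_desired with sum_k (-D^{-1} F)^k D^{-1} p_desired."""
    D_inv = 1.0 / D_diag                  # D assumed diagonal (direct transport)
    term = D_inv * p_desired              # k = 0 term of the series
    p = term.copy()
    for _ in range(order):
        term = -D_inv * (F @ term)        # next term: apply -D^{-1} F again
        p += term
    return p                              # projector input compensating inter-reflection
```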

Interactive Simulation of Surgical Needle Insertion and Steering

Nuttapong Chentanez, Ron Alterovitz, Daniel Ritchie, Lita Cho, Kris Hauser, Ken Goldberg, Jonathan Shewchuk, James F. O'Brien SIGGRAPH 2009

We present algorithms for simulating and visualizing the insertion and steering of needles through deformable tissues for surgical training and planning. Needle insertion is an essential component of many clinical procedures such as biopsies, injections, neurosurgery, and brachytherapy cancer treatment. The success of these procedures depends on accurate guidance of the needle tip to a clinical target while avoiding vital tissues. Needle insertion deforms body tissues, making accurate placement difficult. Our interactive needle insertion simulator models the coupling between a steerable needle and deformable tissue. We introduce (1) a novel algorithm for local remeshing that quickly enforces the conformity of a tetrahedral mesh to a curvilinear needle path, enabling accurate computation of contact forces, (2) an efficient method for coupling a 3D finite element simulation with a 1D inextensible rod with stick-slip friction, and (3) optimizations that reduce the computation time for physically based simulations. We can realistically and interactively simulate needle insertion into a prostate mesh of 13,375 tetrahedra and 2,763 vertices at a 25 Hz frame rate on an 8-core 3.0 GHz Intel Xeon PC. The simulation models prostate brachytherapy with needles of varying stiffness, steering needles around obstacles, and supports motion planning for robotic needle insertion. We evaluate the accuracy of the simulation by comparing against real-world experiments in which flexible, steerable needles were inserted into gel tissue phantoms.

Real-Time Deformation and Fracture in a Game Environment

Eric G. Parker, James F. O'Brien SCA 2009

This paper describes a simulation system that has been developed to model the deformation and fracture of solid objects in a real-time gaming context. Based around a corotational tetrahedral finite element method, this system has been constructed from components published in the graphics and computational physics literatures. The goal of this paper is to describe how these components can be combined to produce an engine that is robust to unpredictable user interactions, fast enough to model reasonable scenarios at real-time speeds, suitable for use in the design of a game level, and with appropriate controls allowing content creators to match artistic direction. Details concerning parallel implementation, solver design, rendering method, and other aspects of the simulation are elucidated with the intent of providing a guide to others wishing to implement similar systems. Examples from in-game scenes captured on the Xbox 360, PS3, and PC platforms are included. This paper received the award for best paper at SCA 2009.

3D Clothing Fitting Based on the Geometric Feature Matching

Zhong Li, Xiaogang Jin, Brian Barsky, Jun Liu CAD/Graphics 2009

3D clothing fitting on a body model is an important research topic in garment computer-aided design (GCAD). During the fitting process, the match between the clothing and body models is still a problem for researchers. In this paper, we provide a 3D clothing fitting method based on feature point matching. We first use a new cubic-order weighted fitting patch to estimate the geometric properties of each vertex on the two mesh models. Feature points are then extracted from the two models and a new matching function is constructed to match them according to curvature and torsion. We interactively select several key feature points from the two limited feature point sets to compute the transformation matrix of the clothing model. Finally, a second match is performed to achieve a precise match between the clothing and body models. The experimental results show that our 3D clothing fitting method is simple and effective.

Optimizing Content-Preserving Projections for Wide-Angle Images

Robert Carroll, Maneesh Agrawala, Aseem Agarwala SIGGRAPH 2009

Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections.

Generating Photo Manipulation Tutorials by Demonstration

Floraine Grabler, Maneesh Agrawala, Wilmot Li, Mira Dontcheva, Takeo Igarashi

We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study comparing our automatically generated tutorials to hand-designed tutorials and screen-capture video recordings finds that users are 20–44% faster and make 60–95% fewer errors using our tutorials. While our system focuses on tutorial generation, we also present some initial work on generating content-dependent macros that use image recognition to automatically transfer selection operations from the example image used in the demonstration to new target images. While our macros are limited to transferring selection operations, we demonstrate automatic transfer of several common retouching techniques including eye recoloring, whitening teeth and sunset enhancement.

An Empirical BSSRDF Model

Craig Donner, Jason Lawrence, Ravi Ramamoorthi, Toshiya Hachisuka, Henrik Wann Jensen, Shree Nayar SIGGRAPH 09

We present a new model of the homogeneous BSSRDF based on large-scale simulations. Our model captures the appearance of materials that are not accurately represented using existing single scattering models or multiple isotropic scattering models (e.g. the diffusion approximation). We use an analytic function to model the 2D hemispherical distribution of exitant light at a point on the surface, and a table of parameter values of this function computed at uniformly sampled locations over the remaining dimensions of the BSSRDF domain. This analytic function is expressed in elliptic coordinates and has six parameters which vary smoothly with surface position, incident angle, and the underlying optical properties of the material (albedo, mean free path length, phase function and the relative index of refraction). Our model agrees well with measured data, and is compact, requiring only 250MB to represent the full spatial- and angular-distribution of light across a wide spectrum of materials. In practice, rendering a single material requires only about 100KB to represent the BSSRDF.

Frequency Analysis and Sheared Reconstruction for Rendering Motion Blur

Kevin Egan, Yu-Ting Tseng, Nicolas Holzschuch, Fredo Durand, Ravi Ramamoorthi SIGGRAPH 09

Motion blur is crucial for high-quality rendering, but is also very expensive. Our first contribution is a frequency analysis of motion-blurred scenes, including moving objects, specular reflections, and shadows. We show that motion induces a shear in the frequency domain, and that the spectrum of moving scenes can be approximated by a wedge. This allows us to compute adaptive space-time sampling rates, to accelerate rendering. For uniform velocities and standard axis-aligned reconstruction, we show that the product of spatial and temporal bandlimits or sampling rates is constant, independent of velocity. Our second contribution is a novel sheared reconstruction filter that is aligned to the first-order direction of motion and enables even lower sampling rates. We present a rendering algorithm that computes a sheared reconstruction filter per pixel, without any intermediate Fourier representation. This often permits synthesis of motion-blurred images with far fewer rendering samples than standard techniques require.

Moving Gradients: A Path-Based Method for Plausible Image Interpolation

Dhruv Mahajan, Fu-Chung Huang, Wojciech Matusik, Ravi Ramamoorthi, Peter Belhumeur SIGGRAPH 09

We describe a method for plausible interpolation of images, with a wide range of applications like temporal up-sampling for smooth playback of lower frame rate video, smooth view interpolation, and animation of still images. The method is based on the intuitive idea that a given pixel in the interpolated frames traces out a path in the source images. Therefore, we simply move and copy pixel gradients from the input images along this path. A key innovation is to allow arbitrary (asymmetric) transition points, where the path moves from one image to the other. This flexible transition preserves the frequency content of the originals without ghosting or blurring, and maintains temporal coherence. Perhaps most importantly, our framework makes occlusion handling particularly simple. The transition points allow for matches away from the occluded regions, at any suitable point along the path. Indeed, occlusions do not need to be handled explicitly at all in our initial graph-cut optimization. Moreover, a simple comparison of computed path lengths after the optimization allows us to robustly identify occluded regions, and compute the most plausible interpolation in those areas. Finally, we show that significant improvements are obtained by moving gradients and using Poisson reconstruction.

Tubular Sculptures

Carlo H. Séquin Bridges 2009

This paper reviews ways in which many artists have constructed large sculptures from tubular elements, ranging from single cylinders to toroidal or knotted structures, to assemblies of a large number of bent tubes. A few parameterized generators are introduced that facilitate design and evaluation of a variety of such sculptural forms.

Visualizing High-Order Surface Geometry

Pushkar P. Joshi, Carlo H. Séquin CAD&A 2009

We have derived parameters that describe the higher-order geometric behavior of smooth surfaces. Our parameters are similar in spirit to the principal directions and principal curvatures that succinctly capture second-order shape behavior. We derive our parameters from a cylindrical Fourier decomposition around the surface normal. We present a visualization program for studying the influence of the various terms of different degrees on the shape of the local neighborhood of a surface point. We display a small surface patch that is controlled by two sets of parameters: One set is a simple polynomial description of the surface geometry in Cartesian coordinates. The other one is a set of Fourier components grouped by angular frequency and by their phase shifts. Manipulating the values in one parameter set changes the geometry of the patch and also updates the parameter values of the other set.

An Effective Third-order Local Fitting Patch and Its Application

Zhong Li, Brian Barsky, Xiaogang Jin SMI 2009

In this paper, we extend Razdan and Bae's second-order local fitting method [11] to construct an effective third-order fitting patch. Compared to other estimation algorithms, this weighted bicubic Bézier patch more accurately obtains the normal vector and curvature estimates of a triangular mesh model. Furthermore, we define the principal geodesic torsion of each vertex on the mesh model and estimate it through this local fitting patch. Finally, we apply the third-order fitting patch to mesh smoothing and hole-filling, which yields satisfactory results.

Ribbed Surfaces for Art, Architecture, and Visualization

James Hamlin, Carlo H. Séquin CAD&A 2009

Sequences of parameterized Hermite curves, with their endpoints following along two guide rails, are used to create "transparent" surfaces and tubular sculptures. This parameterized set-up allows modeling a wide variety of shapes in a natural way by just changing a few parameters. Potential applications range from mathematical visualization models to architecture.

CAD Tools for Creating Space-filling 3D Escher Tiles

Mark Howison, Carlo H. Séquin CAD&A 2009

We discuss the design and implementation of CAD tools for creating decorative solids that tile 3-space in a regular, isohedral manner. Starting with the simplest case of extruded 2D tilings, we describe geometric algorithms used for maintaining boundary representations of 3D tiles, including a Java implementation of an interactive constrained Delaunay triangulation library and a mesh-cutting algorithm used in layering extruded tiles to create more intricate designs. Finally, we demonstrate a CAD tool for creating 3D tilings that are derived from cubic lattices. The design process for these 3D tiles is more constrained, and hence more difficult, than in the 2D case, and it raises additional user interface issues.

Interpolating Splines: Which is the fairest of them all?

Raph Levien, Carlo H. Séquin CAD&A 2009

Interpolating splines are a basic primitive for designing planar curves. There is a wide diversity in the literature but no consensus on a "best" spline, or even criteria for preferring one spline over another. For the case of G2-continuous splines, we emphasize two properties that can arguably be expected in any definition of "best" and show that any such spline is made from segments cut from a single generator curve, such as the Euler spiral.
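The Euler spiral mentioned as a generator curve is easy to sample, since its coordinates are the Fresnel integrals (curvature grows linearly with arc length). The snippet below only traces the curve itself; it says nothing about the paper's spline construction, and the parameter names are made up.

```python
import numpy as np
from scipy.special import fresnel

def euler_spiral(num_points=200, t_max=3.0):
    """Return (x, y) samples of the Euler spiral over the arc-length parameter t."""
    t = np.linspace(-t_max, t_max, num_points)
    S, C = fresnel(t)            # S(t) = int sin(pi u^2 / 2) du, C(t) = int cos(pi u^2 / 2) du
    return np.stack([C, S], axis=1)
```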

Depth of Field Postprocessing For Layered Scenes Using Constant-Time Rectangle Spreading

Todd Kosloff, Michael W. Tao, Brian Barsky GI 2009

Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, the entire scene appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are either slow or suffer from artifacts, or restrict the choice of point spread function (PSF). In this paper, we present a new image filter based on rectangle spreading which is constant time per pixel. When used in a layered depth of field framework, our filter eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method to allow flexibility in the appearance of the blur through control over the PSF.
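A minimal sketch of constant-time-per-pixel rectangle spreading, under the usual formulation: each pixel splats a normalized box PSF by writing four signed corner values into an accumulation buffer, and a single summed-area (prefix-sum) pass fills in the rectangles. The per-pixel radii here are supplied directly and are illustrative; the paper derives them from the layer's depth and a lens model, and adds further PSF controls.

```python
import numpy as np

def rectangle_spread(image, radius):
    """image: HxW intensities; radius: HxW integer half-widths of each pixel's box PSF."""
    H, W = image.shape
    acc = np.zeros((H + 1, W + 1))
    for y in range(H):
        for x in range(W):
            r = int(radius[y, x])
            y0, y1 = max(y - r, 0), min(y + r + 1, H)
            x0, x1 = max(x - r, 0), min(x + r + 1, W)
            v = image[y, x] / ((y1 - y0) * (x1 - x0))   # normalize the PSF energy
            acc[y0, x0] += v
            acc[y0, x1] -= v
            acc[y1, x0] -= v
            acc[y1, x1] += v
    # Prefix sums along both axes turn the signed corners into filled rectangles.
    return np.cumsum(np.cumsum(acc, axis=0), axis=1)[:H, :W]
```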

Determining the Benefits of Direct-Touch, Bimanual, and Multifinger Input on a Multitouch Workstation

Kenrick Kin, Maneesh Agrawala, Tony DeRose

Multitouch workstations support direct-touch, bimanual, and multifinger interaction. Previous studies have separately examined the benefits of these three interaction attributes over mouse-based interactions. In contrast, we present an empirical user study that considers these three interaction attributes together for a single task, such that we can quantify and compare the performances of each attribute. In our experiment users select multiple targets using either a mouse-based workstation equipped with one mouse, or a multitouch workstation using either one finger, two fingers (one from each hand), or multiple fingers. We find that the fastest multitouch condition is about twice as fast as the mouse-based workstation, independent of the number of targets. Direct-touch with one finger accounts for an average of 83% of the reduction in selection time. Bimanual interaction, using at least two fingers, one on each hand, accounts for the remaining reduction in selection time. Further, we find that for novice multitouch users there is no significant difference in selection time between using one finger on each hand and using any number of fingers for this task. Based on these observations we conclude with several design guidelines for developing multitouch user interfaces.

Parallax Photography: Creating 3D Cinematic Effects from Stills

Ke Colin Zheng, Alex Colburn, Aseem Agarwala, Maneesh Agrawala, Brian Curless, David Salesin, Michael Cohen

We present an approach to convert a small portion of a light field with extracted depth information into a cinematic effect with simulated, smooth camera motion that exhibits a sense of 3D parallax. We develop a taxonomy of the cinematic conventions of these effects, distilled from observations of documentary film footage and organized by the number of subjects of interest in the scene. We present an automatic, content-aware approach to apply these cinematic conventions to an input light field. A face detector identifies subjects of interest. We then optimize for a camera path that conforms to a cinematic convention, maximizes apparent parallax, and avoids missing information in the input. We describe a GPU accelerated, temporally coherent rendering algorithm that allows users to create more complex camera moves interactively, while experimenting with effects such as focal length, depth of field, and selective, depth-based desaturation or brightening. We evaluate and demonstrate our approach on a wide variety of scenes and present a user study that compares our 3D cinematic effects to their 2D counterparts.

Precomputation-Based Rendering

Ravi Ramamoorthi Foundations and Trends

High quality image synthesis is a long-standing goal in computer graphics. Complex lighting, reflection, shadow and global illumination effects can be rendered with modern image synthesis algorithms, but those methods are focused on offline computation of a single image. They are far from interactive, and the image must be recomputed from scratch when any aspect of the scene changes. On the other hand, real-time rendering often fixes the object geometry and other attributes, such as relighting a static image for lighting design. In these cases, the final image or rendering is a linear combination of basis images or radiance distributions due to individual lights. We can therefore precompute offline solutions to each individual light or lighting basis function, combining them efficiently for real-time image synthesis. Precomputation-based relighting and radiance transfer has a long history with a spurt of renewed interest, including adoption in commercial video games, due to recent mathematical developments and hardware advances. In this survey, we describe the mathematical foundations, history, current research and future directions for precomputation-based rendering.
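The linearity that this survey builds on can be stated in a couple of lines of code: with geometry and view fixed, the rendered image is a weighted sum of precomputed basis images, one per light or per lighting basis function. All names below are illustrative, not from the survey.

```python
import numpy as np

def relight(basis_images, light_coeffs):
    """basis_images: (N, H, W, 3) rendered offline; light_coeffs: (N,) lighting weights."""
    return np.tensordot(light_coeffs, basis_images, axes=1)   # sum_i c_i * B_i
```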

Affine Double and Triple Product Wavelet Integrals for Rendering

Bo Sun, Ravi Ramamoorthi TOG

Many problems in computer graphics involve integrations of products of functions. Double- and triple-product integrals are commonly used in applications such as all-frequency relighting or importance sampling, but are limited to distant illumination. In contrast, near-field lighting from planar area lights involves an affine transform of the source radiance at different points in space. Our main contribution is a novel affine double- and triple-product integral theory; this generalization enables one of the product functions to be scaled and translated. We study the computational complexity in a number of bases, with particular attention to the common Haar wavelets. We show that while simple analytic formulae are not easily available, there is considerable sparsity that can be exploited computationally. We demonstrate a practical application to compute near-field lighting from planar area sources, that can be easily combined with most relighting algorithms. We also demonstrate initial results for wavelet importance sampling with near-field area lights, and image processing directly in the wavelet domain.
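For context, the ordinary (non-affine) product integrals that the paper generalizes look like this in an orthonormal basis: a double product is a dot product of coefficient vectors, and a triple product contracts against precomputed tripling coefficients C[i, j, k] = ∫ B_i B_j B_k. The affine case additionally scales and translates one of the functions, which is what the paper analyzes; the code below is only that baseline.

```python
import numpy as np

def double_product(f_coeffs, g_coeffs):
    """int f(x) g(x) dx for functions expanded in the same orthonormal basis."""
    return float(np.dot(f_coeffs, g_coeffs))

def triple_product(f_coeffs, g_coeffs, h_coeffs, C):
    """int f g h dx, with C[i, j, k] = int B_i B_j B_k dx precomputed."""
    return float(np.einsum("i,j,k,ijk->", f_coeffs, g_coeffs, h_coeffs, C))
```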

Refolding Planar Polygons

Hayley Iben, James F. O'Brien, Erik Demaine DCG

This paper describes an algorithm for generating a guaranteed intersection-free interpolation sequence between any pair of compatible polygons. Our algorithm builds on prior results from linkage unfolding, and if desired it can ensure that every edge length changes monotonically over the course of the interpolation sequence. The computational machinery that ensures against self-intersection is independent from a distance metric that determines the overall character of the interpolation sequence. This decoupled approach provides a powerful control mechanism for determining how the interpolation should appear, while still assuring against intersection and guaranteeing termination of the algorithm. Our algorithm also allows additional control by accommodating a set of algebraic constraints that can be weakly enforced throughout the interpolation sequence.

Compressive Light Transport Sensing

Pieter Peers, Dhruv Mahajan, Bruce Lamond, Abhijeet Ghosh, Wojciech Matusik, Ravi Ramamoorthi, Paul Debevec TOG

In this article we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing. Compressive sensing offers a solid mathematical framework to infer a sparse signal from a limited number of nonadaptive measurements. Besides introducing compressive sensing for fast acquisition of light transport to computer graphics, we develop several innovations that address specific challenges for image-based relighting, and which may have broader implications. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting interpixel coherency relations. Additionally, we design new nonadaptive illumination patterns that minimize measurement noise and further improve reconstruction quality. We illustrate our framework by capturing detailed high-resolution reflectance fields for image-based relighting.
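As a stand-in for the decoding step, the sketch below recovers a sparse signal from nonadaptive measurements y = Phi @ x by l1-regularized least squares using plain iterative soft-thresholding (ISTA). The paper's hierarchical decoder additionally exploits inter-pixel coherence and uses specially designed illumination patterns; none of that is reproduced here, and the names are illustrative.

```python
import numpy as np

def ista(Phi, y, lam=0.01, iters=200):
    """Minimize 0.5*||Phi x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2           # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold
    return x
```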

Sizing the Horizon: The Effects of Chart Size and Layering on the Graphical Perception of Time Series Visualizations

Jeffrey Heer, Nicholas Kong, Maneesh Agrawala

We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with

Perceptual Interpretation of Ink Annotations on Line Charts

Nicholas Kong, Maneesh Agrawala

Asynchronous collaborators often use freeform ink annotations to point to visually salient perceptual features of line charts such as peaks or humps, valleys, rising slopes and declining slopes. We present a set of techniques for interpreting such annotations to algorithmically identify the corresponding perceptual parts. Our approach is to first apply a parts-based segmentation algorithm that identifies the visually salient perceptual parts in the chart. Our system then analyzes the freeform annotations to infer the corresponding peaks, valleys or sloping segments. Once the system has identified the perceptual parts it can highlight them to draw further attention and reduce ambiguity of interpretation in asynchronous collaborative discussions.

A layered, heterogeneous reflectance model for acquiring and rendering human skin

Craig Donner, Tim Weyrich, Eugene d'Eon, Ravi Ramamoorthi, Szymon Rusinkiewicz SIGGRAPH ASIA 08

We introduce a layered, heterogeneous spectral reflectance model for human skin. The model captures the inter-scattering of light among layers, each of which may have an independent set of spatially-varying absorption and scattering parameters. For greater physical accuracy and control, we introduce an infinitesimally thin absorbing layer between scattering layers. To obtain parameters for our model, we use a novel acquisition method that begins with multi-spectral photographs. By using an inverse rendering technique, along with known chromophore spectra, we optimize for the best set of parameters for each pixel of a patch. Our method finds close matches to a wide variety of inputs with low residual error. We apply our model to faithfully reproduce the complex variations in skin pigmentation. This is in contrast to most previous work, which assumes that skin is homogeneous or composed of homogeneous layers. We demonstrate the accuracy and flexibility of our model by creating complex skin visual effects such as veins, tattoos, rashes, and freckles, which would be difficult to author using only albedo textures at the skin's outer surface. Also, by varying the parameters to our model, we simulate effects from external forces, such as visible changes in blood flow within the skin due to external pressure.

Interactive 3D Architectural Modeling from Unordered Photo Collections

Sudipta Sinha, Drew Steedly, Rick Szeliski, Maneesh Agrawala, Marc Pollefeys

We present an interactive system for generating photorealistic, textured, piecewise-planar 3D models of architectural structures and urban scenes from unordered sets of photographs. To reconstruct 3D geometry in our system, the user draws outlines overlaid on 2D photographs. The 3D structure is then automatically computed by combining the 2D interaction with the multi-view geometric information recovered by performing structure from motion analysis on the input photographs. We utilize vanishing point constraints at multiple stages during the reconstruction, which is particularly useful for architectural scenes where parallel lines are abundant. Our approach enables us to accurately model polygonal faces from 2D interactions in a single image. Our system also supports useful operations such as edge snapping and extrusions. Seamless texture maps are automatically generated by combining multiple input photographs using graph cut optimization and Poisson blending. The user can add brush strokes as hints during the texture generation stage to remove artifacts caused by unmodeled geometric structures. We build models for a variety of architectural scenes from collections of up to about a hundred photographs.

Video Puppetry: A Performative Interface for Cutout Animation

Connelly Barnes, David E. Jacobs, Jason Sanders, Dan B Goldman, Szymon Rusinkiewicz, Adam Finkelstein, Maneesh Agrawala

We present a video-based interface that allows users of all skill levels to quickly create cutout-style animations by performing the character motions. The puppeteer first creates a cast of physical puppets using paper, markers and scissors. He then physically moves these puppets to tell a story. Using an inexpensive overhead camera our system tracks the motions of the puppets and renders them on a new background while removing the puppeteer's hands. Our system runs in real-time (at 30 fps) so that the puppeteer and the audience can immediately see the animation that is created. Our system also supports a variety of constraints and effects including articulated characters, multi-track animation, scene changes, camera controls, 2½-D environments, shadows, and animation cycles. Users have evaluated our system both quantitatively and qualitatively: In tests of low-level dexterity, our system has similar accuracy to a mouse interface. For simple storytelling, users prefer our system over either a mouse interface or traditional puppetry. We demonstrate that even first-time users, including an eleven-year-old, can use our system to quickly turn an original story idea into an animation.

Searching the World's Herbaria: A System for Visual Identification of Plant Species

Peter N. Belhumeur, Daozheng Chen, Steven Feiner, David W. Jacobs, W. John Kress, Haibin Ling, Ida Lopez, Ravi Ramamoorthi, Sameer Sheorey, Sean White, Ling Zhang ECCV 2008

We describe a working computer vision system that aids in the identification of plant species. A user photographs an isolated leaf on a blank background, and the system extracts the leaf shape and matches it to the shape of leaves of known species. In a few seconds, the system displays the top matching species, along with textual descriptions and additional images. This system is currently in use by botanists at the Smithsonian Institution National Museum of Natural History. The primary contributions of this paper are: a description of a working computer vision system and its user interface for an important new application area; the introduction of three new datasets containing thousands of single leaf images, each labeled by species and verified by botanists at the US National Herbarium; recognition results for two of the three leaf datasets; and descriptions throughout of practical lessons learned in constructing this system.

Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur, Ravi Ramamoorthi ECCV 2008

Large Ray Packets for Real-time Whitted Ray Tracing

Ryan Overbeck, Ravi Ramamoorthi, William R. Mark IRT 2008

In this paper, we explore large ray packet algorithms for acceleration structure traversal and frustum culling in the context of Whitted ray tracing, and examine how these methods respond to varying ray packet size, scene complexity, and ray recursion complexity. We offer a new algorithm for acceleration structure traversal which is robust to degrading coherence and a new method for generating frustum bounds around reflection and refraction ray packets. We compare, adjust, and finally compose the most effective algorithms into a real-time Whitted ray tracer. With the aid of multi-core CPU technology, our system renders complex scenes with reflections, refractions, and/or point-light shadows anywhere from 4–20 FPS.

Light Field Transfer: Global Illumination Between Real and Synthetic Objects

O. Cossairt, S. K. Nayar, Ravi Ramamoorthi SIGGRAPH 08

We present a novel image-based method for compositing real and synthetic objects in the same scene with a high degree of visual realism. Ours is the first technique to allow global illumination and near-field lighting effects between both real and synthetic objects at interactive rates, without needing a geometric and material model of the real scene. We achieve this by using a light field interface between real and synthetic components; thus, indirect illumination can be simulated using only two 4D light fields, one captured from and one projected onto the real scene. Multiple bounces of interreflections are obtained simply by iterating this approach. The interactivity of our technique enables its use with time-varying scenes, including dynamic objects. This is in sharp contrast to the alternative approach of using 6D or 8D light transport functions of real objects, which are very expensive in terms of acquisition and storage and hence not suitable for real-time applications. In our method, 4D radiance fields are simultaneously captured and projected by using a lens array, video camera, and digital projector. The method supports full global illumination with restricted object placement, and accommodates moderately specular materials. We implement a complete system and show several example scene compositions that demonstrate global illumination effects between dynamic real and synthetic objects. Our implementation requires a single point light source and dark background.

Multiscale Texture Synthesis

Charles Han, Eric Risser, Ravi Ramamoorthi, Eitan Grinspun SIGGRAPH 2008

Example-based texture synthesis algorithms have gained widespread popularity for their ability to take a single input image and create a perceptually similar non-periodic texture. However, previous methods rely on single input exemplars that can capture only a limited band of spatial scales. For example, synthesizing a continent-like appearance at a variety of zoom levels would require an impractically high input resolution. In this paper, we develop a multiscale texture synthesis algorithm. We propose a novel example-based representation, which we call an exemplar graph, that simply requires a few low-resolution input exemplars at different scales. Moreover, by allowing loops in the graph, we can create infinite zooms and infinitely detailed textures that are impossible with current example-based methods. We also introduce a technique that ameliorates inconsistencies in the user's input, and show that the application of this method yields improved interscale coherence and higher visual quality. We demonstrate optimizations for both CPU and GPU implementations of our method, and use them to produce animations with zooming and panning at multiple scales, as well as static gigapixel-sized images with features spanning many spatial scales.

Automatic Generation of Tourist Maps

Floraine Grabler, Maneesh Agrawala, Robert W. Sumner, Mark Pauly

Tourist maps are essential resources for visitors to an unfamiliar city because they visually highlight landmarks and other points of interest. Yet, hand-designed maps are static representations that cannot adapt to the needs and tastes of the individual tourist. In this paper we present an automated system for designing tourist maps that selects and highlights the information that is most important to tourists. Our system determines the salience of map elements using bottom-up vision-based image analysis and top-down web-based information extraction techniques. It then generates a map that emphasizes the most important elements, using a combination of multiperspective rendering to increase visibility of streets and landmarks, and cartographic generalization techniques such as simplification, deformation, and displacement to emphasize landmarks and de-emphasize less important buildings. We show a number of automatically generated tourist maps of San Francisco and compare them to existing automated and manual approaches.

The Assumed Light Direction for Perceiving Shape from Shading

James P. O'Shea, Martin S. Banks, Maneesh Agrawala

Recovering 3D shape from shading is an ill-posed problem that the visual system can solve only by making use of additional information such as the position of the light source. Previous research has shown that people tend to assume light is above and slightly to the left of the object [Sun and Perona 1998]. We present a study to investigate whether the visual system also assumes the angle between the light direction and the viewing direction. We conducted a shape perception experiment in which subjects estimated surface orientation on smooth, virtual 3D shapes displayed monocularly using local Lambertian shading without cast shadows. We varied the angle between the viewing direction and the light direction within a range +/- 66 deg (above/below), and subjects indicated local surface orientation by rotating a gauge figure to appear normal to the surface [Koenderink et al. 1992]. Observer settings were more accurate and precise when the light was positioned above rather than below the viewpoint. Additionally, errors were minimized when the angle between the light direction and the viewing direction was 20-30 deg. Measurements of surface slant and tilt error support this result. These findings confirm the light-from-above prior and provide evidence that the angle between the viewing direction and the light direction is assumed to be 20-30 deg above the viewpoint.

Automated Generation of Interactive 3D Exploded View Diagrams

Wilmot Li, Maneesh Agrawala, Brian Curless, David Salesin

We present a system for creating and viewing interactive exploded views of complex 3D models. In our approach, a 3D input model is organized into an explosion graph that encodes how parts explode with respect to each other. We present an automatic method for computing explosion graphs that takes into account part hierarchies in the input models and handles common classes of interlocking parts. Our system also includes an interface that allows users to interactively explore our exploded views using both direct controls and higher-level interaction modes.

Intricate Isohedral Tilings of 3D Euclidean Space

Carlo H. Séquin Bridges 2008

Various methods to create intricate tilings of 3D space are presented. They include modulated extrusions of 2D Escher tilings, free-form deformations of the fundamental domain of various 3D symmetry groups, highly symmetrical polyhedral toroids of genus 1, higher-genus cage structures derived from the cubic lattice as well as from the diamond and triamond lattices, and finally interlinked tiles with the connectivity of simple knots.

An Analysis of the In-Out BRDF Factorization for View-Dependent Relighting

Dhruv Mahajan, Yu-Ting Tseng, Eugene d'Eon, Ravi Ramamoorthi ESR 08

Interactive rendering with dynamic natural lighting and changing view is a long-standing goal in computer graphics. Recently, precomputation-based methods for all-frequency relighting have made substantial progress in this direction. Many of the most successful algorithms are based on a factorization of the BRDF into incident and outgoing directions, enabling each term to be precomputed independent of viewing direction, and recombined at run-time. However, there has so far been no theoretical understanding of the accuracy of this factorization, nor the number of terms needed. In this paper, we conduct a theoretical and empirical analysis of the BRDF in-out factorization. For Phong BRDFs, we obtain analytic results, showing that the number of terms needed grows linearly with the Phong exponent, while the factors correspond closely to spherical harmonic basis functions. More generally, the number of terms is quadratic in the frequency content of the BRDF along the reflected or half-angle direction. This analysis gives clear practical guidance on the number of factors needed for a given material. Different objects in a scene can each be represented with the correct number of terms needed for that particular BRDF, enabling both accuracy and interactivity.

Making Big Things Look Small: Blur combined with other depth cues affects perceived size and distance

Robert (Robin) Held, Emily Cooper, James F. O'Brien, Marty Banks VSS 2008

Blur is commonly considered a weak distance cue, but photographic techniques that manipulate blur cause significant and compelling changes in the perceived distance and size of objects. One such technique is "tilt-shift miniaturization," in which a camera's lens is translated and slanted relative to the film plane. The result is an exaggerated vertical blur gradient that makes scenes with a vertical distance gradient (e.g., bird's-eye view of landscape) appear significantly nearer and therefore smaller. We will begin by demonstrating this compelling effect, and then describe how we used it to examine the visual system's use of blur as a cue to distance and size. In a psychophysical experiment, we presented computer-generated, bird's-eye images of a highly realistic model of a city. Blur was manipulated in four ways: 1) sharp images with no blur; 2) horizontal blur gradients were applied to those images; 3) vertical gradients were applied; 4) a large aperture (diameter up to 60m) was used to create an image with an accurate correlation between blur and depth for realizable, small-scale scenes. Observers indicated the perceived distance to objects in the images. Technique 1 produced a convincing impression of a full-sized scene. Technique 2 produced no systematic miniaturization. Techniques 3 and 4 produced significant and similar miniaturization. Thus, the correlation between blur and the depth indicated by other cues affects perceived distance and size. The correlation must be only reasonably accurate to produce a significant and systematic effect. We developed a probabilistic model of the relationship between blur and distance. An interesting prediction of the model is that blur only affects perceived distance when coupled with other distance cues, which is manifested in the tilt-shift effect we observed in humans. Thus, blur is a useful cue to absolute distance when coupled with other depth information.
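The "vertical blur gradient" manipulation described above is straightforward to fake in software: blur strength grows with a pixel row's distance from a chosen in-focus row. The sketch below is an intentionally slow, illustrative version (it re-blurs the whole image once per row) and is not the stimulus-generation code used in the study; all names and defaults are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_tilt_shift(image, focus_row, max_sigma=8.0):
    """image: HxW grayscale array; focus_row: the row kept in sharp focus."""
    H = image.shape[0]
    out = np.empty(image.shape, dtype=float)
    for y in range(H):
        sigma = max_sigma * abs(y - focus_row) / H      # blur grows away from the focus row
        out[y] = gaussian_filter(image, sigma=sigma)[y] if sigma > 0 else image[y]
    return out
```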

A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination

Aner Ben-Artzi, Kevin Egan, Frédo Durand, Ravi Ramamoorthi TOG

The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view) by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when considering interreflections. We introduce a framework for precomputing a multibounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color-bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing for precomputing the tensor of coefficients for BRDF basis functions. At runtime, the high-dimensional tensors can be reduced to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes with interactive feedback rendered with global illumination.
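The key nonlinearity can be illustrated with a toy runtime evaluation: with interreflections, a pixel's value is a low-order polynomial in the BRDF basis weights w, so rendering contracts precomputed coefficient tensors against w once per bounce order. The two-term truncation and the names T1, T2 below are assumptions made for illustration, not the paper's actual data layout.

```python
import numpy as np

def eval_pixel(w, T1, T2):
    """w: (N,) BRDF basis weights; T1: (N,) one-bounce and T2: (N, N) two-bounce coefficients."""
    return float(T1 @ w + w @ T2 @ w)   # linear term + quadratic (interreflection) term
```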

A Theory of Frequency Domain Invariants: Spherical Harmonic Identities for BRDF/Lighting Transfer and Image Consistency

Dhruv Mahajan, Ravi Ramamoorthi, Brian Curless PAMI 2008

This work develops a theory of frequency domain invariants in computer vision. We derive novel identities using spherical harmonics, which are the angular frequency domain analog to common spatial domain invariants such as reflectance ratios. These invariants are derived from the spherical harmonic convolution framework for reflection from a curved surface. Our identities apply in a number of canonical cases, including single and multiple images of objects under the same and different lighting conditions. One important case we consider is two different glossy objects in two different lighting environments. For this case, we derive a novel identity, independent of the specific lighting configurations or BRDFs, that allows us to directly estimate the fourth image if the other three are available (Fig. 1, 2). The identity can also be used as an invariant to detect tampering in the images (Fig. 2). We also adapt Wiener filtering from image processing, deriving the deconvolution filters to estimate complex lighting from the single image of an object (Fig. 3). While this paper is primarily theoretical, it has the potential to lay the mathematical foundations for two important practical applications. First, we can develop more general algorithms for inverse rendering problems, which can directly relight and change material properties by transferring the BRDF or lighting from another object or illumination (Fig. 1, 2). Second, we can check the consistency of an image, to detect tampering or image splicing (Fig. 2).

Geometrically exact dynamic splines

Adrien Theetten, Laurent Grisoni, Claude Andriot, Brian Barsky Computer-Aided Design 2008

We propose a complete model handling the physical simulation of deformable 1D objects. We formulate continuous expressions for stretching, bending and twisting energies. These expressions are mechanically rigorous and geometrically exact. Both elastic and plastic deformations are handled to simulate a wide range of materials. We validate the proposed model in several classical test configurations. The use of geometrically exact energies with dynamic splines provides very accurate results as well as interactive simulation times, which shows the suitability of the proposed model for constrained CAD applications. We illustrate the application potential of the proposed model by describing a virtual system for cable positioning, which can be used to test compatibility between planned fixing clip positions and mechanical cable properties.

Design Considerations for Collaborative Visual Analytics

Jeffrey Heer, Maneesh Agrawala

Visualizations leverage the human visual system to support the process of sensemaking, in which information is collected, organized, and analyzed to generate knowledge and inform action. Although most research to date assumes a single-user focus on perceptual and cognitive processes, in practice, sensemaking is often a social process involving parallelization of effort, discussion, and consensus building. Thus, to fully support sensemaking, interactive visualization should also support social interaction. However, the most appropriate collaboration mechanisms for supporting this interaction are not immediately clear. In this article, we present design considerations for asynchronous collaboration in visual analysis environments, highlighting issues of work parallelization, communication, and social organization. These considerations provide a guide for the design and evaluation of collaborative visualization systems.

Generalized Selection via Interactive Query Relaxation

Jeffrey Heer, Maneesh Agrawala, Wesley Willett

Selection is a fundamental task in interactive applications, typically performed by clicking or lassoing items of interest. However, users may require more nuanced forms of selection. Selecting regions or attributes may be more important than selecting individual items. Selections may be over dynamic items and selections might be more easily created by relaxing simpler selections (e.g., "select all items like this one"). Creating such selections requires that interfaces model the declarative structure of the selection, not just individually selected items. We present direct manipulation techniques that couple declarative selection queries with a query relaxation engine that enables users to interactively generalize their selections. We apply our selection techniques in both information visualization and graphics editing applications, enabling generalized selection over both static and dynamic interface objects. A controlled study finds that users create more accurate selection queries when using our generalization techniques.

Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation

Jeffrey Heer, Jock D. Mackinlay, Chris Stolte, Maneesh Agrawala

Interactive history tools, ranging from basic undo and redo to branching timelines of user actions, facilitate iterative forms of interaction. In this paper, we investigate the design of history mechanisms for information visualization. We present a design space analysis of both architectural and interface issues, identifying design decisions and associated trade-offs. Based on this analysis, we contribute a design study of graphical history tools for Tableau, a database visualization system. These tools record and visualize interaction histories, support data analysis and communication of findings, and contribute novel mechanisms for presenting, managing, and exporting histories. Furthermore, we have analyzed aggregated collections of history sessions to evaluate Tableau usage. We describe additional tools for analyzing users' history logs and how they have been applied to study usage patterns in Tableau.

Effect of Character Animacy and Preparatory Motion on Perceptual Magnitude of Errors in Ballistic Motion

Paul Reitsma, James Andrews, Nancy Pollard EG 2008

An increasing number of projects have examined the perceptual magnitude of visible artifacts in animated motion. These studies have been performed using a mix of character types, from detailed human models to abstract geometric objects such as spheres. We explore the extent to which character morphology influences user sensitivity to errors in a fixed set of ballistic motions replicated on three different character types. We find user sensitivity responds to changes in error type or magnitude in a similar manner regardless of character type, but that users display a higher sensitivity to some types of errors when these errors are displayed on more human-like characters. Further investigation of those error types suggests that being able to observe a period of preparatory motion before the onset of ballistic motion may be important. However, we found no evidence to suggest that a mismatch between the preparatory phase and the resulting ballistic motion was responsible for the higher sensitivity to errors that was observed for the most human-like character.

Aggressive Tetrahedral Mesh Improvement

Bryan Klingner, Jonathan Shewchuk 2007 Meshing Roundtable

We present a tetrahedral mesh improvement schedule that usually creates meshes whose worst tetrahedra have a level of quality substantially better than those produced by any previous method for tetrahedral mesh generation or mesh clean-up. Our goal is to aggressively optimize the worst tetrahedra, with speed a secondary consideration. Mesh optimization methods often get stuck in bad local optima (poor-quality meshes) because their repertoire of mesh transformations is weak. We employ a broader palette of operations than any previous mesh improvement software. Alongside the best traditional topological and smoothing operations, we introduce a topological transformation that inserts a new vertex (sometimes deleting others at the same time). We describe a schedule for applying and composing these operations that rarely gets stuck in a bad optimum. We demonstrate that all three techniques—smoothing, vertex insertion, and traditional transformations—are substantially more effective than any two alone. Our implementation usually improves meshes so that all dihedral angles are between 31° and 149°, or (with a different objective function) between 23° and 136°.

Liquid Simulation on Lattice-Based Tetrahedral Meshes

Nuttapong Chentanez, Bryan Feldman, François Labelle, James F. O'Brien, Jonathan Shewchuk SCA 2007

This paper describes a simulation method for animating the behavior of incompressible liquids with complex free surfaces. The region occupied by the liquid is discretized with a boundary-conforming tetrahedral mesh that grades from fine resolution near the surface to coarser resolution on the interior. At each time-step, semi-Lagrangian techniques are used to advect the fluid and its boundary forward, and a new conforming mesh is then constructed over the fluid-occupied region. The tetrahedral meshes are built using a variation of the body-centered cubic lattice structure that allows octree grading and deviation from the lattice-structure at boundaries. The semi-regular mesh structure can be generated rapidly and allows efficient computation and storage while still conforming well to boundaries and providing a mesh-quality guarantee. Pressure projection is performed using an algebraic multigrid method, and a thickening scheme is used to reduce volume loss when fluid features shrink below mesh resolution. Examples are provided to demonstrate that the resulting method can capture complex liquid motions that include fine detail on the free surfaces without suffering from excessive amounts of volume loss or artificial damping.

A Theory of Locally Low Dimensional Light Transport

Dhruv Mahajan, Ira Kemelmacher Shlizerman, Ravi Ramamoorthi, Peter Belhumeur SIGGRAPH 2007

Blockwise or Clustered Principal Component Analysis (CPCA) is commonly used to achieve real-time rendering of shadows and glossy reflections with precomputed radiance transfer (PRT). The vertices or pixels are partitioned into smaller coherent regions, and light transport in each region is approximated by a locally low dimensional subspace using PCA. Many earlier techniques such as surface light field and reflectance field compression use a similar paradigm. However, there has been no clear theoretical understanding of how light transport dimensionality increases with local patch size, nor of the optimal block size or number of clusters. In this paper, we develop a theory of locally low dimensional light transport, by using Szego's eigenvalue theorem to analytically derive the eigenvalues of the covariance matrix for canonical cases. We show mathematically that for symmetric patches of area A, the number of basis functions for glossy reflections increases linearly with A, while for simple cast shadows, it often increases as √A. These results are confirmed numerically on a number of test scenes. Next, we carry out an analysis of the cost of rendering, trading off local dimensionality and the number of patches, deriving an optimal block size. Based on this analysis, we provide useful practical insights for setting parameters in CPCA and also derive a new adaptive subdivision algorithm. Moreover, we show that rendering time scales sub-linearly with the resolution of the image, allowing for interactive all-frequency relighting of 1024×1024 images.

An Algorithm for Rendering Generalized Depth of Field Effects Based on Simulated Heat Diffusion

Todd Kosloff, Brian Barsky ICCSA 2007

Depth of field refers to the swath through a 3D scene that is imaged in acceptable focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool that can be used to emphasize the subject of a photograph. In a real camera, the control over depth of field is limited by the nature of the image formation process and by physical constraints. The depth of field effect has been simulated in computer graphics, but with the same limited control as found in real camera lenses. In this paper, we use diffusion in a non-homogeneous medium to generalize depth of field in computer graphics by enabling the user to independently specify the degree of blur at each point in three-dimensional space. Generalized depth of field provides a novel tool to emphasize an area of interest within a 3D scene, to pick objects out of a crowd, and to render a busy, complex picture more understandable by focusing only on relevant details that may be scattered throughout the scene. Our algorithm operates by blurring a sequence of nonplanar layers that form the scene. Choosing a suitable blur algorithm for the layers is critical; thus, we develop appropriate blur semantics such that the blur algorithm will properly generalize depth of field. We found that diffusion in a non-homogeneous medium is the process that best suits these semantics.

Isosurface Stuffing: Fast Tetrahedral Meshes with Good Dihedral Angles

François Labelle, Jonathan Shewchuk SIGGRAPH 2007

The isosurface stuffing algorithm fills an isosurface with a uniformly sized tetrahedral mesh whose dihedral angles are bounded between 10.7° and 164.8°, or (with a change in parameters) between 8.9° and 158.8°. The algorithm is whip fast, numerically robust, and easy to implement because, like Marching Cubes, it generates tetrahedra from a small set of precomputed stencils. A variant of the algorithm creates a mesh with internal grading: on the boundary, where high resolution is generally desired, the elements are fine and uniformly sized, and in the interior they may be coarser and vary in size. This combination of features makes isosurface stuffing a powerful tool for dynamic fluid simulation, large-deformation mechanics, and applications that require interactive remeshing or use objects defined by smooth implicit surfaces. It is the first algorithm that rigorously guarantees the suitability of tetrahedra for finite element methods in domains whose shapes are substantially more challenging than boxes. Our angle bounds are guaranteed by a computer-assisted proof. If the isosurface is a smooth 2-manifold with bounded curvature, and the tetrahedra are sufficiently small, then the boundary of the mesh is guaranteed to be a geometrically and topologically accurate approximation of the isosurface.

Frequency Domain Normal Map Filtering

Charles Han, Bo Sun, Ravi Ramamoorthi, Eitan Grinspun SIGGRAPH 2007

Filtering is critical for representing image-based detail, such as textures or normal maps, across a variety of scales. While mipmapping textures is commonplace, accurate normal map filtering remains a challenging problem because of nonlinearities in shading--we cannot simply average nearby surface normals. In this paper, we show analytically that normal map filtering can be formalized as a spherical convolution of the normal distribution function (NDF) and the BRDF, for a large class of common BRDFs such as Lambertian, microfacet and factored measurements. This theoretical result explains many previous filtering techniques as special cases, and leads to a generalization to a broader class of measured and analytic BRDFs. Our practical algorithms leverage a significant body of previous work that has studied lighting-BRDF convolution. We show how spherical harmonics can be used to filter the NDF for Lambertian and low-frequency specular BRDFs, while spherical von Mises-Fisher distributions can be used for high-frequency materials.
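
The observation that one cannot simply average normals, but can summarize a distribution of normals, is easy to illustrate. The sketch below is my own illustration rather than the paper's algorithm: it fits a single spherical von Mises-Fisher lobe to the normals inside a texel footprint, using the standard mean-resultant-length approximation for the concentration parameter.

```python
import numpy as np

def fit_vmf(normals):
    """Fit one von Mises-Fisher lobe to unit normals (rows of `normals`).

    Uses the common approximation kappa ~ r(3 - r^2)/(1 - r^2), where r is
    the length of the averaged normal. A single lobe is only a reasonable
    stand-in for the full NDF when the normals are roughly unimodal.
    """
    mean = normals.mean(axis=0)
    r = np.linalg.norm(mean)
    mu = mean / r                                 # mean direction of the lobe
    kappa = r * (3.0 - r * r) / (1.0 - r * r)     # concentration (sharpness)
    return mu, kappa

# Example: normals from a gently bumpy patch keep a fairly sharp lobe,
# whereas directly averaging them would simply shorten the vector.
rng = np.random.default_rng(1)
normals = rng.normal([0.0, 0.0, 1.0], 0.1, size=(256, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
mu, kappa = fit_vmf(normals)
print(mu, kappa)
```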

Symmetric Embedding of Locally Regular Hyperbolic Tilings

Carlo H. Séquin Bridges 2007

Energy Minimizers for Curvature-Based Surface Functionals

Pushkar P. Joshi, Carlo H. Séquin

We compare curvature-based surface functionals by comparing the aesthetic properties of their minimizers. We introduce an enhancement to the original inline curvature variation functional. This new functional also considers the mixed cross terms of the normal curvature derivative and is a more complete formulation of a curvature variation functional. To give designers an intuitive feel for the preferred shapes attained by these different functionals, we present a catalog of the minimum energy shapes for various symmetrical, unconstrained input surfaces of different genera.

Computer-Aided Design and Realization of Geometrical Sculptures

The use of computer-aided design tools in the conception and realization of large-scale geometrical bronze sculptures is described. An inspirational piece of sculpture is analyzed and then captured in procedural form including several design parameters. These parameters not only allow the sculpture to be scaled to different sizes and individually optimized for each scale, but also facilitate the design of new sculptures that lie in the same conceptual family. The parameterized representation takes care of constraints and limitations in several of the implementation steps and provides additional aids for the assembly of a large sculpture from many smaller and more easily manufacturable pieces.

Viewpoint-Coded Structured Light

Mark Young, Erik Beeson, James Davis, Szymon Rusinkiewicz, Ravi Ramamoorthi CVPR 2007

We introduce a theoretical framework and practical algorithms for replacing time-coded structured light patterns with viewpoint codes, in the form of additional camera locations. Current structured light methods typically use log(N) light patterns, encoded over time, to unambiguously reconstruct N unique depths. We demonstrate that each additional camera location may replace one frame in a temporal binary code. Our theoretical viewpoint coding analysis shows that, by using a high frequency stripe pattern and placing cameras in carefully selected locations, the epipolar projection in each camera can be made to mimic the binary encoding patterns normally projected over time. Results from our practical implementation demonstrate reliable depth reconstruction that makes neither temporal nor spatial continuity assumptions about the scene being captured.

Dirty Glass: Rendering Contamination on Transparent Surfaces

Jinwei Gu, Ravi Ramamoorthi, Peter N. Belhumeur, Shree Nayar EGSR 2007

Rendering of clean transparent objects has been well studied in computer graphics. However, real-world transparent objects are seldom clean--their surfaces have a variety of contaminants such as dust, dirt, and lipids. These contaminants produce a number of complex volumetric scattering effects that must be taken into account when creating photorealistic renderings. In this project, we take a significant step towards modeling and rendering these effects. We make the assumption that the contaminant is an optically thin layer and construct an analytic model based on pre-existing results in computer graphics and radiative transport theory for the net bidirectional reflectance/transmission distribution function. Moreover, the spatial textures created by the different types of contamination are also important in achieving visual realism. To this end, we measure the spatially varying thicknesses and the scattering parameters of a large number of glass panes with various types of dust, dirt, and lipids. We also develop a simple interactive synthesis tool to create novel instances of the measured contamination patterns. We show several results that demonstrate the use of our scattering model for rendering 3D scenes, as well as modifying real 2D photographs.

A Real-Time Beam Tracer with Application to Exact Soft Shadows

Ryan Overbeck, Ravi Ramamoorthi, William R. Mark EGSR 2007

Efficiently calculating accurate soft shadows cast by area light sources remains a difficult problem. Ray tracing based approaches are subject to noise or banding, and most other accurate methods either scale poorly with scene geometry or place restrictions on geometry and/or light source size and shape. Beam tracing is one solution which has historically been considered too slow and complicated for most practical rendering applications. Beam tracing's performance has been hindered by complex geometry intersection tests, and a lack of good acceleration structures with efficient algorithms to traverse them. We introduce fast new algorithms for beam tracing, specifically for beam–triangle intersection and beam–kd-tree traversal. The result is a beam tracer capable of calculating precise primary visibility and point light shadows in real-time. Moreover, beam tracing provides full area elements instead of point samples, which allows us to maintain coherence through to secondary effects and utilize the GPU for high quality antialiasing and shading with minimal extra cost. More importantly, our analysis shows that beam tracing is particularly well suited to soft shadows from area lights, and we generate essentially exact noise-free soft shadows for complex scenes in seconds rather than minutes or hours.

A Method for Cartoon-Style Rendering of Liquid Animations

Ashley M. Eden, Adam Bargteil, Tolga Goktekin, Sara Beth Eisinger, James F. O'Brien GI 2007

In this paper we present a visually compelling and informative cartoon rendering style for liquid animations. Our style is inspired by animations such as Futurama, The Little Mermaid, and Bambi. We take as input a liquid surface obtained from a three-dimensional physically based liquid simulation system and output animations that evoke a cartoon style and convey liquid movement. Our method is based on four cues that emphasize properties of the liquid's shape and motion. We use bold outlines to emphasize depth discontinuities, patches of constant color to highlight near-silhouettes and areas of thinness, and, optionally, temporally coherent oriented textures on the liquid surface to help convey motion.

Hyper-Seeing the Regular Hendeca-choron

Carlo H. Séquin ISAMA 2007

The hendecachoron is an abstract 4-dimensional polytope composed of eleven cells in the form of hemi-icosahedra. This paper tries to foster an understanding of this intriguing object of high symmetry by discussing its construction in bottom-up and top-down ways and providing visualization by computer graphics models.

Design and Implementation of Pax Mundi II

On January 18, 2007, a ten-foot-tall bronze sculpture, Pax Mundi II, was installed in the courtyard of the H&R Block headquarters in Kansas City. This paper describes the computer-aided re-design process that started from the original Pax Mundi wood sculpture, as well as the fabrication and installation of the final sculpture.

Time-Varying BRDFs

Bo Sun, Kalyan Sunkavalli, Ravi Ramamoorthi, Peter Belhumeur, Shree Nayar TVCG 2007

The properties of virtually all real-world materials change with time, causing their bidirectional reflectance distribution functions (BRDFs) to be time varying. However, none of the existing BRDF models and databases take time variation into consideration; they represent the appearance of a material at a single time instance. In this paper, we address the acquisition, analysis, modeling, and rendering of a wide range of time-varying BRDFs (TVBRDFs). We have developed an acquisition system that is capable of sampling a material's BRDF at multiple time instances, with each time sample acquired within 36 sec. We have used this acquisition system to measure the BRDFs of a wide range of time-varying phenomena, which include the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements and the model parameters' variations with time are analyzed. Each category exhibits interesting and sometimes nonintuitive parameter trends. These parameter trends are then used to develop analytic TVBRDF models. The analytic TVBRDF models enable us to apply effects such as paint drying and dust accumulation to arbitrary surfaces and novel materials.

Real-Time Ambient Occlusion for Dynamic Character Skins

Adam Kirk, Okan Arikan I3D 2007

We present a single-pass hardware accelerated method to reconstruct compressed ambient occlusion values in real-time on dynamic character skins. This method is designed to work with meshes that are deforming based on a low-dimensional set of parameters, as in character animation. The inputs to our method are rendered ambient occlusion values at the vertices of a mesh deformed into various poses, along with the corresponding degrees of freedom of those poses. The algorithm uses k-means clustering to group the degrees of freedom into a small number of pose clusters. Because the pose variation in a cluster is small, our method can define a low-dimensional pose representation using principal component analysis. Within each cluster, we approximate ambient occlusion as a linear function in the reduced-dimensional representation. When drawing the character, our method uses moving least squares to blend the reconstructed ambient occlusion values from a small number of pose clusters. This technique offers significant memory savings over storing uncompressed values, and can generate plausible ambient occlusion values for poses not seen in training. Because we are using linear functions our output is smooth, fast to evaluate, and easy to implement in a vertex or fragment shader.
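
A compact sketch of the general pipeline the abstract describes: cluster training poses, reduce each cluster's pose parameters with PCA, and fit a linear map from the reduced pose to per-vertex ambient occlusion. Shapes and names below are illustrative, and the paper's runtime moving-least-squares blending across clusters is omitted.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Illustrative training data: one pose vector and one AO vector per frame.
rng = np.random.default_rng(0)
n_frames, n_dof, n_verts, k, d = 500, 30, 2000, 8, 4
poses = rng.normal(size=(n_frames, n_dof))         # character degrees of freedom
ao = rng.uniform(0, 1, size=(n_frames, n_verts))   # rendered AO per vertex

_, labels = kmeans2(poses, k, minit='points')      # group similar poses

models = []
for c in range(k):
    P, A = poses[labels == c], ao[labels == c]
    mean_p = P.mean(axis=0)
    # PCA: keep the d directions of largest pose variation within the cluster.
    _, _, Vt = np.linalg.svd(P - mean_p, full_matrices=False)
    basis = Vt[:d]                                  # (d, n_dof)
    Z = (P - mean_p) @ basis.T                      # reduced pose coordinates
    # Linear model per cluster: AO ~ W @ [z, 1]
    Z1 = np.hstack([Z, np.ones((len(Z), 1))])
    W, *_ = np.linalg.lstsq(Z1, A, rcond=None)      # (d+1, n_verts)
    models.append((mean_p, basis, W))

def reconstruct_ao(pose, cluster):
    """Approximate per-vertex AO for a new pose using one cluster's model."""
    mean_p, basis, W = models[cluster]
    z = (pose - mean_p) @ basis.T
    return np.append(z, 1.0) @ W
```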

A first-order analysis of lighting, shading, and shadows

Ravi Ramamoorthi, Dhruv Mahajan, Peter Belhumeur TOG

The shading in a scene depends on a combination of many factors---how the lighting varies spatially across a surface, how it varies along different directions, the geometric curvature and reflectance properties of objects, and the locations of soft shadows. In this paper, we conduct a complete first order or gradient analysis of lighting, shading and shadows, showing how each factor separately contributes to scene appearance, and when it is important. Gradients are well suited for analyzing the intricate combination of appearance effects, since each gradient term corresponds directly to variation in a specific factor. First, we show how the spatial and directional gradients of the light field change, as light interacts with curved objects. Second, we consider the individual terms responsible for shading gradients, such as lighting variation, convolution with the surface BRDF, and the object's curvature. This analysis indicates the relative importance of various terms, and shows precisely how they combine in shading. As one practical application, our theoretical framework can be used to adaptively sample images in high-gradient regions for efficient rendering. Third, we understand the effects of soft shadows, computing accurate visibility gradients. We generalize previous work to arbitrary curved occluders, and develop a local framework that is easy to integrate with conventional ray-tracing methods. Our visibility gradients can be directly used in practical gradient interpolation methods for efficient rendering.

4D compression and relighting with high-resolution light transport matrices

Ewen Cheslack-Postava, Nolan Goodnight, Ren Ng, Ravi Ramamoorthi, Greg Humphreys I3D 2007

This paper presents a method for efficient compression and relighting with high-resolution, precomputed light transport matrices. We accomplish this using a 4D wavelet transform, transforming the columns of the transport matrix, in addition to the 2D row transform used in previous work. We show that a standard 4D wavelet transform can actually inflate portions of the matrix, because high-frequency lights lead to high-frequency images that cannot easily be compressed. Therefore, we present an adaptive 4D wavelet transform that terminates at a level that avoids inflation and maximizes sparsity in the matrix data. Finally, we present an algorithm for fast relighting from adaptively compressed transport matrices. Combined with a GPU-based precomputation pipeline, this results in an image and geometry relighting system that performs significantly better than 2D compression techniques, on average 2x-3x better in terms of storage cost and rendering speed for equal quality matrices.

A semi-Lagrangian contouring method for fluid simulation

Adam Bargteil, Tolga Goktekin, James F. O'Brien, John A. Strain ACM Trans. Graphics

In this paper we present a semi-Lagrangian surface tracking method for use with fluid simulations. Our method maintains an explicit polygonal mesh that defines the surface, and an octree data structure that provides both a spatial index for the mesh and a means for efficiently approximating the signed distance to the surface. At each timestep a new surface is constructed by extracting the zero set of an advected signed-distance function. Semi-Lagrangian backward path tracing is used to advect the signed-distance function. One of the primary advantages of this formulation is that it enables tracking of surface characteristics, such as color or texture coordinates, at negligible additional cost. We include several examples demonstrating that the method can be effectively used as part of a fluid simulation to animate complex and interesting fluid behaviors.
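
Semi-Lagrangian backward path tracing of a signed-distance field is simple to sketch on a uniform grid. The snippet below is a simplified 2D illustration under my own assumptions (single Euler trace, uniform grid); the paper operates in 3D on an octree and then re-extracts a surface mesh from the advected field.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect_sdf(phi, vel_x, vel_y, dt, h):
    """One semi-Lagrangian step: trace each grid point backward through the
    velocity field and sample the old signed-distance field at that point.

    phi, vel_x, vel_y are 2D arrays on the same uniform grid of spacing h.
    A single Euler step is used for the backward trace; higher-order traces
    are a common refinement.
    """
    ny, nx = phi.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    src_i = i - dt * vel_x / h          # backward-traced column coordinate
    src_j = j - dt * vel_y / h          # backward-traced row coordinate
    # Bilinearly interpolate the old field at the traced-back locations.
    return map_coordinates(phi, [src_j, src_i], order=1, mode='nearest')
```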

Optimization of HDR brachytherapy dose distributions using linear programming with penalty costs

Ron Alterovitz, Etienne Lessard, Jean Pouliot, I-Chow Joe Hsu, James F. O'Brien, Ken Goldberg J. Medical Physics

Prostate cancer is increasingly treated with high-dose-rate (HDR) brachytherapy, a type of radiotherapy in which a radioactive source is guided through catheters temporarily implanted in the prostate. Clinicians must set dwell times for the source inside the catheters so the resulting dose distribution minimizes deviation from dose prescriptions that conform to patient-specific anatomy. The primary contribution of this paper is to take the well-established dwell times optimization problem defined by Inverse Planning by Simulated Annealing (IPSA) developed at UCSF and exactly formulate it as a linear programming (LP) problem. Because LP problems can be solved exactly and deterministically, this formulation provides strong performance guarantees: one can rapidly find the dwell times solution that globally minimizes IPSA's objective function for any patient case and clinical criteria parameters. For a sample of 20 prostates with volume ranging from 23 to 103 cc, the new LP method optimized dwell times in less than 15 s per case on a standard PC. The dwell times solutions currently being obtained clinically using simulated annealing (SA), a probabilistic method, were quantitatively compared to the mathematically optimal solutions obtained using the LP method. The LP method resulted in significantly improved objective function values compared to SA (P = 1.54 × 10^-7), but none of the dosimetric indices indicated a statistically significant difference (P ≤ 0.01). The results indicate that solutions generated by the current version of IPSA are clinically equivalent to the mathematically optimal solutions.
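
Turning one-sided dose penalties into a linear program relies on a standard slack-variable trick. The sketch below is a generic formulation of that trick using scipy, not IPSA's exact objective: the dose at each point is linear in the dwell times, and costs for falling below d_min or exceeding d_max become linear via auxiliary variables.

```python
import numpy as np
from scipy.optimize import linprog

def solve_dwell_times(A, d_min, d_max, w_under, w_over):
    """Minimize one-sided penalties on dose = A @ t over dwell times t >= 0.

    A has shape (n_points, n_dwell_positions); d_min, d_max, w_under, w_over
    have length n_points. Slack variables s_under >= d_min - A t and
    s_over >= A t - d_max turn the piecewise-linear penalties into an LP,
    so the solver returns the globally optimal dwell times for this model.
    """
    m, n = A.shape
    # Decision vector: [t, s_under, s_over]
    c = np.concatenate([np.zeros(n), w_under, w_over])
    A_ub = np.block([
        [-A, -np.eye(m), np.zeros((m, m))],   # enforces s_under >= d_min - A t
        [ A, np.zeros((m, m)), -np.eye(m)],   # enforces s_over  >= A t - d_max
    ])
    b_ub = np.concatenate([-d_min, d_max])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method='highs')
    return res.x[:n]
```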

Efficient Shadows from Sampled Environment Maps

Aner Ben-Artzi, Ravi Ramamoorthi, Maneesh Agrawala JGT 06

This paper addresses the problem of efficiently calculating shadows from environment maps. Since accurate rendering of shadows from environment maps requires hundreds of lights, the expensive computation is determining visibility from each pixel to each light direction, such as by ray-tracing. We show that coherence in both spatial and angular domains can be used to reduce the number of shadow rays that need to be traced. Specifically, we use a coarse-to-fine evaluation of the image, predicting visibility by reusing visibility calculations from four nearby pixels that have already been evaluated. This simple method allows us to explicitly mark regions of uncertainty in the prediction. By only tracing rays in these and neighboring directions, we are able to reduce the number of shadow rays traced by up to a factor of 20 while maintaining error rates below 0.01%. For many scenes, our algorithm can add shadowing from hundreds of lights at twice the cost of rendering without shadows.
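
The prediction step can be sketched in a few lines: where the four already-shaded neighbors agree, reuse their visibility; where they disagree, mark the direction uncertain and trace it (plus nearby directions). This is my own simplified stand-in, with a 1D dilation standing in for the paper's angular neighborhood on the environment map.

```python
import numpy as np

def shade_pixel(vis_neighbors, trace_shadow_ray, dilate=1):
    """Predict per-light visibility at a new pixel from four evaluated neighbors.

    vis_neighbors: boolean array of shape (4, n_lights), visibility at neighbors.
    trace_shadow_ray(i): traces light direction i and returns True if visible.
    Directions where the neighbors disagree are 'uncertain' and are traced,
    along with a small neighborhood of directions around them.
    """
    agree_visible = vis_neighbors.all(axis=0)
    agree_blocked = (~vis_neighbors).all(axis=0)
    uncertain = ~(agree_visible | agree_blocked)
    # Grow the uncertain set so near-boundary directions are also verified.
    for _ in range(dilate):
        uncertain = uncertain | np.roll(uncertain, 1) | np.roll(uncertain, -1)
    vis = agree_visible.copy()                 # prediction where neighbors agree
    for i in np.flatnonzero(uncertain):
        vis[i] = trace_shadow_ray(i)           # trace only where uncertain
    return vis
```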

A Texture Synthesis Method for Liquid Animations

Adam Bargteil, Funshing Sin, Jonathan E. Michaels, Tolga Goktekin, James F. O'Brien SCA 2006

In this paper we present a method for synthesizing textures on animated liquid surfaces generated by a physically based fluid simulation system. Rather than advecting texture coordinates on the surface, we synthesize a new texture for every frame. We synthesize the texture with an optimization procedure which attempts to match the surface texture to an input sample texture. By synthesizing a new texture for every frame, our method is able to overcome the discontinuities and distortions of an advected parameterization. We achieve temporal coherence by initializing the surface texture with color values advected from the surface at the previous frame and including these colors in the energy function used during optimization.

Hayley Iben, James F. O'Brien SCA 2006

We present a method for generating surface crack patterns that appear in materials such as mud, ceramic glaze, and glass. To model these phenomena, we build upon existing physically based methods. Our algorithm generates cracks from a stress field defined heuristically over a triangle discretization of the surface. The simulation produces cracks by evolving this field over time. The user can control the characteristics and appearance of the cracks using a set of simple parameters. By changing these parameters, we have generated examples similar to a variety of crack patterns found in the real world. We assess the realism of our results by a comparison with photographs of real-world examples. Using a physically based approach also enables us to generate animations similar to time-lapse photography. Awarded best paper at SCA 2006.

Simultaneous Coupling of Fluids and Deformable Bodies

Nuttapong Chentanez, Tolga Goktekin, Bryan Feldman, James F. O'Brien SCA 2006

This paper presents a method for simulating the two-way interaction between fluids and deformable solids. The fluids are simulated using an incompressible Eulerian formulation where a linear pressure projection on the fluid velocities enforces mass conservation. Similarly, elastic solids are simulated using a semi-implicit integrator implemented as a linear operator applied to the forces acting on the nodes in a Lagrangian formulation. The proposed method enforces coupling constraints between the fluid and the elastic systems by combining both the pressure projection and implicit integration steps into one set of simultaneous equations. Because these equations are solved simultaneously, the resulting combined system treats closed regions in a physically correct fashion, and has good stability characteristics allowing relatively large time steps. This general approach is not tied to any particular volume discretization of fluid or solid, and we present results implemented using both grid based and tetrahedral simulations.

Fluid Animation with Dynamic Meshes

Bryan Klingner, Bryan Feldman, Nuttapong Chentanez, James F. O'Brien SIGGRAPH 2006

This paper presents a method for animating fluid with unstructured tetrahedral meshes that change at each time step. Meshes that conform well to changing boundaries and that focus computation in the visually important parts of the domain can be generated quickly and reliably using existing techniques. We also describe a new approach to two-way coupling of fluid and rigid bodies that, while general, benefits from remeshing. Overall, the method provides a flexible environment for creating complex scenes involving fluid animation.

Adam Bargteil, Funshing Sin, Jonathan Michaels, James F. O'Brien SIGGRAPH 2006 Tech Sketch

In this sketch we present a method for synthesizing textures on animated liquid surfaces generated by a physically based fluid simulation system. Rather than advecting texture coordinates on the surface, we synthesize a new texture for every frame. We synthesize the texture with an optimization procedure which attempts to match the surface texture to an input sample texture. By synthesizing a new texture for every frame, our method is able to overcome the discontinuities and distortions of an advected parameterization. We achieve temporal coherence by initializing the surface texture with color values advected from the surface at the previous frame and including these colors in the energy function used during optimization.

Acquiring Scattering Properties of Participating Media by Dilution

Srinivasa G. Narasimhan, Mohit Gupta, Craig Donner, Ravi Ramamoorthi, Shree Nayar, Henrik Wann Jensen SIGGRAPH 2006

The visual world around us displays a rich set of volumetric effects due to participating media. The appearance of these media is governed by several physical properties such as particle densities, shapes and sizes, which must be input (directly or indirectly) to a rendering algorithm to generate realistic images. While there has been significant progress in developing rendering techniques (for instance, volumetric Monte Carlo methods and analytic approximations), there are very few methods that measure or estimate these properties for media that are of relevance to computer graphics. In this paper, we present a simple device and technique for robustly estimating the properties of a broad class of participating media that can be either (a) diluted in water such as juices, beverages, paints and cleaning supplies, or (b) dissolved in water such as powders and sugar/salt crystals, or (c) suspended in water such as impurities. The key idea is to dilute the concentrations of the media so that single scattering effects dominate and multiple scattering becomes negligible, leading to a simple and robust estimation algorithm. Furthermore, unlike previous approaches that require complicated or separate measurement setups for different types or properties of media, our method and setup can be used to measure media with a complete range of absorption and scattering properties from a single HDR photograph. Once the parameters of the diluted medium are estimated, a volumetric Monte Carlo technique may be used to create renderings of any medium concentration and with multiple scattering. We have measured the scattering parameters of forty commonly found materials that can be immediately used by the computer graphics community. We can also create realistic images of combinations or mixtures of the original measured materials, thus giving the user wide flexibility in making realistic images of participating media.

Reflectance Sharing: Predicting Appearance from a Sparse Set of Images of a Known Shape

Todd Zickler, Sebastian Enrique, Ravi Ramamoorthi, Peter Belhumeur PAMI 2006 Aug

When the shape of an object is known, its appearance is determined by the spatially-varying reflectance function defined on its surface. Image-based rendering methods that use geometry seek to estimate this function from image data. Most existing methods recover a unique angular reflectance function (e.g., BRDF) at each surface point and provide reflectance estimates with high spatial resolution. Their angular accuracy is limited by the number of available images, and as a result, most of these methods focus on capturing parametric or low-frequency angular reflectance effects, or allowing only one of lighting or viewpoint variation. We present an alternative approach that enables an increase in the angular accuracy of a spatially-varying reflectance function in exchange for a decrease in spatial resolution. By framing the problem as scattered-data interpolation in a mixed spatial and angular domain, reflectance information is shared across the surface, exploiting the high spatial resolution that images provide to fill the holes between sparsely observed view and lighting directions. Since the BRDF typically varies slowly from point to point over much of an object's surface, this method enables image-based rendering from a sparse set of images without assuming a parametric reflectance model. In fact, the method can even be applied in the limiting case of a single input image.

Time-varying surface appearance: acquisition, modeling and rendering

Jinwei Gu, Chien-I Tu, Ravi Ramamoorthi, Peter Belhumeur, Wojciech Matusik, Shree Nayar SIGGRAPH 2006

In this project, we take a significant step towards measuring, modeling and rendering time-varying surface appearance. Traditional computer graphics rendering generally assumes that the appearance of surfaces remains static over time. Yet, there are a number of natural processes that cause surface appearance to vary dramatically, such as burning of wood, wetting and drying of rock and fabric, decay of fruit skins, or corrosion and rusting of steel and copper. Our research focuses on these various time-varying surface appearance phenomena. For acquisition, we built the first time-varying surface appearance database of 26 samples, including a variety of natural processes such as burning, drying on smooth and rough surfaces, decay, and corrosion. We also proposed a novel Space-Time Appearance Factorization (STAF) model, which factors space and time-varying effects and thus gives us much more control and editing capability over the original data. The STAF model includes an overall temporal appearance variation characteristic of the specific process, as well as space-dependent textures, rates and offsets, that control the different rates at which different spatial locations evolve, causing spatial patterns on the surface over time. Experimental results show that the model represents a variety of phenomena accurately. Moreover, it enables a number of novel rendering applications, such as transfer of the time-varying effect to a new static surface, control to accelerate time evolution in certain areas, extrapolation beyond the acquired sequence, and texture synthesis of time-varying appearance.

A Compact Factored Representation of Heterogeneous Subsurface Scattering

Pieter Peers, Karl vom Berge, Wojciech Matusik, Ravi Ramamoorthi, Jason Lawrence, Szymon Rusinkiewicz, Philip Dutré SIGGRAPH 2006

Heterogeneous subsurface scattering in translucent materials is one of the most beautiful but complex effects. We acquire spatial BSSRDF datasets using a projector, and develop a novel nonlinear factorization that separates a homogeneous kernel and heterogeneous discontinuities. This enables rendering of complex spatially-varying translucent materials.

Nuttapong Chentanez, Tolga Goktekin, Bryan Feldman, James F. O'Brien SIGGRAPH 2006 Tech Sketch

We describe a method for simultaneous two-way coupling of fluid and deformable bodies. The interaction between a fluid and deformable body can create complex and interesting motion that would be difficult to convincingly animate by hand.

Inverse shade trees for non-parametric material representation and editing

Jason Lawrence, Aner Ben-Artzi, Christopher DeCoro, Wojciech Matusik, Hanspeter Pfister, Ravi Ramamoorthi, Szymon Rusinkiewicz SIGGRAPH 2006

Recent progress in the measurement of surface reflectance has created a demand for non-parametric appearance representations that are accurate, compact, and easy to use for rendering. Another crucial goal, which has so far received little attention, is editability: for practical use, we must be able to change both the directional and spatial behavior of surface reflectance (e.g., making one material shinier, another more anisotropic, and changing the spatial "texture maps" indicating where each material appears). We introduce an Inverse Shade Tree framework that provides a general approach to estimating the "leaves" of a user-specified shade tree from high-dimensional measured datasets of appearance. These leaves are sampled 1- and 2-dimensional functions that capture both the directional behavior of individual materials and their spatial mixing patterns. In order to compute these shade trees automatically, we map the problem to matrix factorization and introduce a flexible new algorithm that allows for constraints such as non-negativity, sparsity, and energy conservation. Although we cannot infer every type of shade tree, we demonstrate the ability to reduce multigigabyte measured datasets of the Spatially-Varying Bidirectional Reflectance Distribution Function (SVBRDF) into a compact representation that may be edited in real time.
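
The computational core here is factoring a large measured data matrix into non-negative, low-rank pieces. As a minimal stand-in for the paper's more flexible constrained factorization, the sketch below implements plain Lee-Seung multiplicative-update NMF; the real algorithm adds further constraints (sparsity, energy conservation) and applies the factorization hierarchically down the shade tree.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor V ~ W @ H with W, H >= 0 using multiplicative updates.

    V might hold, e.g., one reflectance sample per row and one spatial
    location per column; W then mixes a small set of basis materials (H).
    This is generic NMF, not the paper's exact constrained algorithm.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, (m, rank))
    H = rng.uniform(0.1, 1.0, (rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update mixing weights
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis columns
    return W, H
```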

Real-Time BRDF Editing in Complex Lighting

Aner Ben-Artzi, Ryan Overbeck, Ravi Ramamoorthi SIGGRAPH 06

Current systems for editing BRDFs typically allow users to adjust analytic parameters while visualizing the results in a simplified setting (e.g. unshadowed point light). This paper describes a real-time rendering system that enables interactive edits of BRDFs, as rendered in their final placement on objects in a static scene, lit by direct, complex illumination. All-frequency effects (ranging from near-mirror reflections and hard shadows to diffuse shading and soft shadows) are rendered using a precomputation-based approach. Inspired by real-time relighting methods, we create a linear system that fixes lighting and view to allow real-time BRDF manipulation. In order to linearize the image's response to BRDF parameters, we develop an intermediate curve-based representation, which also reduces the rendering and precomputation operations to 1D while maintaining accuracy for a very general class of BRDFs. Our system can be used to edit complex analytic BRDFs (including anisotropic models), as well as measured reflectance data. We improve on the standard precomputed radiance transfer (PRT) rendering computation by introducing an incremental rendering algorithm that takes advantage of frame-to-frame coherence. We show that it is possible to render reference-quality images while only updating 10% of the data at each frame, sustaining frame rates of 25-30 fps.

Patterns on the Genus-3 Klein Quartic

Carlo H. Séquin Bridges 2006

Projections of Klein's quartic surface of genus 3 into 3D space are used as canvases on which we present regular tessellations, Escher tilings, knot- and graph-embedding problems, Hamiltonian cycles, Petrie polygons and equatorial weaves derived from them. Many of the solutions found have also been realized as small physical models made on rapid-prototyping machines.

Hayley Iben, James F. O'Brien, Erik Demaine SoCG 2006

This paper describes an algorithm for generating a guaranteed-intersection-free interpolation sequence between any pair of compatible polygons. Our algorithm builds on prior results from linkage unfolding, and if desired it can ensure that every edge length changes monotonically over the course of the interpolation sequence. The computational machinery that ensures against self-intersection is independent from the distance metric that determines the overall character of the interpolation sequence. This approach provides a powerful control mechanism for determining how the interpolation should appear, while still assuring against intersection and guaranteeing termination of the algorithm. Our algorithm also allows additional control by accommodating a set of algebraic constraints that can be weakly enforced throughout the interpolation sequence. Awarded best paper at SoCG 2006.

Extensions for 3D Graphics Rendering Engine used for Direct Tessellation of Spline Surfaces

Adrien Sfarti, Brian Barsky, Todd Kosloff, Egon Pasztor, Alex Kozlowski, Eric Roman, Alex Perelman ICCS 2006

In current 3D graphics architectures, the bus between the triangle server and the rendering engine GPU is clogged with triangle vertices and their many attributes (normal vectors, colors, texture coordinates). We have developed a new 3D graphics architecture using data compression to unclog the bus between the triangle server and the rendering engine. This new architecture has been described in [1]. In the present paper we describe further developments of the newly proposed architecture. The current paper shows several interesting extensions of our architecture such as backsurface rejection, real-time NURBS tessellation, and a description of a surface-based API. We also show how the implementation of our architecture operates on top of the pixel shaders.

New 3D Graphics Rendering Engine Architecture for Direct Tessellation of Spline Surfaces

Adrien Sfarti, Brian Barsky, Todd Kosloff, Egon Pasztor, Alex Kozlowski, Eric Roman, Alex Perelman 3IA 2006

In current 3D graphics architectures, the bus between the triangle server and the rendering engine GPU is clogged with triangle vertices and their many attributes (normal vectors, colors, texture coordinates). We develop a new 3D graphics architecture using data compression to unclog the bus between the triangle server and the rendering engine. The data compression is achieved by replacing the conventional idea of a GPU that renders triangles with a GPU that tessellates surface patches into triangles.

Human Vision Based Detection of Non-Uniform Brightness on LCD Panels

Jee Hong Kim, Brian Barsky MVAII 2004

We propose a method to detect defects due to spatially non-uniform brightness on LCD panels by using a machine vision technique. The detection method is based on human vision, so subjective assessment experiments were conducted to investigate the correlation between the parameters related to non-uniformity and how easily observable it is. The visibility of the defects is found to depend mainly on the spatial gradient of brightness variation. Thus, in the proposed method, the spatial gradient, calculated from extracted contours, is used to detect the defects due to non-uniform brightness. The detection method comprises four parts: contour extraction, spatial gradient calculation, decision of defects, and display of defects. We applied the method to images captured from actual LCD panels with non-uniformity defects, and the results were consistent with detection by a human inspector.

Exploiting Temporal Coherence for Incremental All-Frequency Relighting

Ryan Overbeck, Aner Ben-Artzi, Ravi Ramamoorthi, Eitan Grinspun EGSR 2006

Current PRT methods exploit spatial coherence of the lighting (such as with wavelets) and of light transport (such as with CPCA). We consider a significant, yet unexplored form of coherence, temporal coherence of the lighting from frame to frame. We achieve speedups of 3x-4x over conventional PRT with minimal implementation effort, and our method can trivially be added to almost any existing PRT algorithm.

Modeling Illumination Variation with Spherical Harmonics

Ravi Ramamoorthi

The appearance of objects including human faces can vary dramatically with the lighting. We present results that use spherical harmonic illumination basis functions to understand this variation for face modeling and recognition, as well as a number of other applications in graphics and vision.

Computational Studies of Human Motion: Tracking and Motion Synthesis

David Forsyth, Okan Arikan, Leslie Ikemoto, James F. O'Brien, Deva Ramanan Foundations and Trends

We review methods for kinematic tracking of the human body in video. The review is part of a projected book that is intended to cross-fertilize ideas about motion representation between the animation and computer vision communities. The review confines itself to the earlier stages of motion, focusing on tracking and motion synthesis; future material will cover activity representation and motion generation. In general, we take the position that tracking does not necessarily involve (as is usually thought) complex multimodal inference problems. Instead, there are two key problems, both easy to state. The first is lifting, where one must infer the configuration of the body in three dimensions from image data. Ambiguities in lifting can result in multimodal inference problems, and we review what little is known about the extent to which a lift is ambiguous. The second is data association, where one must determine which pixels in an image come from the body. We see a tracking-by-detection approach as the most productive, and review various human detection methods. Lifting, and a variety of other problems, can be simplified by observing temporal structure in motion, and we review the literature on data-driven human animation to expose what is known about this structure. Accurate generative models of human motion would be extremely useful in both animation and tracking, and we discuss the profound difficulties encountered in building such models. Discriminative methods, which should be able to tell whether an observed motion is human or not, do not work well yet, and we discuss why. There is an extensive discussion of open issues. In particular, we discuss the nature and extent of lifting ambiguities, which appear to be significant at short timescales and insignificant at longer timescales. This discussion suggests that the best tracking strategy is to track a 2D representation, and then lift it. We point out some puzzling phenomena associated with the choice of human motion representation, joint angles vs. joint positions. Finally, we give a quick guide to resources.

Interactive Procedural Computer-Aided Design

Carlo H. Séquin CAD/Graphics 2005

The typical engineering design process can be decomposed into several phases: creative exploration of ideas, testing the soundness of proposed concepts, refining concepts into realizable solutions, and optimizing viable solutions with respect to performance/cost. Powerful computer algorithms have been developed for many of these tasks. Often these modules are rigid, allowing for little intervention by the designer, and the management of the interactions between these tasks relies mostly on human intelligence. Better user interfaces are required to integrate human ingenuity and the assistance of the computer more fully into the overall design process. The most powerful CAD systems should combine the power of programming, graphical visualization, and interactive adjustment of crucial design parameters.

Semi-Automated Ultrasound Interpretation System Using Anatomical Knowledge Representation

Michael S. Downes, Brian Barsky VC 2005

Interpreting ultrasound data presents a significant challenge to medical personnel, which limits the clinical applications of the technology. We have addressed this issue by developing a prototype computer-based system designed to aid non-expert medical practitioners in using ultrasound devices in a variety of different diagnostic situations. Essentially, the system treats the collection of images generated during an ultrasound examination as an ordered sequence of views of the anatomical environment and picks out key views in which the contents of the scan image change. It stores descriptions of expected key views and matches incoming images to this key view sequence during an orientation phase of an examination. The prototype can guide a novice user through an examination of a patient's abdomen and automatically identify anatomical structures within the region. Overall, the design represents a novel approach to processing and augmenting ultrasound data and to representing spatial knowledge.

Elimination of Artifacts Due to Occlusion and Discretization Problems in Image Space Blurring Techniques

Brian Barsky, Michael Tobias, Derrick P. Chu, Daniel R. Horn GM 2005

Traditional computer graphics methods render images that appear sharp at all depths. Adding blur can add realism to a scene, provide a sense of scale, and draw a viewer's attention to a particular region of a scene. Our image-based blur algorithm needs to distinguish whether a portion of an image is either from a single object or is part of more than one object. This motivates two approaches to identify objects after an image has been rendered. We illustrate how these techniques can be used in conjunction with our image space method to add blur to a scene.

Animating Gases with Hybrid Meshes

Bryan Feldman, James F. O'Brien, Bryan Klingner SIGGRAPH 2005

This paper presents a method for animating gases on unstructured tetrahedral meshes to efficiently model the interaction of the fluids with irregularly shaped obstacles. Because our discretization scheme parallels that of the standard staggered grid mesh we are able to combine tetrahedral cells with regular hexahedral cells in a single mesh. This hybrid mesh offers both accuracy near obstacles and efficiency in open regions.

Efficiently Combining Positions and Normals for Precise 3D Geometry

Diego Nehab, Szymon Rusinkiewicz, James Davis, Ravi Ramamoorthi SIGGRAPH 2005

Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our formulation is linear, allowing it to operate efficiently on complex meshes commonly used in graphics. It also treats high- and low-frequency components separately, allowing it to optimally combine outputs from data sources such as stereo triangulation and photometric stereo, which have different error-vs.-frequency characteristics. We demonstrate the ability of our technique to both recover high-frequency details and avoid low-frequency bias, producing surfaces that are more widely applicable than position or orientation data alone.
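
A toy 1D analogue (my own construction, not the paper's formulation) shows why combining the two estimates in one linear least-squares system helps: noisy height samples pin down the low frequencies while accurate slope (normal) measurements supply the high-frequency detail.

```python
import numpy as np

def fuse_positions_and_slopes(z_noisy, slopes, w_pos=0.1, w_grad=1.0):
    """Solve a linear least-squares system that weakly matches noisy heights
    (low frequencies) and strongly matches measured slopes (detail)."""
    n = len(z_noisy)
    rows, rhs = [], []
    for i in range(n):                      # position constraints
        r = np.zeros(n); r[i] = w_pos
        rows.append(r); rhs.append(w_pos * z_noisy[i])
    for i in range(n - 1):                  # gradient constraints: z[i+1]-z[i] = slope[i]
        r = np.zeros(n); r[i], r[i + 1] = -w_grad, w_grad
        rows.append(r); rhs.append(w_grad * slopes[i])
    A, b = np.array(rows), np.array(rhs)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

x = np.linspace(0, 2 * np.pi, 100)
true_z = np.sin(x)
z_noisy = true_z + 0.2 * np.random.default_rng(1).standard_normal(100)
slopes = np.diff(true_z)                    # stand-in for normal measurements
z_fused = fuse_positions_and_slopes(z_noisy, slopes)
```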

A Practical Analytic Single Scattering Model for Real Time Rendering

Bo Sun, Ravi Ramamoorthi, Srinivasa Narasimhan, Shree Nayar SIGGRAPH 2005

We consider real-time rendering of scenes in participating media, capturing the effects of light scattering in fog, mist and haze. While a number of sophisticated approaches based on Monte Carlo and finite element simulation have been developed, those methods do not work at interactive rates. The most common real-time methods are essentially simple variants of the OpenGL fog model. While easy to use and specify, that model excludes many important qualitative effects like glows around light sources, the impact of volumetric scattering on the appearance of surfaces such as the diffusing of glossy highlights, and the appearance under complex lighting such as environment maps. In this paper, we present an alternative physically based approach that captures these effects while maintaining real-time performance and the ease-of-use of the OpenGL fog model. Our method is based on an explicit analytic integration of the single scattering light transport equations for an isotropic point light source in a homogeneous participating medium. We can implement the model in modern programmable graphics hardware using a few small numerical lookup tables stored as texture maps. Our model can also be easily adapted to generate the appearances of materials with arbitrary BRDFs, environment map lighting, and precomputed radiance transfer methods, in the presence of participating media. Hence, our techniques can be widely used in real-time rendering applications.

Provably Good Moving Least Squares

Ravi Kolluri SODA 2005

We analyze a moving least squares algorithm for reconstructing a surface from point cloud data. Our algorithm defines an implicit function I whose zero set U is the reconstructed surface. We prove that I is a good approximation to the signed distance function of the sampled surface F and that U is geometrically close to and homeomorphic to F. Our proof requires sampling conditions similar to ε-sampling, used in Delaunay reconstruction algorithms. This paper won the Best Student Paper Award at SODA 2005.
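
For concreteness, one common moving-least-squares-style construction of such an implicit function, a weighted average of signed distances to the samples' tangent planes, can be sketched as follows (the kernel and width here are illustrative choices; the paper's precise definition and proofs are not reproduced):

```python
import numpy as np

def mls_implicit(x, points, normals, h=0.1):
    """A Gaussian-weighted average of signed distances to the tangent planes
    of nearby samples; its zero set approximates the sampled surface."""
    d = x - points                                   # (n, 3) offsets to samples
    r2 = np.einsum('ij,ij->i', d, d)
    w = np.exp(-r2 / (h * h))
    plane_dist = np.einsum('ij,ij->i', d, normals)   # signed distance to each tangent plane
    return np.sum(w * plane_dist) / np.sum(w)

# Samples from a unit sphere; the implicit value near the surface is close to zero.
rng = np.random.default_rng(2)
p = rng.standard_normal((500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
normals = p.copy()                                   # sphere normals point outward
print(mls_implicit(np.array([0.0, 0.0, 0.99]), p, normals))   # near zero
print(mls_implicit(np.array([0.0, 0.0, 0.5]), p, normals))    # negative (inside)
```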

Fast and Detailed Approximate Global Illumination by Irradiance Decomposition

Okan Arikan, David A. Forsyth, James F. O'Brien SIGGRAPH 2005

In this paper we present an approximate method for accelerated computation of the final gathering step in a global illumination algorithm. Our method operates by decomposing the radiance field close to surfaces into separate far- and near-field components that can be approximated individually. By computing surface shading using these approximations, instead of directly querying the global illumination solution, we have been able to obtain rendering time speedups on the order of 10x compared to previous acceleration methods. Our approximation schemes rely mainly on the assumptions that radiance due to distant objects will exhibit low spatial and angular variation, and that the visibility between a surface and nearby surfaces can be reasonably predicted by simple location- and orientation-based heuristics. Motivated by these assumptions, our far-field scheme uses scattered-data interpolation with spherical harmonics to represent spatial and angular variation, and our near-field scheme employs an aggressively simple visibility heuristic. For our test scenes, the errors introduced when our assumptions fail do not result in visually objectionable artifacts or easily noticeable deviation from a ground-truth solution. We also discuss how our near-field approximation can be used with standard local illumination algorithms to produce significantly improved images at only negligible additional cost.

Fluids in Deforming Meshes

Bryan Feldman, James F. O'Brien, Bryan Klingner, Tolga Goktekin SCA 2005

This paper describes a simple modification to an Eulerian fluid simulation that permits the underlying mesh to deform independently of the simulated fluid's motion. The modification consists of a straightforward adaptation of the commonly used semi-Lagrangian advection method to account for the mesh's motion. Because the method does not require more interpolation steps than standard semi-Lagrangian integration, it does not suffer from additional smoothing and requires only the added cost of updating the mesh. By specifying appropriate boundary conditions, mesh boundaries can behave like moving obstacles that act on the fluid, resulting in a number of interesting effects. The paper includes several examples that have been computed on moving tetrahedral meshes.
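
A 1D sketch of the adaptation (hypothetical, using a simple translating grid rather than a deforming tetrahedral mesh) shows the core change: the semi-Lagrangian backtrace uses the fluid velocity relative to the mesh velocity.

```python
import numpy as np

def advect_on_moving_grid(q, x, u_fluid, u_mesh, dt):
    """1D semi-Lagrangian advection of samples q at grid points x, where the
    grid itself moves with velocity u_mesh. The backtrace uses the fluid
    velocity relative to the mesh, which is the essence of adapting
    semi-Lagrangian advection to a deforming mesh."""
    depart = x - (u_fluid - u_mesh) * dt        # departure points in the mesh frame
    return np.interp(depart, x, q)

x = np.linspace(0.0, 1.0, 101)
q = np.exp(-((x - 0.3) ** 2) / 0.005)           # a bump of "smoke" density
u_fluid = np.full_like(x, 0.5)                  # fluid moves right
u_mesh = np.full_like(x, 0.2)                   # mesh also translates right
q_next = advect_on_moving_grid(q, x, u_fluid, u_mesh, dt=0.1)
```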

Pushing People Around

Okan Arikan, David Forsyth, James F. O'Brien SCA 2005

We present an algorithm for animating characters being pushed by an external source such as a user or a game environment. We start with a collection of motions of a real person responding to being pushed. When a character is pushed, we synthesize new motions by picking a motion from the recorded collection and modifying it so that the character responds to the push from the desired direction and location on its body. Determining the deformation parameters that realistically modify a recorded response motion is difficult. Choosing the response motion that will look best when modified is also non-trivial, especially in real-time. To estimate the envelope of deformation parameters that yield visually plausible modifications of a given motion, and to find the best motion to modify, we introduce an oracle. The oracle is trained using a set of synthesized response motions that are identified by a user as good and bad. Once trained, the oracle can, in real-time, estimate the visual quality of all motions in the collection and required deformation parameters to serve a desired push. Our method performs better than a baseline algorithm of picking the closest response motion in configuration space, because our method can find visually plausible transitions that do not necessarily correspond to similar motions in terms of configuration. Our method can also start with a limited set of recorded motions and modify them so that they can be used to serve different pushes on the upper body.

Adam Bargteil, Tolga Goktekin, James F. O'Brien, John A. Strain SIGGRAPH 2005 Tech Sketch

In this sketch we present a semi-Lagrangian surface tracking method for use with fluid simulations. Our method maintains an explicit polygonal mesh that defines the surface, and an octree data structure that provides both a spatial index for the mesh and an efficient means for evaluating the signed-distance function away from the surface. At each time step the surface is reconstructed from an implicit function defined by the composition of backward advection and the previous signed-distance function. One of the primary advantages of this formulation is that it enables tracking of surface characteristics, such as color or texture coordinates, at negligible additional cost. We include several examples demonstrating that the method can be used as part of a fluid simulation to effectively animate complex and interesting fluid behaviors.

Symmetrical Hamiltonian Manifolds on Regular 3D and 4D Polytopes

Carlo H. Séquin Coxeter Day 2005

Hamiltonian cycles on the edge graphs of the regular polytopes in three and four dimensions are investigated with the primary goal of finding complete multi-colored coverages of all the edges in the graph. The concept of a Hamiltonian path is then extended to the notion of Hamiltonian two-manifolds that visit all the given edges exactly once. For instance, the 4D simplex can be covered by a strip of 5 triangular facets that form a Moebius band! The use of Hamiltonian cycles to create physical dissection puzzles as well as geometrical sculptures is also investigated. The concepts are illustrated with computer graphics imagery and with small maquettes made with rapid prototyping techniques.

Splitting Tori, Knots, and Moebius Bands

Carlo H. Séquin Bridges 2005

A study of sculptures and puzzles resulting from splitting tori, Moebius bands, and various knots and graphs lengthwise, illustrated with many models made on rapid prototyping machines.

Adaptive Numerical Cumulative Distribution Functions for Efficient Importance Sampling

Jason Lawrence, Szymon Rusinkiewicz, Ravi Ramamoorthi EGSR 2004

As image-based surface reflectance and illumination gain wider use in physically-based rendering systems, it is becoming more critical to provide representations that allow sampling light paths according to the distribution of energy in these high-dimensional measured functions. In this paper, we apply algorithms traditionally used for curve approximation to reduce the size of a multidimensional tabulated Cumulative Distribution Function (CDF) by one to three orders of magnitude without compromising its fidelity. These adaptive representations enable new algorithms for sampling environment maps according to the local orientation of the surface and for multiple importance sampling of image-based lighting and measured BRDFs.
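
The starting point the paper compresses is an ordinary tabulated CDF used with inverse-transform sampling; a minimal sketch of that baseline looks like this (the adaptive curve-approximation step that shrinks the table is the paper's contribution and is not shown):

```python
import numpy as np

def build_cdf(pdf_table):
    """Tabulated CDF from an (unnormalized) 1D pdf table."""
    cdf = np.cumsum(pdf_table, dtype=float)
    return cdf / cdf[-1]

def sample_cdf(cdf, n, rng):
    """Inverse-transform sampling: map uniform variates through the CDF."""
    u = rng.random(n)
    return np.searchsorted(cdf, u)

# Toy "environment map row": energy concentrated in a few bright texels.
rng = np.random.default_rng(3)
pdf = np.full(1024, 0.01)
pdf[100], pdf[700] = 5.0, 3.0
cdf = build_cdf(pdf)
idx = sample_cdf(cdf, 10000, rng)
# Most samples land on the bright texels, in proportion to their energy.
print(np.mean(idx == 100), np.mean(idx == 700))
```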

Skeletal Parameter Estimation from Optical Motion Capture Data

Adam Kirk, James F. O'Brien, David Forsyth CVPR 2005

In this paper we present an algorithm for automatically estimating a subject's skeletal structure from optical motion capture data. Our algorithm consists of a series of steps that cluster markers into segment groups, determine the topological connectivity between these groups, and locate the positions of their connecting joints. Our problem formulation makes use of fundamental distance constraints that must hold for markers attached to an articulated structure, and we solve the resulting systems using a combination of spectral clustering and nonlinear optimization. We have tested our algorithms using data from both passive and active optical motion capture devices. Our results show that the system works reliably even with as few as one or two markers on each segment. For data recorded from human subjects, the system determines the correct topology and qualitatively accurate structure. Tests with a mechanical calibration linkage demonstrate errors for inferred segment lengths on average of only two percent. We discuss applications of our methods for commercial human figure animation, and for identifying human or animal subjects based on their motion independent of marker placement or feature selection.

Adrien Sfarti, Brian Barsky, Todd Kosloff, Egon Pasztor, Alex Kozlowski, Eric Roman, Alex Perelman ICCS 2005

A Fourier Theory for Cast Shadows

Ravi Ramamoorthi, Melissa Koudelka, Peter Belhumeur PAMI 2005

Cast shadows can be significant in many computer vision applications, such as lighting-insensitive recognition and surface reconstruction. Nevertheless, most algorithms neglect them, primarily because they involve nonlocal interactions in nonconvex regions, making formal analysis difficult. However, many real instances map closely to canonical configurations like a wall, a V-groove type structure, or a pitted surface. In particular, we experiment with 3D textures like moss, gravel, and a kitchen sponge, whose surfaces include canonical configurations like V-grooves. This paper takes a first step toward a formal analysis of cast shadows, showing theoretically that many configurations can be mathematically analyzed using convolutions and Fourier basis functions. Our analysis exposes the mathematical convolution structure of cast shadows and shows strong connections to recent signal-processing frameworks for reflection and illumination.

Spacetime Stereo: A Unifying Framework for Depth from Triangulation

James Davis, Diego Nehab, Ravi Ramamoorthi, Szymon Rusinkiewicz PAMI 2005

Depth from triangulation has traditionally been treated in a number of separate threads in the computer vision literature, with methods like stereo, laser scanning, and coded structured light considered separately. In this paper, we propose a common framework, spacetime stereo, that unifies many of these previous methods. Viewing specific techniques as special cases of this general framework leads to insights regarding the solutions to many of the traditional problems of individual techniques. Specifically, we discuss a number of innovative possible applications such as improved recovery of static scenes under variable illumination, spacetime stereo for moving objects, structured light and laser scanning with multiple simultaneous stripes or patterns, and laser scanning of shiny objects. To suggest the practical utility of the framework, we use it to analyze one of these applications---recovery of static scenes under variable, but uncontrolled, illumination. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly.

Reflectance Sharing: Image-based Rendering from a Sparse Set of Images

Todd Zickler, Sebastian Enrique, Ravi Ramamoorthi, Peter Belhumeur EGSR 2005

A Signal-Processing Framework for Reflection

Ravi Ramamoorthi, Pat Hanrahan TOG 2004

We present a signal-processing framework for analyzing the reflected light field from a homogeneous convex curved surface under distant illumination. This analysis is of theoretical interest in both graphics and vision and is also of practical importance in many computer graphics problems—for instance, in determining lighting distributions and bidirectional reflectance distribution functions (BRDFs), in rendering with environment maps, and in image-based rendering. It is well known that under our assumptions, the reflection operator behaves qualitatively like a convolution. In this paper, we formalize these notions, showing that the reflected light field can be thought of in a precise quantitative way as obtained by convolving the lighting and BRDF, i.e. by filtering the incident illumination using the BRDF. Mathematically, we are able to express the frequency-space coefficients of the reflected light field as a product of the spherical harmonic coefficients of the illumination and the BRDF. These results are of practical importance in determining the well-posedness and conditioning of problems in inverse rendering—estimation of BRDF and lighting parameters from real photographs. Furthermore, we are able to derive analytic formulae for the spherical harmonic coefficients of many common BRDF and lighting models. From this formal analysis, we are able to determine precise conditions under which estimation of BRDFs and lighting distributions are well posed and well-conditioned. Our mathematical analysis also has implications for forward rendering—especially the efficient rendering of objects under complex lighting conditions specified by environment maps. The results, especially the analytic formulae derived for Lambertian surfaces, are also relevant in computer vision in the areas of recognition, photometric stereo and structure from motion.
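
For the radially symmetric BRDF case the convolution result takes the simple product form below, with the well-known Lambertian specialization for irradiance; this is the standard statement of the result rather than a derivation, and the notation (B for the reflected light field, L for the illumination) is a common convention rather than a quotation from the paper.

```latex
% Frequency-space form of reflection as convolution (radially symmetric BRDF):
% the reflected-light-field coefficients are a product of the illumination's
% spherical-harmonic coefficients and the BRDF's coefficients.
B_{lm} \;=\; \Lambda_l \, \hat{\rho}_l \, L_{lm},
\qquad \Lambda_l = \sqrt{\tfrac{4\pi}{2l+1}} .

% Lambertian specialization (irradiance): E_{lm} = A_l L_{lm}, with the first
% few coefficients of the clamped-cosine kernel
A_0 = \pi, \qquad A_1 = \tfrac{2\pi}{3}, \qquad A_2 = \tfrac{\pi}{4}, \qquad A_3 = 0 .
```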

A Method for Animating Viscoelastic Fluids

Tolga Goktekin, Adam Bargteil, James F. O'Brien SIGGRAPH 2004

This paper describes a technique for animating the behavior of viscoelastic fluids, such as mucus, liquid soap, pudding, toothpaste, or clay, that exhibit a combination of both fluid and solid characteristics. The technique builds upon prior Eulerian methods for animating incompressible fluids with free surfaces by including additional elastic terms in the basic Navier-Stokes equations. The elastic terms are computed by integrating and advecting strain-rate throughout the fluid. Transition from elastic resistance to viscous flow is controlled by von Mises's yield condition, and subsequent behavior is then governed by a quasi-linear plasticity model.

Adam Kirk, James F. O'Brien, David Forsyth SIGGRAPH 2004 Tech Sketch

In this sketch we present an algorithm for automatically estimating a subject's skeletal structure from optical motion capture data without using any a priori skeletal model. Our algorithm consists of a series of four steps that cluster markers into groups approximating rigid bodies, determine the topological connectivity between those groups, locate the positions of the connecting joints, and project those joint positions onto a rigid skeleton. These steps make use of a combination of spectral clustering and nonlinear optimization. Because it does not depend on prior rotation estimates, our algorithm can work reliably even when only one or two markers are attached to each body part, and our results do not suffer from error introduced by inaccurate rotation estimates. Furthermore, for applications where skeletal rotations are required, the skeleton computed by our algorithm actually provides an accurate and reliable means for computing them. We have tested an implementation of this algorithm with both passive and active motion capture data and found it to work well. Its computed skeletal estimates closely match measured values, and the algorithm behaves robustly even in the presence of noise, marker occlusion, and other errors typical of motion capture data.

Hayley Iben, James F. O'Brien, Erik Demaine SIGGRAPH 2004 Tech Sketch

This sketch describes a guaranteed technique for generating intersection-free interpolation sequences between arbitrary, non-intersecting, planar polygons. The computational machinery that ensures against self intersection guides a user-supplied distance heuristic that determines the overall character of the interpolation sequence. Additional control is provided to the user through specifying algebraic constraints that can be enforced throughout the sequence.

Interpolating and Approximating Implicit Surfaces from Polygon Soup

Chen Shen, James F. O'Brien, Jonathan Shewchuk SIGGRAPH 2004

This paper describes a method for building interpolating or approximating implicit surfaces from polygonal data. The user can choose to generate a surface that exactly interpolates the polygons, or a surface that approximates the input by smoothing away features smaller than some user-specified size. The implicit functions are represented using a moving least-squares formulation with constraints integrated over the polygons. The paper also presents an improved method for enforcing normal constraints and an iterative procedure for ensuring that the implicit surface tightly encloses the input vertices.

Radiance Caching and Local Geometry Correction

Okan Arikan, David A. Forsyth, James F. O'Brien SIGGRAPH 2004 Tech Sketch

We present a final gather algorithm which splits the irradiance integral into two components. One component captures the incident radiance due to distant surfaces. This incident radiance due to far field illumination is represented as a spatially varying field of spherical harmonic coefficients. Since distant surfaces do not cause rapid changes in incident radiance, this field is smooth and slowly varying and can be computed quickly and represented efficiently. In contrast, nearby surfaces may create drastic changes in irradiance, because their positions on the visible hemisphere can change quickly. We can find such nearby surfaces (scene triangles) by a local search. By assuming nearby surfaces are always visible, we can correct the far field irradiance estimate we obtain using the spherical harmonics, and restore the high frequency detail in indirect lighting. This correction can be performed efficiently because finding nearby surfaces is a local operation.

An Opponent Process Approach to Modeling the Blue Shift of the Human Color Vision System

Brian Barsky, Todd Kosloff, Steven D. Upstill APGV 2004

Low light level affects human visual perception in various ways. Visual acuity is reduced and scenes appear bluer, darker, less saturated, and with reduced contrast. We confine our attention to an approach to modeling the appearance of the bluish cast in dim light, which is known as blue shift. Both photographs and computer-generated images of night scenes can be made to appear more realistic by understanding these phenomena as well as how they are produced by the retina. The retina comprises two kinds of photoreceptors, called rods and cones. The rods are more sensitive in dim light than are the cones. Although there are three different kinds of cones with different spectral sensitivity curves, all rods have the same spectral response curve. Consequently, rods provide luminance information but no color discrimination. Thus, when the light is too dim to fully excite the cones, scenes appear desaturated. The opponent process theory of color vision [Hurvich and Jameson 1957] states that the outputs of the rods and cones are encoded as red-green, yellow-blue, and white-black opponent channels. We model loss of saturation and blue shift in this opponent color space.

Rendering Skewed Plane of Sharp Focus and Associated Depth of Field

Brian Barsky, Egon Pasztor SIGGRAPH 2004

Depth of field is the region of a scene that is in focus in an image. This is measured relative to a plane of sharp focus. When using a physical camera, this plane is perpendicular to the optical axis of the camera lens, unless the camera is a view camera. This special camera enables many effects, including skewing the plane of sharp focus and associated depth of field. Using a view camera, the photographer can position and orient the lens plane and film plane independently; in fact, the film plane need not be perpendicular to the optical axis of the lens. This enables the photographer to control two unique types of effects: perspective correction, and arbitrary orientation of the plane of sharp focus anywhere in the viewing volume. Perspective correction is vital for architecture photography, where it is desirable to maintain parallel vertical lines even when the view direction is angled up from the horizontal, as is the case, for example, in photographing a tall building from ground level. Vertical lines converge when they are not parallel to the film plane. This effect is not discussed in this sketch. The ability to orient the plane of sharp focus seems to be unknown in computer graphics. Whenever depth of field has been rendered, it is always aligned with the viewing direction. Previous algorithms for rendering images with depth of field did not recognize that it can be possible for the volume of space that is "in focus" to be at any orientation with respect to the viewing direction (see Fig. 1). The effect is possible with a physical camera in the case of a view camera.

Vision-Realistic Rendering: Simulation of the Scanned Foveal Image from Wavefront Data of Human Subjects

Brian Barsky APGV 2004

We introduce the concept of vision-realistic rendering, the computer generation of synthetic images that incorporate the characteristics of a particular individual's entire optical system. Specifically, this paper develops a method for simulating the scanned foveal image from wavefront data of actual human subjects, and demonstrates those methods on sample images. First, a subject's optical system is measured by a Shack-Hartmann wavefront aberrometry device. This device outputs a measured wavefront which is sampled to calculate an object space point spread function (OSPSF). The OSPSF is then used to blur input images. This blurring is accomplished by creating a set of depth images, convolving them with the OSPSF, and finally compositing to form a vision-realistic rendered image. Applications of vision-realistic rendering in computer graphics as well as in optometry and ophthalmology are discussed.

Efficient BRDF Importance Sampling Using a Factored Representation

Jason Lawrence, Szymon Rusinkiewicz, Ravi Ramamoorthi SIGGRAPH 2004

High-quality Monte Carlo image synthesis requires the ability to importance sample realistic BRDF models. However, analytic sampling algorithms exist only for the Phong model and its derivatives such as Lafortune and Blinn-Phong. This paper demonstrates an importance sampling technique for a wide range of BRDFs, including complex analytic models such as Cook-Torrance and measured materials, which are being increasingly used for realistic image synthesis. Our approach is based on a compact factored representation of the BRDF that is optimized for sampling. We show that our algorithm consistently offers better efficiency than alternatives that involve fitting and sampling a Lafortune or Blinn-Phong lobe, and is more compact than sampling strategies based on tabulating the full BRDF. We are able to efficiently create images involving multiple measured and analytic BRDFs, under both complex direct lighting and global illumination.

Triple Product Wavelet Integrals for All-Frequency Relighting

Ren Ng, Ravi Ramamoorthi, Pat Hanrahan SIGGRAPH 2004

This paper focuses on efficient rendering based on pre-computed light transport, with realistic materials and shadows under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low frequency lighting and materials). We propose a new mathematical and computational analysis of pre-computed light transport. We use factored forms, separately pre-computing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating non-linear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.

Spectral Surface Reconstruction from Noisy Point Clouds

Ravi Kolluri, Jonathan Shewchuk, James F. O'Brien SGP 2004

We introduce a noise-resistant algorithm for reconstructing a watertight surface from point cloud data. It forms a Delaunay tetrahedralization, then uses a variant of spectral graph partitioning to decide whether each tetrahedron is inside or outside the original object. The reconstructed surface triangulation is the set of triangular faces where inside and outside tetrahedra meet. Because the spectral partitioner makes local decisions based on a global view of the model, it can ignore outliers, patch holes and undersampled regions, and surmount ambiguity due to measurement errors. Our algorithm can optionally produce a manifold surface. We present empirical evidence that our implementation is substantially more robust than several closely related surface reconstruction programs.

Practical Rendering of Multiple Scattering Effects in Participating Media

Simon Premože, Michael Ashikhmin, Ravi Ramamoorthi, Shree K. Nayar EGSR 2004

Volumetric light transport effects are significant for many materials like skin, smoke, clouds, snow or water. In particular, one must consider the multiple scattering of light within the volume. While it is possible to simulate such media using volumetric Monte Carlo or finite element techniques, those methods are very computationally expensive. On the other hand, simple analytic models have so far been limited to homogeneous and/or optically dense media and cannot be easily extended to include strongly directional effects and visibility in spatially varying volumes. We present a practical method for rendering volumetric effects that include multiple scattering. We show an expression for the point spread function that captures blurring of radiance due to multiple scattering. We develop a general framework for incorporating this point spread function, while considering inhomogeneous media—this framework could also be used with other analytic multiple scattering models.

An Energy-Driven Approach to Linkage Unfolding

Jason Cantarella, Erik Demaine, Hayley Iben, James F. O'Brien SoCG 2004

We present a new algorithm for unfolding planar polygonal linkages without self-intersection based on following the gradient flow of a "repulsive" energy function. This algorithm has several advantages over previous methods. (1) The output motion is represented explicitly and exactly as a piecewise-linear curve in angle space. As a consequence, an exact snapshot of the linkage at any time can be extracted from the output in strongly polynomial time (on a real RAM supporting arithmetic, sin, and arcsin). (2) Each linear step of the motion can be computed exactly in O(n^2) time on a real RAM, where n is the number of vertices. (3) We explicitly bound the number of linear steps (and hence running time) as a polynomial in n and the ratio between the maximum edge length and the initial minimum distance between a vertex and an edge. (4) Our method is practical and easy to implement. We provide a publicly accessible Java applet that implements the algorithm. Best paper award at SoCG 2004.
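
A much-simplified sketch of the gradient-flow idea: an inverse-square repulsive energy between non-adjacent vertices, explicit descent steps, and an iterative projection back onto fixed edge lengths. The paper's exact energy and its exact piecewise-linear angle-space computation are not reproduced here.

```python
import numpy as np

def repulsive_energy_grad(P):
    """Gradient of the toy energy sum_{i<j, non-adjacent} 1/|Pi - Pj|^2 over
    the vertices P of a polygonal arc (a stand-in for the paper's energy)."""
    n = len(P)
    g = np.zeros_like(P)
    for i in range(n):
        for j in range(i + 2, n):          # skip adjacent (bonded) pairs
            d = P[i] - P[j]
            r2 = d @ d
            coef = -2.0 / (r2 * r2)        # d/dPi of 1/r^2 = -2 (Pi - Pj) / r^4
            g[i] += coef * d
            g[j] -= coef * d
    return g

def restore_edge_lengths(P, lengths, iters=50):
    """Project back onto the constraint that edge lengths stay fixed."""
    for _ in range(iters):
        for i in range(len(P) - 1):
            d = P[i + 1] - P[i]
            cur = np.linalg.norm(d)
            corr = (cur - lengths[i]) * d / (2 * cur)
            P[i] += corr
            P[i + 1] -= corr
    return P

# A tightly folded arc gradually opens up under gradient descent on the energy.
P = np.array([[0, 0], [1, 0], [1, 0.1], [0, 0.2], [0, 0.3], [1, 0.4]], float)
lengths = np.linalg.norm(np.diff(P, axis=0), axis=1)
for _ in range(200):
    P = restore_edge_lengths(P - 1e-4 * repulsive_energy_grad(P), lengths)
```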

Ravi Ramamoorthi, Melissa Koudelka, Peter Belhumeur ECCV 2004

Cast shadows can be significant in many computer vision applications such as lighting-insensitive recognition and surface reconstruction. However, most algorithms neglect them, primarily because they involve non-local interactions in non-convex regions, making formal analysis difficult. While general cast shadowing situations can be arbitrarily complex, many real instances map closely to canonical configurations like a wall, a V-groove type structure, or a pitted surface. In particular, we experiment on 3D textures like moss, gravel and a kitchen sponge, whose surfaces include canonical cast shadowing situations like V-grooves. This paper shows theoretically that many shadowing configurations can be mathematically analyzed using convolutions and Fourier basis functions. Our analysis exposes the mathematical convolution structure of cast shadows, and shows strong connections to recently developed signal-processing frameworks for reflection and illumination. An analytic convolution formula is derived for a 2D V-groove, which is shown to correspond closely to many common shadowing situations, especially in 3D textures. Numerical simulation is used to extend these results to general 3D textures. These results also provide evidence that a common set of illumination basis functions may be appropriate for representing lighting variability due to cast shadows in many 3D textures. We derive a new analytic basis suited for 3D textures to represent illumination on the hemisphere, with some advantages over commonly used Zernike polynomials and spherical harmonics. New experiments on analyzing the variability in appearance of real 3D textures with illumination motivate and validate our theoretical analysis. Empirical results show that illumination eigenfunctions often correspond closely to Fourier bases, while the eigenvalues drop off significantly slower than those for irradiance on a Lambertian curved surface. These new empirical results are explained in this paper, based on our theory.

Motion Synthesis from Annotations

Okan Arikan, David Forsyth, James F. O'Brien SIGGRAPH 2003

This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations --- like walk, run or jump --- from a vocabulary which is freely chosen by the user. The system then assembles frames from a motion database so that the final motion performs the specified actions at specified times. The motion can also be forced to pass through particular configurations at particular times, and to go to a particular position and orientation. Annotations can be painted positively (for example, must run), negatively (for example, may not run backwards) or as a don't-care. The system uses a novel search method, based around dynamic programming at several scales, to obtain a solution efficiently so that authoring is interactive. Our results demonstrate that the method can generate smooth, natural-looking motion. The annotation vocabulary can be chosen to fit the application, and allows specification of composite motions (run and jump simultaneously, for example). The process requires a collection of motion data that has been annotated with the chosen vocabulary. This paper also describes an effective tool, based around repeated use of support vector machines, that allows a user to annotate a large collection of motions quickly and easily so that they may be used with the synthesis algorithm.

Animating Suspended Particle Explosions

Bryan Feldman, James F. O'Brien, Okan Arikan SIGGRAPH 2003

This paper describes a method for animating suspended particle explosions. Rather than modeling the numerically troublesome, and largely invisible blast wave, the method uses a relatively stable incompressible fluid model to account for the motion of air and hot gases. The fluid's divergence field is adjusted directly to account for detonations and the generation and expansion of gaseous combustion products. Particles immersed in the fluid track the motion of particulate fuel and soot as they are advected by the fluid. Combustion is modeled using a simple but effective process governed by the particle and fluid systems. The method has enough flexibility to also approximate sprays of burning liquids. This paper includes several demonstrative examples showing air bursts, explosions near obstacles, confined explosions, and burning sprays. Because the method is based on components that allow large time integration steps, it only requires a few seconds of computation per frame for the examples shown.

All-Frequency Shadows Using Non-linear Wavelet Lighting Approximation

Ren Ng, Ravi Ramamoorthi, Pat Hanrahan SIGGRAPH 2003

We present a method, based on pre-computed light transport, for real-time rendering of objects under all-frequency, time-varying illumination represented as a high-resolution environment map. Current techniques are limited to small area lights, with sharp shadows, or large low-frequency lights, with very soft shadows. Our main contribution is to approximate the environment map in a wavelet basis, keeping only the largest terms (this is known as a non-linear approximation). We obtain further compression by encoding the light transport matrix sparsely but accurately in the same basis. Rendering is performed by multiplying a sparse light vector by a sparse transport matrix, which is very fast. For accurate rendering, using non-linear wavelets is an order of magnitude faster than using linear spherical harmonics, the current best technique.
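
A minimal sketch of the non-linear approximation step, using a 1D orthonormal Haar transform and keeping only the largest coefficients; relighting a pixel then reduces to a dot product between the truncated lighting vector and a transport row stored in the same basis. The 2D wavelet machinery and the sparse encoding of the real transport matrix are omitted.

```python
import numpy as np

def haar_1d(signal):
    """Orthonormal 1D Haar wavelet transform (length must be a power of two)."""
    out = signal.astype(float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        avg = (out[0:n:2] + out[1:n:2]) / np.sqrt(2.0)
        det = (out[0:n:2] - out[1:n:2]) / np.sqrt(2.0)
        out[:half], out[half:n] = avg, det
        n = half
    return out

def keep_largest(coeffs, k):
    """Non-linear approximation: zero out all but the k largest-magnitude terms."""
    keep = np.argsort(np.abs(coeffs))[-k:]
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return sparse

rng = np.random.default_rng(4)
light = rng.random(256) ** 8           # a "peaky" environment row: few bright texels
L = keep_largest(haar_1d(light), 16)   # truncated lighting vector in the Haar basis
T_row = haar_1d(rng.random(256))       # one transport row, expressed in the same basis
pixel = L @ T_row                      # approximate relighting of one pixel
```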

Spectral Watertight Surface Reconstruction

Ravi Kolluri, Jonathan Shewchuk, James F. O'Brien SIGGRAPH 2003 Tech Sketch

We use spectral partitioning to reconstruct a watertight surface from point cloud data. This method is particularly effective for noisy and undersampled point sets with outliers, because decisions about the reconstructed surface are based on a global view of the model.

Investigating Occlusion and Discretization Problems in Image Space Blurring Techniques

Brian Barsky, Michael J. Tobias, Daniel R. Horn, Derrick P. Chu VVG 2003

Traditional computer graphics methods render images that appear sharp at all depths. Adding blur can add realism to a scene, provide a sense of scale, and draw a viewer's attention to a particular region of a scene. Our image based blur algorithm needs to distinguish whether a portion of an image is either from a single object or is part of more than one object. This motivates two approaches to identify objects after an image has been rendered. We illustrate how these techniques can be used in conjunction with our image space method to add blur to a scene.

Interactive Deformation Using Modal Analysis with Constraints

Kris Hauser, Chen Shen, James F. O'Brien Graphics Interface 2003

Modal analysis provides a powerful tool for efficiently simulating the behavior of deformable objects. This paper shows how manipulation, collision, and other constraints may be implemented easily within a modal framework. Results are presented for several example simulations. These results demonstrate that for many applications the errors introduced by linearization are acceptable, and that the resulting simulations are fast and stable even for complex objects and stiff materials.
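
A bare-bones sketch of the modal decomposition itself, using SciPy's generalized eigensolver and a simple semi-implicit Euler step per mode; the exact per-mode integration and the constraint handling that are the paper's focus are omitted, and the tiny spring chain is a made-up test case.

```python
import numpy as np
from scipy.linalg import eigh

def modal_decompose(K, M, n_modes):
    """Solve the generalized eigenproblem K phi = w^2 M phi and keep the
    lowest-frequency modes; deformation is then simulated as decoupled
    oscillators in modal coordinates."""
    w2, Phi = eigh(K, M)
    return np.sqrt(np.maximum(w2[:n_modes], 0.0)), Phi[:, :n_modes]

def step_modes(q, qdot, omega, f_modal, dt, zeta=0.01):
    """One semi-implicit Euler step of each decoupled damped oscillator
    q'' + 2*zeta*w*q' + w^2 q = f (a stand-in for exact modal integration)."""
    qdot = qdot + dt * (f_modal - omega ** 2 * q - 2 * zeta * omega * qdot)
    q = q + dt * qdot
    return q, qdot

# Tiny chain of 4 unit masses joined by unit springs.
n = 4
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
omega, Phi = modal_decompose(K, M, n_modes=2)
q, qdot = np.zeros(2), np.zeros(2)
f = Phi.T @ np.array([0.0, 0.0, 0.0, 1.0])   # external force projected into modes
for _ in range(100):
    q, qdot = step_modes(q, qdot, omega, f, dt=0.01)
displacement = Phi @ q                        # back to nodal displacements
```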

Camera Models and Optical Systems Used in Computer Graphics: Part I, Object Based Techniques

Brian Barsky, Daniel R. Horn, Stanley A. Klein, Jeffrey A. Pang, Meng Yu ICCSA 03

Images rendered with traditional computer graphics techniques, such as scanline rendering and ray tracing, appear focused at all depths. However, there are advantages to having blur, such as adding realism to a scene or drawing attention to a particular place in a scene. In this paper we describe the optics underlying camera models that have been used in computer graphics, and present object space techniques for rendering with those models. In our companion paper [3], we survey image space techniques to simulate these models. These techniques vary in both speed and accuracy.

Camera Models and Optical Systems Used in Computer Graphics: Part II, Image Based Techniques

In our companion paper [5], we described the optics underlying camera models that have been used in computer graphics, and presented object space techniques for rendering with those models. In this paper, we survey image space techniques to simulate these models, and address topics including linear filtering, ray distribution buffers, light fields, and simulation techniques for interactive applications.

Structured Importance Sampling of Environment Maps

Sameer Agarwal, Ravi Ramamoorthi, Serge J. Belongie, Henrik Wann Jensen SIGGRAPH 2003

We introduce structured importance sampling, a new technique for efficiently rendering scenes illuminated by distant natural illumination given in an environment map. Our method handles occlusion, high-frequency lighting, and is significantly faster than alternative methods based on Monte Carlo sampling. We achieve this speedup as a result of several ideas. First, we present a new metric for stratifying and sampling an environment map taking into account both the illumination intensity as well as the expected variance due to occlusion within the scene. We then present a novel hierarchical stratification algorithm that uses our metric to automatically stratify the environment map into regular strata. This approach enables a number of rendering optimizations, such as pre-integrating the illumination within each stratum to eliminate noise at the cost of adding bias, and sorting the strata to reduce the number of sample rays. We have rendered several scenes illuminated by natural lighting, and our results indicate that structured importance sampling is better than the best previous Monte Carlo techniques, requiring one to two orders of magnitude fewer samples for the same image quality.

James Davis, Ravi Ramamoorthi, Szymon Rusinkiewicz CVPR 2003

Using Specularities for Recognition

Margarita Osadchy, David Jacobs, Ravi Ramamoorthi ICCV 2003

Recognition systems have generally treated specular highlights as noise. We show how to use these highlights as a positive source of information that improves recognition of shiny objects. This also enables us to recognize very challenging shiny transparent objects, such as wine glasses. Specifically, we show how to find highlights that are consistent with a hypothesized pose of an object of known 3D shape. We do this using only a qualitative description of highlight formation that is consistent with most models of specular reflection, so no specific knowledge of an object's reflectance properties is needed. We first present a method that finds highlights produced by a dominant compact light source, whose position is roughly known. We then show how to estimate the lighting automatically for objects whose reflection is part specular and part Lambertian. We demonstrate this method for two classes of objects. First, we show that specular information alone can suffice to identify objects with no Lambertian reflectance, such as transparent wine glasses. Second, we use our complete system to recognize shiny objects, such as pottery.

Jason Cantarella, Erik Demaine, Hayley Iben, James F. O'Brien DIMACS 2002

In this paper, we introduce a new energy-driven approach for straightening polygonal arcs and convexifying polygonal cycles without self-intersection based on following the gradient flow of a "repulsive" energy function.

Modelling with Implicit Surfaces that Interpolate

Greg Turk, James F. O'Brien TOG

We introduce new techniques for modelling with interpolating implicit surfaces. This form of implicit surface was first used for problems of surface reconstruction and shape transformation, but the emphasis of our work is on model creation. These implicit surfaces are described by specifying locations in 3D through which the surface should pass, and also identifying locations that are interior or exterior to the surface. A 3D implicit function is created from these constraints using a variational scattered data interpolation approach, and the iso-surface of this function describes a surface. Like other implicit surface descriptions, these surfaces can be used for CSG and interference detection, may be interactively manipulated, are readily approximated by polygonal tilings, and are easy to ray trace. A key strength for model creation is that interpolating implicit surfaces allow the direct specification of both the location of points on the surface and the surface normals. These are two important manipulation techniques that are difficult to achieve using other implicit surface representations such as sums of spherical or ellipsoidal Gaussian functions ("blobbies"). We show that these properties make this form of implicit surface particularly attractive for interactive sculpting using the particle sampling technique introduced by Witkin and Heckbert. Our formulation also yields a simple method for converting a polygonal model to a smooth implicit model, as well as a new way to form blends between objects.
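
A small sketch of variational scattered-data interpolation of this flavor, here with a polyharmonic r^3 kernel plus a linear polynomial and point constraints that are zero on the surface and negative at an interior point; the kernel choice and constraint values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_interpolating_implicit(points, values):
    """Fit f(x) = sum_i w_i |x - p_i|^3 + c0 + c.x so that f(p_i) = values[i];
    the zero set of f is the interpolating implicit surface."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = dist ** 3
    P = np.hstack([np.ones((n, 1)), points])             # 1, x, y, z
    M = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.concatenate([values, np.zeros(4)])
    sol = np.linalg.solve(M, rhs)
    w, c = sol[:n], sol[n:]

    def f(x):
        r = np.linalg.norm(x - points, axis=-1)
        return w @ (r ** 3) + c[0] + c[1:] @ x
    return f

# Constraint points on a unit sphere (value 0) plus one interior point (value -1).
rng = np.random.default_rng(5)
pts = rng.standard_normal((30, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts = np.vstack([pts, [[0.0, 0.0, 0.0]]])
vals = np.concatenate([np.zeros(30), [-1.0]])
f = fit_interpolating_implicit(pts, vals)
print(f(np.array([0.0, 0.0, 1.01])))   # close to zero near the sphere
print(f(np.array([0.0, 0.0, 2.0])))    # typically positive well outside
```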

Graphical Modeling and Animation of Ductile Fracture

James F. O'Brien, Adam Bargteil, Jessica Hodgins SIGGRAPH 2002

In this paper, we describe a method for realistically animating ductile fracture in common solid materials such as plastics and metals. The effects that characterize ductile fracture occur due to interaction between plastic yielding and the fracture process. By modeling this interaction, our ductile fracture method can generate realistic motion for a much wider range of materials than could be realized with a purely brittle model. This method directly extends our prior work on brittle fracture [O'Brien and Hodgins, SIGGRAPH 99]. We show that adapting that method to ductile as well as brittle materials requires only a simple to implement modification that is computationally inexpensive. This paper describes this modification and presents results demonstrating some of the effects that may be realized with it.

Synthesizing sounds from rigid-body simulations

James F. O'Brien, Chen Shen, Christine Gatchalian SCA 2002

This paper describes a real-time technique for generating realistic and compelling sounds that correspond to the motions of rigid objects. By numerically precomputing the shape and frequencies of an object's deformation modes, audio can be synthesized interactively directly from the force data generated by a standard rigid-body simulation. Using sparse-matrix eigen-decomposition methods, the deformation modes can be computed efficiently even for large meshes. This approach allows us to accurately model the sounds generated by arbitrarily shaped objects based only on a geometric description of the objects and a handful of material parameters.
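A minimal sketch of modal sound synthesis in this spirit (all frequencies, dampings, and gains below are made up for demonstration and are not data from the paper): each deformation mode contributes a damped sinusoid whose amplitude is set by how strongly a contact impulse excites it, and the output waveform is their sum.

    # Illustrative modal synthesis: sound = sum of damped sinusoids, one per
    # mode, excited by impulses from a rigid-body simulation.
    import numpy as np

    def synthesize(impulses, freqs, dampings, gains, sr=44100, duration=1.0):
        """impulses: list of (time_in_seconds, magnitude) contact events."""
        t = np.arange(int(sr * duration)) / sr
        audio = np.zeros_like(t)
        for t0, mag in impulses:
            tau = t - t0
            active = tau >= 0.0
            for f, d, g in zip(freqs, dampings, gains):
                audio[active] += (mag * g * np.exp(-d * tau[active])
                                  * np.sin(2 * np.pi * f * tau[active]))
        return audio

    # usage: two strikes on an object with three hypothetical modes
    audio = synthesize(impulses=[(0.1, 1.0), (0.5, 0.6)],
                       freqs=[440.0, 1130.0, 2480.0],
                       dampings=[6.0, 9.0, 14.0],
                       gains=[1.0, 0.5, 0.25])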

Modal Analysis for Real-Time Viscoelastic Deformation

Chen Shen, Kris Hauser, Christine Gatchalian, James F. O'Brien SIGGRAPH 2002 Tech Sketch

This technical sketch describes how a standard analysis technique known as modal decomposition can be used for real-time modeling of viscoelastic deformation. While most prior work on interactive deformation has relied on geometrically simple models and advantageously selected material parameters to achieve interactive speeds, the approach described here has two qualities that we believe should be required of a real-time deformation method: the simulation cost is decoupled both from the model's geometric complexity and from the stiffness of the material parameters. Additionally, the simulation may be advanced at arbitrarily large time-steps without introducing objectionable errors such as artificial damping.
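To see why cost can be decoupled from stiffness and step size, here is a minimal sketch (my own illustration, not code from the sketch itself) of advancing a single decoupled modal coordinate with the closed-form solution of an underdamped harmonic oscillator: because the update is exact, any step size is stable and adds no numerical damping.

    # Illustrative closed-form update for one modal coordinate obeying
    # q'' + 2*zeta*omega*q' + omega^2*q = 0 (underdamped case).
    import math

    def advance_mode(q, v, omega, zeta, h):
        a = zeta * omega
        wd = omega * math.sqrt(1.0 - zeta * zeta)     # damped frequency
        e = math.exp(-a * h)
        c2 = (v + a * q) / wd
        cos_, sin_ = math.cos(wd * h), math.sin(wd * h)
        q_new = e * (q * cos_ + c2 * sin_)
        v_new = e * (v * cos_ - (a * c2 + wd * q) * sin_)
        return q_new, v_new

    # usage: a very stiff mode advanced with a huge step stays bounded
    q, v = 1.0, 0.0
    q, v = advance_mode(q, v, omega=2000.0, zeta=0.01, h=0.1)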

Modeling the Accumulation of Wind-Driven Snow

Bryan Feldman, James F. O'Brien SIGGRAPH 2002 Tech Sketch

This technical sketch presents a method for modeling the appearance of snow drifts formed by the accumulation of wind-blown snow near buildings and other obstacles. Our method combines previous work on snow accumulation with techniques for incompressible fluid flows. By computing the three-dimensional flow of air in the volume around the obstacles our method is able to model how the snow is convected, deposited, and lifted by the wind. The results demonstrate realistic snow accumulation patterns with deep windward and leeward drifts, furrows, and low accumulation in wind shadowed areas.

Frequency Space Environment Map Rendering

Ravi Ramamoorthi, Pat Hanrahan SIGGRAPH 2002

We present a new method for real-time rendering of objects with complex isotropic BRDFs under distant natural illumination, as specified by an environment map. Our approach is based on spherical frequency space analysis and includes three main contributions. Firstly, we are able to theoretically analyze required sampling rates and resolutions, which have traditionally been determined in an ad-hoc manner. We also introduce a new compact representation, which we call a spherical harmonic reflection map (SHRM), for efficient representation and rendering. Finally, we show how to rapidly prefilter the environment map to compute the SHRM---our frequency domain prefiltering algorithm is generally orders of magnitude faster than previous angular (spatial) domain approaches.

Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object

Ravi Ramamoorthi PAMI 2001

We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. Since the lighting is an arbitrary function, the space of all possible images is formally infinite-dimensional. However, previous empirical work has shown that images of largely diffuse objects actually lie very close to a 5-dimensional subspace. In this paper, we analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination, and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available, and the spherical harmonics are no longer orthonormal over the restricted domain. Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations. Our analysis is also likely to be of interest in other areas of computer vision and image-based rendering. In particular, our results indicate that using complex illumination for photometric problems in computer vision is not significantly more difficult than using directional sources.

Synthesizing Sounds from Physically Based Motion

James F. O'Brien, Perry R. Cook, Georg Essl SIGGRAPH 2001

The goal of this work is to develop techniques for approximating sounds that are generated by the motions of solid objects. Our method builds on previous work in the field of physically based animation that uses deformable models to simulate the behavior of the solid objects. As the motions of the objects are computed, their surfaces are analyzed to determine how the motion will induce acoustic pressure waves in the surrounding medium. The waves are propagated to the listener where the results are used to generate sounds corresponding to the behavior of the simulated objects.

Image Based Rendering and Illumination Using Spherical Mosaics

Chen Shen, Heung-Yeung Shum, James F. O'Brien SIGGRAPH 2001 Tech Sketch

The work described here extends the concentric mosaic representation developed by Shum and He to spherical mosaics that allow the viewer greater freedom of movement. Additionally, by precomputing maps for diffuse and specular lighting terms, we use high dynamic range image data to compute realistic illumination for objects that can be interactively manipulated within the scene.

Implicit Surfaces that Interpolate

Greg Turk, Huong Quynh Dinh, James F. O'Brien, Gary Yngve Shape Modeling International 2001

Implicit surfaces are often created by summing a collection of radial basis functions. Recently, researchers have begun to create implicit surfaces that exactly interpolate a given set of points by solving a simple linear system to assign weights to each basis function. Due to their ability to interpolate, these implicit surfaces are more easily controllable than traditional “blobby” implicits. There are several additional forms of control over these surfaces that make them attractive for a variety of applications. Surface normals may be directly specified at any location over the surface, and this allows the modeller to pivot the normal while still having the surface pass through the constraints. The degree of smoothness of the surface can be controlled by changing the shape of the basis functions, allowing the surface to be pinched or smooth. On a point-by-point basis the modeller may decide whether a constraint point should be exactly interpolated or approximated. Applications of these implicits include shape transformation, creating surfaces from computer vision data, creation of an implicit surface from a polygonal model, and medical surface reconstruction.

Analysis of Planar Light Fields From Homogeneous Convex Curved Surfaces Under Distant Illumination

Ravi Ramamoorthi, Pat Hanrahan

We consider the flatland or 2D properties of the light field generated when a homogeneous convex curved surface reflects a distant illumination field. Besides being of considerable theoretical interest, this problem has applications in computer vision and graphics---for instance, in determining lighting and bidirectional reflectance distribution functions (BRDFs), in rendering environment maps, and in image-based rendering. We demonstrate that the integral for the reflected light transforms to a simple product of coefficients in Fourier space. Thus, the operation of rendering can be viewed in simple signal processing terms as a filtering operation that convolves the incident illumination with the BRDF. This analysis leads to a number of interesting observations for computer graphics, computer vision, and visual perception.

An Efficient Representation for Irradiance Environment Maps

Ravi Ramamoorthi, Pat Hanrahan SIGGRAPH 2001

We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. In other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters. In fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the Cartesian components of the surface normal, and give explicit formulae. These observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering.
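The 9-coefficient representation can be evaluated as a quadratic form in the surface normal. Below is a compact sketch of that evaluation using the constants reported in the paper; treat the exact matrix layout here as my transcription and consult the paper for the authoritative formula.

    # Irradiance from 9 spherical-harmonic lighting coefficients L_{lm},
    # evaluated as E(n) = n^T M n with n = (x, y, z, 1).
    # Constants c1..c5 follow Ramamoorthi & Hanrahan, SIGGRAPH 2001; the
    # matrix transcription below should be checked against the paper.
    import numpy as np

    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

    def irradiance_matrix(L):
        """L: dict keyed by (l, m) for l <= 2 holding lighting coefficients."""
        return np.array([
            [c1*L[2, 2],  c1*L[2, -2], c1*L[2, 1],  c2*L[1, 1]],
            [c1*L[2, -2], -c1*L[2, 2], c1*L[2, -1], c2*L[1, -1]],
            [c1*L[2, 1],  c1*L[2, -1], c3*L[2, 0],  c2*L[1, 0]],
            [c2*L[1, 1],  c2*L[1, -1], c2*L[1, 0],  c4*L[0, 0] - c5*L[2, 0]],
        ])

    def irradiance(normal, M):
        n = np.append(normal / np.linalg.norm(normal), 1.0)
        return n @ M @ n

    # usage with an arbitrary, made-up set of lighting coefficients
    L = {(0, 0): 0.8, (1, -1): 0.1, (1, 0): 0.3, (1, 1): 0.05,
         (2, -2): 0.02, (2, -1): 0.01, (2, 0): 0.12, (2, 1): 0.03, (2, 2): 0.02}
    print(irradiance(np.array([0.0, 0.0, 1.0]), irradiance_matrix(L)))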

A Signal-Processing Framework for Inverse Rendering

Ravi Ramamoorthi, Pat Hanrahan SIGGRAPH 2001

Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.

On the relationship between Radiance and Irradiance: Determining the illumination from images of a convex Lambertian object

Ravi Ramamoorthi, Pat Hanrahan JOSA 2001

We present a theoretical analysis of the relationship between incoming radiance and irradiance. Specifically, we address the question of whether it is possible to compute the incident radiance from knowledge of the irradiance at all surface orientations. This is a fundamental question in computer vision and inverse radiative transfer. We show that the irradiance can be viewed as a simple convolution of the incident illumination, i.e. radiance, and a clamped cosine transfer function. Estimating the radiance can then be seen as a deconvolution operation. We derive a simple closed-form formula for the irradiance in terms of spherical-harmonic coefficients of the incident illumination and demonstrate that the odd-order modes of the lighting with order greater than one are completely annihilated. Therefore, these components cannot be estimated from the irradiance, contradicting a theorem due to Preisendorfer. A practical realization of the radiance-from-irradiance problem is the estimation of the lighting from images of a homogeneous convex curved Lambertian surface of known geometry under distant illumination, since a Lambertian object reflects light equally in all directions proportional to the irradiance. We briefly discuss practical and physical considerations, and describe a simple experimental test to verify our theoretical results.

Animating Fracture

James F. O'Brien, Jessica Hodgins CACM

We have developed a simulation technique that uses non-linear finite element analysis and elastic fracture mechanics to compute physically plausible motion for three-dimensional, solid objects as they break, crack, or tear. When these objects deform beyond their mechanical limits, the system automatically determines where fractures should begin and in what directions they should propagate. The system allows fractures to propagate in arbitrary directions by dynamically restructuring the elements of a tetrahedral mesh. Because cracks are not limited to the original element boundaries, the objects can form irregularly shaped shards and edges as they shatter. The result is realistic fracture patterns such as the ones shown in our examples. This paper presents an overview of the fracture algorithm; the details are presented in our ACM SIGGRAPH 1999 and 2002 papers.

Combining Active and Passive Simulations for Secondary Motion

James F. O'Brien, Victor Zordan, Jessica Hodgins CG&A

Objects that move in response to the actions of a main character often make an important contribution to the visual richness of an animated scene. We use the term "secondary motion" to refer to passive motions generated in response to the movements of characters and other objects or environmental forces. Secondary motions aren't normally the main focus of an animated scene, yet their absence can distract or disturb the viewer, destroying the illusion of reality created by the scene. We describe how to generate secondary motion by coupling physically based simulations of passive objects to actively controlled characters.

Animating Explosions

Gary D. Yngve, James F. O'Brien, Jessica K. Hodgins SIGGRAPH 2000

In this paper, we introduce techniques for animating explosions and their effects. The primary effect of an explosion is a disturbance that causes a shock wave to propagate through the surrounding medium. This disturbance determines the behavior of nearly all other secondary effects seen in explosions. We simulate the propagation of an explosion through the surrounding air using a computational fluid dynamics model based on the equations for compressible, viscous flow. To model the numerically stable formation of shocks along blast wave fronts, we employ an integration method that can handle steep gradients without introducing inappropriate damping. The system includes two-way coupling between solid objects and surrounding fluid. Using this technique, we can generate a variety of effects including shaped explosive charges, a projectile propelled from a chamber by an explosion, and objects damaged by a blast. With appropriate rendering techniques, our explosion model can be used to create visual effects such as fireballs, dust clouds, and the refraction of light caused by a blast wave.

Automatic Joint Parameter Estimation from Magnetic Motion Capture Data

James F. O'Brien, Robert Bodenheimer, Gabriel Brostow, Jessica Hodgins GI 2000

This paper describes a technique for using magnetic motion capture data to determine the joint parameters of an articulated hierarchy. This technique makes it possible to determine limb lengths, joint locations, and sensor placement for a human subject without external measurements. Instead, the joint parameters are inferred with high accuracy from the motion data acquired during the capture session. The parameters are computed by performing a linear least squares fit of a rotary joint model to the input data. A hierarchical structure for the articulated model can also be determined in situations where the topology of the model is not known. Once the system topology and joint parameters have been recovered, the resulting model can be used to perform forward and inverse kinematic procedures. We present the results of using the algorithm on human motion capture data, as well as validation results obtained with data from a simulation and a wooden linkage of known dimensions.
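A minimal sketch of one way a joint location can be recovered with a linear least-squares fit (an illustrative center-of-rotation fit from sensor samples, not necessarily the exact formulation used in the paper): points rigidly attached to a limb rotating about a fixed joint lie on a sphere, and linearizing |p - c|^2 = r^2 gives a linear system in the center c.

    # Illustrative center-of-rotation estimate: fit a sphere |p - c|^2 = r^2
    # to sampled positions p_i by linear least squares.  Writing
    # k = |c|^2 - r^2 gives the linear system 2 p_i . c - k = |p_i|^2.
    import numpy as np

    def fit_center_of_rotation(P):
        """P: (n, 3) positions sampled while the limb rotates about an
        (approximately) fixed joint.  Returns the estimated joint center."""
        A = np.hstack([2.0 * P, -np.ones((len(P), 1))])
        b = (P * P).sum(axis=1)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3]                       # x[3] holds |c|^2 - r^2

    # usage: noisy samples on a sphere of radius 0.3 around (1, 2, 0.5)
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    P = np.array([1.0, 2.0, 0.5]) + 0.3 * dirs + 0.002 * rng.normal(size=(200, 3))
    print(fit_center_of_rotation(P))       # approximately [1.0, 2.0, 0.5]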

Efficient Image-Based Methods for Rendering Soft Shadows

Maneesh Agrawala, Ravi Ramamoorthi, Alan Heirich, Laurent Moll SIGGRAPH 2000

We present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. Our methods are related to shadow maps and provide the associated benefits. The computation time and memory requirements for adding soft shadows to an image depend on image size and the number of lights, not geometric scene complexity. We also show that because area light sources are localized in space, soft shadow computations are particularly well suited to image-based rendering techniques. Our first approach---layered attenuation maps---achieves interactive rendering rates, but limits sampling flexibility, while our second method---coherence-based raytracing of depth images---is not interactive, but removes the limitations on sampling and yields high quality images at a fraction of the cost of conventional raytracers. Combining the two algorithms allows for rapid previewing followed by efficient high-quality rendering.

Graphical Modeling and Animation of Brittle Fracture

James F. O'Brien, Jessica Hodgins SIGGRAPH 1999

In this paper, we augment existing techniques for simulating flexible objects to include models for crack initiation and propagation in three-dimensional volumes. By analyzing the stress tensors computed over a finite element model, the simulation determines where cracks should initiate and in what directions they should propagate. We demonstrate our results with animations of breaking bowls, cracking walls, and objects that fracture when they collide. By varying the shape of the objects, the material properties, and the initial conditions of the simulations, we can create strikingly different effects ranging from a wall that shatters when it is hit by a wrecking ball to a bowl that breaks in two when it is dropped on edge. This paper received the SIGGRAPH 99 Impact Award.
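A minimal sketch of the core idea as I read it (simplified; the paper actually builds a separation tensor from the decomposed stress): eigen-decompose an element's stress tensor and, when the largest tensile principal stress exceeds a material threshold, propose a crack plane perpendicular to the corresponding eigenvector.

    # Illustrative crack-initiation test: eigen-decompose a 3x3 symmetric
    # stress tensor; if the largest (tensile) principal stress exceeds a
    # threshold, return the normal of a proposed crack plane.
    # This is a simplification of the paper's separation-tensor criterion.
    import numpy as np

    def crack_plane(stress, toughness):
        """stress: symmetric 3x3 Cauchy stress; toughness: tensile threshold.
        Returns the crack-plane normal, or None if no fracture occurs."""
        eigvals, eigvecs = np.linalg.eigh(stress)   # ascending eigenvalues
        sigma_max = eigvals[-1]
        if sigma_max > toughness:
            return eigvecs[:, -1]                   # normal of the crack plane
        return None

    # usage: uniaxial tension along x, well above a toy threshold
    stress = np.diag([5.0e6, 1.0e5, -2.0e5])
    print(crack_plane(stress, toughness=1.0e6))     # approximately [1, 0, 0]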

Shape Transformation Using Variational Implicit Functions

James F. O'Brien, Greg Turk SIGGRAPH 1999

Traditionally, shape transformation using implicit functions is performed in two distinct steps: 1) creating two implicit functions, and 2) interpolating between these two functions. We present a new shape transformation method that combines these two tasks into a single step. We create a transformation between two N-dimensional objects by casting this as a scattered data interpolation problem in N + 1 dimensions. For the case of 2D shapes, we place all of our data constraints within two planes, one for each shape. These planes are placed parallel to one another in 3D. Zero-valued constraints specify the locations of shape boundaries and positive-valued constraints are placed along the normal direction in towards the center of the shape. We then invoke a variational interpolation technique (the 3D generalization of thin-plate interpolation), and this yields a single implicit function in 3D. Intermediate shapes are simply the zero-valued contours of 2D slices through this 3D function. Shape transformation between 3D shapes can be performed similarly by solving a 4D interpolation problem. To our knowledge, ours is the first shape transformation method to unify the tasks of implicit function creation and interpolation. The transformations produced by this method appear smooth and natural, even between objects of differing topologies. If desired, one or more additional shapes may be introduced that influence the intermediate shapes in a sequence. Our method can also reconstruct surfaces from multiple slices that are not restricted to being parallel to one another.

Animating Sand, Mud, and Snow

Robert Sumner, James F. O'Brien, Jessica Hodgins CGF

Computer animations often lack the subtle environmental changes that should occur due to the actions of the characters. Squealing car tires usually leave no skid marks, airplanes rarely leave jet trails in the sky, and most runners leave no footprints. In this paper, we describe a simulation model of ground surfaces that can be deformed by the impact of rigid body models of animated characters. To demonstrate the algorithms, we show footprints made by a runner in sand, mud, and snow as well as bicycle tire tracks, a bicycle crash, and a falling runner. The shapes of the footprints in the three surfaces are quite different, but the effects were controlled through only five essentially independent parameters. To assess the realism of the resulting motion, we compare the simulated footprints to human footprints in sand.

Creating Generative Models from Range Images

Ravi Ramamoorthi, James Arvo SIGGRAPH '99

We describe a new approach for creating concise high-level generative models from range images or other methods of obtaining approximate point clouds. Using a variety of acquisition techniques and a user-defined class of models, our method produces a compact and intuitive object description that is robust to noise and is easy to edit. The algorithm has two inter-related phases---recognition, which chooses an appropriate model within a user-specified hierarchy, and parameter estimation, which adjusts the model to fit the data as closely as possible. We give a simple method for automatically making tradeoffs between simplicity and accuracy to determine the best model within a given hierarchy. We also describe general techniques to optimize a specific generative model that include methods for curve-fitting, and which exploit sparsity. Using a few simple generative hierarchies that subsume many of the models previously used in computer vision, we demonstrate our approach for model recovery on real and synthetic data.

Perception of Human Motion with Different Geometric Models

Jessica Hodgins, James F. O'Brien, Jack Tumblin TVCG 1998

Human figures have been animated using a variety of geometric models including stick figures, polygonal models, and NURBS-based models with muscles, flexible skin, or clothing. This paper reports on experimental results indicating that a viewer’s perception of motion characteristics is affected by the geometric model used for rendering. Subjects were shown a series of paired motion sequences and asked if the two motions in each pair were the same or different. The motion sequences in each pair were rendered using the same geometric model. For the three types of motion variation tested, sensitivity scores indicate that subjects were better able to observe changes with the polygonal model than they were with the stick figure model.

Animating Sand, Mud, and Snow

Robert Sumner, James F. O'Brien, Jessica Hodgins GI 98

Computer animations often lack the subtle environmental changes that should occur due to the actions of the characters. Squealing car tires usually leave no skid marks, airplanes rarely leave jet trails in the sky, and most runners leave no footprints. In this paper, we describe a simulation model of ground surfaces that can be deformed by the impact of rigid body models of animated characters. To demonstrate the algorithms, we show footprints made by a runner in sand, mud, and snow as well as bicycle tire tracks, a bicycle crash, and a falling runner. The shapes of the footprints in the three surfaces are quite different, but the effects were controlled through only five essentially independent parameters. To assess the realism of the resulting motion, we compare the simulated footprints to video footage of human footprints in sand. Received the Michael A. J. Sweeney award for best student paper.

James F. O'Brien, Victor Zordan, Jessica Hodgins SIGGRAPH 1997 Tech Sketch

The secondary motion of passive objects in the scene is essential for appealing and natural-looking animated characters, but because of the difficulty of controlling the motion of the primary character, most research in computer animation has largely ignored secondary motion. We use dynamic simulation to generate secondary motion. Simulation is an appropriate technique because secondary motion is passive, dictated only by forces from the environment or the primary actor and not from an internal source of energy in the object itself. Secondary motion does not lend itself easily to keyframing, procedural approaches, or motion capture because of the many degrees of freedom that must move in synchrony with the primary motion of the animated figure.

Do Geometric Models Affect Judgments of Human Motion?

Jessica Hodgins, James F. O'Brien, Jack Tumblin GI 97

Human figures have been animated using a wide variety of geometric models including stick figures, polygonal models, and NURBS-based models with muscles, flexible skin, or clothing. This paper reports on experiments designed to ascertain whether a viewer’s perception of motion characteristics is affected by the geometric model used for rendering. Subjects were shown a series of paired motion sequences and asked if the two motions in each pair were “the same” or “different.” The two motion sequences in each pair used the same geometric model. For each trial, the pairs of motion sequences were grouped into two sets where one set was rendered with a stick figure model and the other set was rendered with a polygonal model. Sensitivity measures for each trial indicate that for these sequences subjects were better able to discriminate motion variations with the polygonal model than with the stick figure model.

Fast Construction of Accurate Quaternion Splines

Ravi Ramamoorthi, Alan H. Barr SIGGRAPH '97

In 1992, Barr et al. proposed a method for interpolating orientations with unit quaternion curves by minimizing covariant acceleration. This paper presents a simple improved method which uses cubic basis functions to achieve a speedup of up to three orders of magnitude. A new criterion for automatic refinement based on the Euler-Lagrange error functional is also introduced.
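For context, here is a minimal sketch of plain spherical linear interpolation (slerp) between two unit quaternions, the piecewise baseline that spline methods like this improve on; this snippet is not the paper's method.

    # Baseline quaternion interpolation (slerp) between two unit quaternions,
    # included for context only; the paper's spline minimizes covariant
    # acceleration rather than interpolating piecewise like this.
    import numpy as np

    def slerp(q0, q1, t):
        q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
        d = np.dot(q0, q1)
        if d < 0.0:                     # take the shorter arc
            q1, d = -q1, -d
        if d > 0.9995:                  # nearly parallel: lerp and renormalize
            q = (1 - t) * q0 + t * q1
            return q / np.linalg.norm(q)
        theta = np.arccos(d)
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

    # usage: halfway between identity and a 90-degree rotation about z
    q0 = np.array([1.0, 0.0, 0.0, 0.0])                 # (w, x, y, z)
    q1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(slerp(q0, q1, 0.5))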

Animating Human Athletics

Jessica Hodgins, Wayne Wooten, David Brogan, James F. O'Brien SIGGRAPH 1995

This paper describes algorithms for the animation of men and women performing three dynamic athletic behaviors: running, bicycling, and vaulting. We animate these behaviors using control algorithms that cause a physically realistic model to perform the desired maneuver. For example, control algorithms allow the simulated humans to maintain balance while moving their arms, to run or bicycle at a variety of speeds, and to perform a handspring vault. Algorithms for group behaviors allow a number of simulated bicyclists to ride as a group while avoiding simple patterns of obstacles. We add secondary motion to the animations with spring-mass simulations of clothing driven by the rigid-body motion of the simulated human. For each simulation, we compare the computed motion to that of humans performing similar maneuvers both qualitatively through the comparison of real and simulated video images and quantitatively through the comparison of simulated and biomechanical data.

Dynamic Simulation of Splashing Fluids

James F. O'Brien, Jessica Hodgins Computer Animation 95

In this paper we describe a method for modeling the dynamic behavior of splashing fluids. The model simulates the behavior of a fluid when objects impact or float on its surface. The forces generated by the objects create waves and splashes on the surface of the fluid. To demonstrate the realism and limitations of the model, images from a computer-generated animation are presented and compared with video frames of actual splashes occurring under similar initial conditions.
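A minimal sketch (my own illustration, not the paper's formulation) of the kind of height-field wave model often used for such splash surfaces: the surface height obeys a discretized 2D wave equation with mild damping, and object impacts add local disturbances that then ripple outward.

    # Illustrative 2D height-field wave propagation: h_tt = c^2 * laplacian(h),
    # integrated explicitly; the constant c here absorbs the (assumed) grid
    # spacing and time step, and must stay below the stability limit.
    import numpy as np

    def step(h, h_prev, c=0.3, damping=0.998):
        lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
               np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
        h_next = damping * (2.0 * h - h_prev + (c ** 2) * lap)
        return h_next, h

    # usage: a single "splash" disturbance in the middle of a 64x64 grid
    h = np.zeros((64, 64))
    h_prev = h.copy()
    h[32, 32] = -1.0                     # an impact pushes the surface down
    for _ in range(100):
        h, h_prev = step(h, h_prev)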


Computer Graphics and Multimedia Application

Introduction to computer graphics.



Graphics are defined as any sketch, drawing, or special network that pictorially represents some meaningful information. Computer graphics is used where a set of images needs to be manipulated, or where an image is created in the form of pixels and drawn on the computer. Computer graphics is used in digital photography, film, entertainment, electronic gadgets, and all other core technologies where it is required. It is a vast subject and area in the field of computer science. Computer graphics is used in UI design, rendering, geometric modelling, animation, and much more, and is commonly abbreviated as CG. There are several tools used for the implementation of computer graphics: the most basic is the <graphics.h> header file in Turbo-C, Unity is used for more advanced work, and even OpenGL can be used for its implementation.

The term ‘Computer Graphics’ was coined by Verne Hudson and William Fetter of Boeing, who were pioneers in the field.

Computer Graphics refers to several things:

  • The manipulation and representation of images or data in a graphical manner.
  • The various technologies required for the creation and manipulation of images.
  • Digital synthesis and manipulation of visual content.

Types of Computer Graphics

  • Raster Graphics: In raster graphics, an image is drawn as a grid of pixels. Such an image is also known as a bitmap, in which the picture is broken down into many small pixels; essentially, a bitmap is a large number of pixels taken together.
  • Vector Graphics: In vector graphics, mathematical formulae are used to draw different types of shapes, lines, objects, and so on (a small code sketch contrasting the two representations follows this list).
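A small sketch contrasting the two representations (illustrative only): a raster image stores an explicit grid of pixel values, while a vector description stores only the parameters of shapes and is rasterized on demand, so it can be redrawn at any resolution.

    # Illustrative contrast between raster and vector representations of the
    # same straight line segment.
    import numpy as np

    # Raster: an explicit grid of pixels.
    raster = np.zeros((8, 8), dtype=np.uint8)
    for x in range(8):                  # crude scan-conversion of y = x
        raster[x, x] = 255

    # Vector: only the shape's parameters are stored; pixels are produced
    # on demand at whatever resolution is requested.
    vector_line = {"from": (0.0, 0.0), "to": (1.0, 1.0), "width": 1.0}

    def rasterize(line, size):
        img = np.zeros((size, size), dtype=np.uint8)
        (x0, y0), (x1, y1) = line["from"], line["to"]
        for t in np.linspace(0.0, 1.0, 4 * size):
            x = int(round((x0 + t * (x1 - x0)) * (size - 1)))
            y = int(round((y0 + t * (y1 - y0)) * (size - 1)))
            img[y, x] = 255
        return img

    print(rasterize(vector_line, 16).sum() > 0)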

Applications

  • Computer-aided design for engineering and architectural systems: used in the design of automobiles and of electrical, electro-mechanical, mechanical, and electronic devices, for example gears and bolts.
  • Computer art: for example, MS Paint.
  • Presentation graphics: used to summarize financial, statistical, scientific, or economic data, for example bar charts and line charts.
  • Entertainment: used in motion pictures, music videos, and television and video games.
  • Education and training: used to understand the operation of complex systems, and in specialized training systems for captains, pilots, and so on.
  • Visualization: used to study trends and patterns, for example analyzing satellite photos of the Earth.



Computer Graphics: Recently Published Documents


Beyond categories: dynamic qualitative analysis of visuospatial representation in arithmetic

Visuospatial representations of numbers and their relationships are widely used in mathematics education. These include drawn images, models constructed with concrete manipulatives, enactive/embodied forms, computer graphics, and more. This paper addresses the analytical limitations and ethical implications of methodologies that use broad categorizations of representations and argues the benefits of dynamic qualitative analysis of arithmetical-representational strategy across multiple semi-independent aspects of display, calculation, and interaction. It proposes an alternative methodological approach combining the structured organization of classification with the detailed nuance of description and describes a systematic but flexible framework for analysing nonstandard visuospatial representations of early arithmetic. This approach is intended for use by researchers or practitioners, for interpretation of multimodal and nonstandard visuospatial representations, and for identification of small differences in learners’ developing arithmetical-representational strategies, including changes over time. Application is illustrated using selected data from a microanalytic study of struggling students’ multiplication and division in scenario tasks.

IEEE Transactions on Visualization and Computer Graphics

Elements of the Methodology of Teaching Vector Graphics Based on the Free Graphic Editor LibreOffice Draw at the Level of Basic General Education

The article presents a methodology for teaching the theme "Creation and editing of vector graphic information" in basic school, which can be implemented both in full-time education and using distance learning technologies. The methodology is based on the use of the free vector graphic editor LibreOffice Draw and has been tested over several years of teaching vector computer graphics in the seventh-grade informatics course in full-time education, as well as in a distance learning format in 2020. The authors substantiate the need to develop universal methods of teaching information technologies that are insensitive to the form of education (full-time or using distance educational technologies) and based on the use of free software. Some principles for constructing a methodology for teaching vector graphics based on the new Federal State Educational Standard of Basic General Education are formulated. As the basic operating system used by the teacher, the domestic free operating system "Alt Education 9" is proposed. The article substantiates the choice of the graphic editor LibreOffice Draw as the optimal software tool to support teaching vector graphics in basic school, and formulates the criteria for choosing LibreOffice Draw as a basic tool for studying computer graphics in grades 6–9 and for the implementation of distance learning. A universal scheme for conducting a distance lesson on information technology, in particular vector graphics, based on the use of free cross-platform software is proposed.

The Mathematics of Smoothed Particle Hydrodynamics (SPH) Consistency

Since its inception Smoothed Particle Hydrodynamics (SPH) has been widely employed as a numerical tool in different areas of science, engineering, and more recently in the animation of fluids for computer graphics applications. Although SPH is still in the process of experiencing continual theoretical and technical developments, the method has been improved over the years to overcome some shortcomings and deficiencies. Its widespread success is due to its simplicity, ease of implementation, and robustness in modeling complex systems. However, despite recent progress in consolidating its theoretical foundations, a long-standing key aspect of SPH is related to the loss of particle consistency, which affects its accuracy and convergence properties. In this paper, an overview of the mathematical aspects of the SPH consistency is presented with a focus on the most recent developments.
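For readers unfamiliar with the interpolant that the consistency discussion refers to, here is a minimal 1D sketch in the standard textbook form, with the common cubic-spline kernel as an illustrative choice: a field is approximated at x as a kernel-weighted sum over particles, f(x) ≈ Σ_j (m_j/ρ_j) f_j W(x − x_j, h).

    # Minimal 1D SPH interpolation with the standard cubic-spline kernel
    # (1D normalization 2 / (3h)).
    import numpy as np

    def cubic_spline_W(r, h):
        q = np.abs(r) / h
        sigma = 2.0 / (3.0 * h)
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    def sph_interpolate(x, xj, fj, mj, rhoj, h):
        W = cubic_spline_W(x - xj, h)
        return np.sum(mj / rhoj * fj * W)

    # usage: approximate f(x) = x^2 at x = 0.5 from evenly spaced particles
    xj = np.linspace(0.0, 1.0, 21)
    dx = xj[1] - xj[0]
    fj = xj**2
    mj = np.full_like(xj, dx)            # unit density at spacing dx
    rhoj = np.ones_like(xj)
    print(sph_interpolate(0.5, xj, fj, mj, rhoj, h=2.0 * dx))   # close to 0.25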

Evaluation of the Results of Pedagogical Experiments and Tests of Development of Design Competencies of Future Engineers with Computer Graphics

Graphic Design: Understanding the Application of Computer Graphics and Image Processing Technology in Graphic Design to Improve the Employment Rate of College Graduates

Illumination Space: A Feature Space for Radiance Maps

From red sunsets to blue skies, the natural world contains breathtaking scenery with complex lighting which many computer graphics applications strive to emulate. Achieving such realism is a computationally challenging task and requires proficiency with rendering software. To aid in this process, radiance maps (RM) are a convenient storage structure for representing the real world. In this form, it can be used to realistically illuminate synthetic objects or for backdrop replacement in chroma key compositing. An artist can also freely change a RM to another that better matches their desired lighting or background conditions. This motivates the need for a large collection of RMs such that an artist has a range of environmental conditions to choose from. Due to the practicality of RMs, databases of RMs have continually grown since its inception. However, a comprehensive collection of RMs is not useful without a method for searching through the collection. This thesis defines a semantic feature space that allows an artist to interactively browse through databases of RMs, with applications for both lighting and backdrop replacement in mind. The set of features are automatically extracted from the RMs in an offline pre-processing step, and are queried in real-time for browsing. Illumination features are defined to concisely describe lighting properties of a RM, allowing an artist to find a RM to illuminate their target scene. Texture features are used to describe visual elements of a RM, allowing an artist to search the database for reflective or backdrop properties for their target scene. A combination of the two sets of features allows an artist to search for RMs with desirable illumination effects which match the background environment.

The Diffuseness of Illumination Suitable for Reproducing Object Surface Appearance Using Computer Graphics

The appearance of an object depends on its material, shape, and lighting. In particular, the diffuseness of the illumination has a significant effect on the appearance of material and surface texture. We investigated a diffuseness condition suitable for reproducing surface appearance using computer graphics. First, observers memorized the appearance and impression of objects by viewing pre-observation images rendered using various environment maps. Then they evaluated the appearance of the objects in test images rendered under different levels of diffuseness. As a result, moderate diffuseness conditions received a higher evaluation than low diffuseness conditions. This means that low or very high diffuseness, unfamiliar in daily life, is unsuitable for reproducing a faithful and ideal surface appearance. However, for certain materials it is difficult to memorize and evaluate the appearance. The results suggest that it is possible to define a diffuseness that adequately reproduces the appearance of an object using computer graphics.

The Pose-to-Pose Method for Creating the Islamic 3D Animation "Keutamaan Berbuka Puasa" (The Virtue of Breaking the Fast)

The development of technology in the field of computer graphics has made it easier to produce graphical works, one of which is 3D animation. In creating 3D animation there is a main problem that commonly challenges animators: movement that is rough or does not look realistic. Smooth and realistic-looking movement can be achieved through many methods, one of which is the pose-to-pose method. The Islamic 3D animation entitled Keutamaan Berbuka Puasa largely consists of movements demonstrating the proper and correct way to break the fast in order to obtain its virtues. The animation was created with the Blender software by applying the pose-to-pose method. As a result of this paper, the 3D animated film Keutamaan Berbuka Puasa is expected to be produced with good motion quality using the pose-to-pose method, and to provide good entertainment and education.
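A minimal sketch of pose-to-pose keyframing with Blender's Python API (illustrative; it assumes the script is run inside Blender, and the armature and bone names are assumptions, not taken from the paper): set a few key poses and let Blender interpolate the in-betweens.

    # Illustrative pose-to-pose keyframing in Blender: key a pose bone at a
    # few frames; Blender fills in the in-between frames.
    # Must be run inside Blender; "Armature" and "forearm" are assumed names.
    import math
    import bpy

    arm = bpy.data.objects["Armature"]
    bone = arm.pose.bones["forearm"]
    bone.rotation_mode = 'XYZ'

    key_poses = [(1, 0.0), (12, math.radians(60.0)), (24, 0.0)]  # (frame, x-rotation)
    for frame, angle in key_poses:
        bone.rotation_euler = (angle, 0.0, 0.0)
        bone.keyframe_insert(data_path="rotation_euler", frame=frame)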


19 October 2023 Jun-Yan Zhu named 2023 Packard Fellow

17 August 2022 CVPR 2022 Best Paper Honorable Mention

17 April 2021 Ioannis Gkioulekas Receives NSF CAREER Award

21 September 2020 CMU Graphics Papers at SIGGRAPH Asia 2020

The Carnegie Mellon Graphics Lab conducts cutting-edge research on computer graphics and computer vision, integrating insights from computer science, robotics, and mechanical engineering.

  • Jun-Yan Zhu Named 2023 Packard Fellow
  • CVPR 2022 Best Paper Honorable Mention
  • Ioannis Gkioulekas Receives NSF CAREER Award
  • CMU Graphics Papers at SIGGRAPH Asia 2020
  • SGP 2020 Best Paper Award
  • CMU Graphics at SIGGRAPH 2020
  • ICCP 2020 Best Paper Honorable Mention Award
  • Keenan Crane Receives NSF CAREER Award
  • Ioannis Gkioulekas Named Sloan Research Fellow
  • CVPR 2019 Best Paper Award
  • CMU Graphics at SIGGRAPH 2019
  • Katherine Ye Named MSR PhD Fellow
  • Jessica Hodgins Named ACM Fellow
  • Keenan Crane Named Packard Fellow
  • CMU Graphics at SIGGRAPH 2018
  • Matt O'Toole Joins CMU Graphics
  • Jim McCann Joins CMU Graphics Faculty
  • Jessica Hodgins Elected ACM SIGGRAPH President
  • CMU Graphics at SIGGRAPH 2017
  • Jessica Hodgins Receives Steven Coons Award
  • Graphics Lab Alums Win Tech Oscars
  • CMU Graphics Students Clean Up the Lab
  • Two CMU Graphics Students Win Fellowships
  • Eight New PhD Students Join CMU Graphics Group
  • CMU Graphics at SIGGRAPH 2015
  • Recent Press Highlights CMU Graphics "Wizardry"
  • Keenan Crane Joins Graphics Faculty
  • CMU Papers at SIGGRAPH 2014
  • Katayanagi Prize Winners Announced

Advantages and Disadvantages of Computer Graphics Essay

What are the advantages and disadvantages of computer graphics? The essay explains some benefits & drawbacks of computer graphics and gives numerous examples.

Introduction


Research and development in the manufacture of drugs, chemicals, automobiles, airplanes, industrial plants, buildings, and so on has led to the production of higher-quality products. Advanced computer animation techniques produce exceptionally high-quality movies and games. These have been made possible thanks to modern graphics systems, which give endless possibilities in the design and production of new products. According to Mraz, the era of using hand-drawn presentations for finalized designs is gone (Para. 3).

Graphics systems include the hardware and software systems used in the design, analysis, and graphical presentation of both real-life and theoretical phenomena.

Advantages of Computer Graphics

Graphical techniques offer more flexibility and options than traditional design methods. One can make changes and undo them without tampering with the whole design. It is also possible to view a model from different angles by rotating it along various axes. One can also perfect the minute details of a design by magnifying it to see them clearly.

Presenting images in three dimensions enables designers to illustrate the inner parts of the structures they design, bringing clarity to the structures they intend to build. Some graphical applications like Photoshop and Illustrator come with tutorials, which help inexperienced users solve any difficulties. They have a user-friendly interface, usually designed with diversified functions for simplicity.

Research and Product Development

Graphical representation software contributes much to research. Models can be presented in three dimensions, giving researchers a broader picture of how natural phenomena operate. In engineering, presenting models in a three-dimensional manner enables engineers to identify weaknesses in structures and areas of possible improvement. Computer-aided molecular modeling is used in computational chemistry to investigate molecular structures and properties using graphical visualization techniques.

The techniques are very useful in polymer and catalysis science in the discovery of new synthesis pathways. The results obtained help to predict molecular properties such as structural information, atomic radii, bond angles, and molecular motions. Computer-aided molecular design is highly applicable in pharmaceutical work for discovering, designing, and optimizing compounds with desired structures and properties used as components in drug formulations.

There are currently many applications for architectural work, which enable the easy creation and modification of designs. They are useful in "simplifying the analysis and construction of proposed designs" (Greenberg 105). Product improvement is made easier with graphic design software: modifications can be made by changing values in the design, producing different variations.

In product development, the traditional method is to produce samples and carry out tests on them, a process that is time consuming. Dorsey and McMillan point that, the availability of such technology “frees humans from tedious and mundane tasks” (Para. 4).

Computer-aided design, on the other hand, involves designing a graphical representation of a virtual model. Tests are then done on the model using special software. This saves not only time but also other resources that would have been used in testing the real structure, and hence reduces the cost of production.

Many graphical systems can run a combination of many functions at one time, reducing the procedures needed to carry out experiments. In such a scenario, graphical systems create a platform for creativity and innovation since "ideas frequently come more quickly than they can be recorded" (McKim 11).

One is able to put all of one's ideas into a model, carry out tests on the model using graphical applications, and then make any necessary changes. A common feature of these systems is the ability to multitask and carry out real-time research in scientific work (Klein 6). The system gives feedback that the user can respond to in order to steer the results in any desired direction.

Advertising is an important aspect of the business world. Customers respond to a product or service depending on how it is presented to them. Graphical techniques are applied to produce attractive adverts and billboards. Applications like CorelDraw and Photoshop are used to produce magnificent images for adverts. Graphics make adverts lively and more appealing to potential customers.

Disadvantages of Computer Graphics

A majority of complex graphical system applications require prior training before use. Some graphics applications are so complex that they need an expert to install them and customize the settings. Most companies that write graphics software target professionals, hence only experts in a particular field can utilize certain software. A good example is some Supervisory Control and Data Acquisition (SCADA) systems, whose graphical components are so complex that only trained individuals can use them (Bailey and Wright 10).

Limitations

Like all other computerized systems, graphical systems lack the intelligence to understand real-world conditions and principles, such as the purpose of the structure being designed. The designer has to figure out a way of obtaining the relevant results while maintaining the objective of the design process.

This means that the user needs not only to be an expert in the field under study, but also to be well acquainted with the software. It may take several months or even years to learn how to operate graphics software; some programmable logic controllers (PLCs) used in industrial plants take months of training to operate. Ultimately, the designer makes the decisions while the system makes the calculations.

The technology behind most computer-based graphics applications changes at a very high rate, which forces users to keep updating their software at considerable cost. The problem is compounded by the many graphics vendors flooding the market with products that are not compatible with one another. Graphics applications are not only expensive but also require machines with high specifications, and the higher the machine specifications, the higher the cost.

Generally, graphical systems reduce the time research work consumes and improve the quality and reliability of results. They perform tasks that would otherwise be impossible and reduce the workload in research and development. They do come with disadvantages in complexity, cost, and limitations, but their benefits outweigh these setbacks; they will therefore continue to advance and probably become more user-friendly.

Bailey, David, and Edwin Wright. Practical SCADA for Industry. Oxford: Elsevier, 2003. Print.

Dorsey, Julie, and Leonard McMillan. Computer Graphics and Architecture: State of the Art and Outlook for the Future, 1998. Web.

Greenberg, Donald. “Computers in Architecture.” Scientific American 264.2 (1991): 104-109. Print.

Klein, Mark. A Practitioner’s Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems . New York: Kluwer Academic Publishers, 1993. Print.

McKim, Robert. Experiences in Visual Thinking . Boston: PWS Publishers, 1980. Print.

Mraz, Stephen. Changes in the Engineering Profession 80 Years of Engineering, 2009. Web.



Kayvon Fatahalian, Stanford University

Addendum: I've added a few additional tips for conference papers chairs, papers committee members, and papers sorters.

"SIGGRAPH hates systems papers," I've heard frustrated researchers say.

During a SIGGRAPH PC meeting, I've heard a committee member disparagingly comment: "I don't believe there is novelty here, maybe it's a good systems paper."

And from a recent SIGGRAPH post-rebuttal discussion post: "This is clearly a systems paper rather than a research contribution."

This is not an issue of whether systems research has a place in the graphics community. (It does!) Rather, these comments suggest that both graphics papers authors and reviewers hold a misunderstanding of the intellectual value of graphics systems research. Understanding the key principles and values of systems research thinking is important to system designers who wish to communicate their findings and to reviewers evaluating those results. Improving this understanding will make the graphics community better equipped to evaluate and disseminate its valuable systems results.

With this article, I hope to contribute to our shared understanding of (graphics) systems research principles. I suspect that most computer graphics researchers, even those who do not explicitly aim to create new systems, can benefit from applying systems thinking principles to their work.

This article is not an attempt to provide a comprehensive guide to writing systems papers, for which there are many excellent takes. For example, I recommend Jennifer Widom's Tips for Writing Technical Papers [Widom 2006] or "How (and How Not) to Write a Good Systems Paper" [Levin and Redell 1983].

  • Am I convinced the work is based on a compelling set of goals and constraints?
  • What is the central insight or organizing principle being proposed?
  • What are the benefits the system provides its users? (Do I agree they are valuable?)
  • Can I think of alternative (e.g., simpler) solutions that might be a preferred way to meet the stated goals and constraints?
  • Am I convinced the design decisions made are responsible for successfully achieving the stated goals?
  • Does the system provide the community with a new capability that was not possible (or too difficult) to do before? What are the implications of that capability?
  • The Reyes paper [Cook 84] contributed the key ideas behind the rendering system that generated images for most feature films for over two decades.
  • RealityEngine Graphics [Akeley 93] described techniques that continue to underlie the modern GPU-accelerated graphics pipeline, which is now in every laptop and smartphone in the world (a $100B market). Gaming aside, today's rich user interfaces would not exist without GPU acceleration.
  • The programmable vertex processor [Lindholm 01] ultimately evolved into the general programmable cores in GPUs today. Evolutions of this architecture have accelerated applications in numerous fields beyond graphics (physics, molecular biology, medical imaging, computational finance to name a few). Modern programmable GPU cores are now the primary computation engines for deep learning and for many of the fastest supercomputers in the world.
  • The ideas in RenderMan Shading Language [Hanrahan 90] and Cg [Mark 04] shaped the design of modern shading languages, which persist in both online and offline graphics systems today.
  • Brook [Buck 04] was the direct precursor to CUDA, a language that became central to the popularization of parallel computing and is now used by a broad set of domains to program GPUs.
  • Phototourism [Snavely 06] provided a new way to organize and browse the growing corpus of online photo collections, and opened the door to new applications and products leveraging big visual data.
  • The Direct3D 10 System [Blythe 06] and OptiX [Parker 10] describe architectures that form the basis for nearly all GPU-accelerated real-time rasterization and ray tracing systems today (including recent ray tracing hardware in GPUs).
  • Ideas in the Frankencamera [Adams 10] can clearly be seen in current versions of Google's Android camera API, which is used as the hardware abstraction to program one of the most popular, and most advanced, cameras in the world.
  • Halide [Ragan-Kelley 12] is now used in production at Google to process most images taken on Android phones, as well as in Photoshop and by Instagram. While Halide was designed for graphics applications, its ideas have directly inspired emerging code generation systems for deep learning like TVM, which underlie frameworks like Apache MXNet.

In recent years we've seen papers at SIGGRAPH that describe the HDR photo processing pipeline in modern smartphone cameras [Hasinoff 16] [Wadhwa 18], practical VR video generation pipelines [Andersen 16], the design decisions behind the implementation of the widely used OpenVDB library [Museth 13], and that of systems for large-scale video processing [Poms 18]. SIGGRAPH 2016 had an entire technical papers session on domain specific languages (DSLs) for graphics and simulation, and in 2018 an entire special issue of TOG was dedicated to the design of modern production rendering systems.

Papers that focus on systems contributions are in the minority at SIGGRAPH, but they are frequently featured. Without question these efforts have carved out an important place in the graphics community, as well as in the broader tech world.

Ask researchers throughout computer graphics to name defining characteristics of their favorite papers. Regardless of area, I suspect the lists would be similar. Great papers teach us something we didn't know about our field: either they contain an idea that we had never thought about, make us think differently (or more clearly) about a concept we thought we knew, or introduce a hard-to-create artifact (or tool) that is useful to making future progress. We tend to call these "contributions", a term I've increasingly come to appreciate as it embodies what a good paper does for the field. For example:

The reader learned something new:

  • A new way to formalize a problem (e.g., applying new mathematical machinery to a task)
  • A better algorithm (faster, more stable, simpler, fewer parameters) or an approximation that works shockingly well.
  • A proof establishing a previously unknown property, or a relationship between two concepts.
  • Identification of primitives that reveal the structure of a problem domain.
  • Something that made the reader think "I didn't realize that was possible" or "I've never thought about this problem in that way."
  • A cool new application.

Perhaps less surprising, but generally useful to the progress of others:

  • An experiment that provides a baseline result for a new type of task.
  • A new dataset that enables the field to attack new problems.

As with all good research papers, the intellectual contribution of a systems paper is much more in the ideas and wisdom the paper contains, and much less about the specific artifact (the implementation) it describes.

However, those not accustomed to assessing systems research can struggle to identify its intellectual contributions because they are often not written down explicitly in the form of an algorithm or equation. Conversely, it is common for paper authors, having spent considerable time implementing and engineering a system (and maybe even deploying it with success to users), to erroneously think that simply describing the system's capabilities and implementation constitutes a "good" systems paper.

In some areas of computer graphics, it is common for problem inputs and outputs to be well defined. The challenge is that the problem itself is hard to solve! In contrast, articulating and defining the right problem to solve is a critical part of systems research --- often this is the hardest part. Good systems thinking requires an architect to internalize a complex set of (potentially conflicting) design goals, design constraints, and potential solution strategies. Therefore, in order to assess the contribution of a systems paper, it is essential to clearly understand these problem requirements.

Problem characterization requires a system architect to spend considerable time understanding an application domain and to develop knowledge of hardware, software, human factors, and algorithmic techniques in play. Therefore, the work put into clearly establishing and articulating system requirements is itself part of the intellectual contribution of a graphics systems paper.

In other words, good systems papers often put forth an argument of the flavor:

We are looking to achieve the goals A, B, C, under the constraints X, Y, Z. No system existed to do so when we set out on our efforts (otherwise we probably would have used it!), so we have distilled our experiences building systems in this area into a specific set of system requirements. These requirements may not be obvious to readers who have not spent considerable time thinking about or building applications in this space.

This setup is critical for several reasons:

  • It defines the requirements of a "good" solution by communicating the key concerns of the problem space. Since unexpected constraints and requirements reveal themselves only when real end-to-end systems are built, good systems papers perform this legwork for the community.
  • It provides a framework for assessing the quality of the system design decisions made by the authors. Evaluating the quality of the proposed solution involves asking the question: are the design decisions made by the authors the reason why the system achieved the stated goals?
  • It provides context that leads to more generalizable knowledge. Since readers likely do not have the same goals and constraints as a paper's authors, understanding the author's goals and constraints helps readers understand which design decisions are applicable to their own problems, and which aspects of the proposed system might need to be changed or ignored.

Let's take a look at some examples.

Example 1: Section 1 of the Reyes paper dedicated an entire column of text to Pixar's need to handle unlimited scene complexity, which they defined in terms of scene geometry and texture data. The column also established global lighting effects as an important non-goal, since the authors' experiences at Pixar suggested that global effects could often be reasonably approximated with local computations using texture mapping.

(Figure: Reyes paper goals.)

Example 2: A decade later Bill Mark articulated a complex set of design goals for Cg. To improve developer productivity, there was a need to raise the level of abstraction for programming early GPU hardware. Doing so was challenging because developers expected the performance transparency of low-level abstractions (the reason to use GPUs was performance!) and GPU vendors wanted flexibility to rapidly evolve GPU hardware implementations. These goals were quite different from those facing the creators of prior shading languages for offline renderers and were ultimately addressed with an approach that was basically "C for graphics hardware", instead of a graphics-specific shading language like RSL.

(Figure: Cg paper goals.)

Example 3: Google's recent HDR+ paper expresses the system requirements as a set of guiding principles for determining which algorithms were candidates for inclusion in a modern smartphone camera application: run fast enough to provide immediate feedback, do not require human tweaking, and never produce output that looks worse than a traditional photo.

(Figure: HDR+ paper goals.)

Example 4: In Yong He's recent work on the Slang shader programming language, problem requirements were established via a detailed background section (Section 2) that describes how the goal of maintaining modular shader code was at odds with the goal of writing high performance shaders in modern game engines. This section served to articulate goals and also discuss design alternatives.

Note to paper reviewers: When serving on the SIGGRAPH PC, I have observed reviewers request that exposition about problem characterization be shortened on the basis that the text did not directly describe the proposed system. While all paper writing should strive to be appropriately concise, these suggestions failed to recognize the technical value of accurately characterizing the problem to be solved. This feedback essentially asked the authors to remove exposition concerning the key intellectual contribution to make room for additional system implementation detail.

Given stated goals and constraints, a systems paper will often propose a formulation of a problem that facilitates meeting these requirements. In other words, many systems papers make the argument:

It is useful to think about the problem in terms of these structures (e.g., using these abstractions, these representations, or by factoring the problem into these modules), because when one does, there are compelling benefits.

Benefits might take the form of: improved system performance (or scaling), enhanced programmer productivity, greater system extensibility/generality, or the ability to provide new application-level capabilities that have not been possible before.

Identifying useful problem structure often forms the central intellectual idea of a systems paper. As in other areas of computer graphics, elegant conceptual insights can be summarized in a few sentences. For example:

  • Reyes Rendering System : the micropolygon is a simple unifying representation that serves as a common interface between many surface types and many surface shading techniques. Breaking surfaces into micropolygons simultaneously meets the goals of supporting arbitrary geometric complexity (because complex surfaces can always be broken into micropolygons) and avoiding aliasing artifacts.
  • RenderMan Shading Language : given the diversity of materials and lights in scenes, it is desirable to define an interface for extending the capabilities of a renderer by providing a programming language for expressing these computations. For productivity and performance, that programming language should provide high-level abstractions that are aligned with the terms of the rendering equation.
  • Cg : a programming language for emerging programmable GPUs should not be a domain-specific shading language, rather it should be a relatively general-purpose language that is designed to facilitate performance transparency when targeting GPUs.
  • Frankencamera : modern camera hardware interfaces are incompatible with the needs of multi-shot photography, but a simple timeline abstraction is sufficient for describing the behavior of a camera for multi-shot sequences.
  • Halide : compositions of six simple scheduling directives (split, reorder, compute_at, store_at, vectorize, parallelize) are sufficient to capture the major "high-level" code optimization decisions for a wide range of modern image processing applications (see the sketch after this list).
  • Slang : the ad hoc strategies used by modern games to achieve shader code modularity and code specialization (preprocessor hacking, string pasting, and custom DSLs) can be expressed much more elegantly if HLSL was extended with a small set of well-known features from popular modern programming languages.
  • Shader components : in order to simultaneously achieve the benefits of code specialization and modularity when authoring a shader library, it is necessary to align code decomposition boundaries for code specialization with those for CPU-GPU parameter passing.
  • Ebb : relational algebra abstractions can be used to express a variety of physical simulation algorithms in a representation-agnostic manner.
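To make the flavor of those scheduling directives concrete, here is a minimal, hedged sketch assuming a recent Halide release and its C++ API; the synthetic input and the particular split/vector sizes are illustrative choices, not taken from the Halide paper.

```cpp
// A minimal sketch (assuming a recent Halide release and its C++ API) of how
// a few scheduling directives separate what is computed from how it runs.
#include "Halide.h"
using namespace Halide;

int main() {
    Var x("x"), y("y"), yi("yi");
    Func input("input"), blur_x("blur_x"), blur_y("blur_y");

    // Synthetic stand-in input so the example is self-contained.
    input(x, y) = cast<uint16_t>(x + y);

    // Algorithm: a 3x3 box blur written as two separable passes.
    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // Schedule: split, parallelize, and vectorize the consumer, and compute
    // the producer at the granularity of the consumer's outer loop.
    blur_y.split(y, y, yi, 32).parallel(y).vectorize(x, 8);
    blur_x.compute_at(blur_y, y).vectorize(x, 8);

    Buffer<uint16_t> out = blur_y.realize({512, 512});
    return 0;
}
```

The point of the sketch is the separation the paper argues for: the two function definitions state what is computed, while the two schedule lines state how it is executed, and either half can be changed without touching the other.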

Note to paper reviewers: In many of the above examples, once an organizing principle for the system is identified, the details of the solution are quite simple. For example, consider the Frankencamera [Adams 2010] . A timeline is certainly a well-known abstraction for describing sequences of events, but it would be erroneous to judge the Frankencamera paper in terms of the novelty of this abstraction. The contribution of the paper was the observation that circa-2010 camera hardware interfaces were misaligned with the requirements of computational photography algorithms of the time, and that aligning the two could be reasonably accomplished using a timeline-like abstraction backed by a "best-effort" system implementation. A common error when judging the magnitude of systems contributions is to focus on the novelty or sophistication of individual pieces of the solution. This is a methods-centric view of contributions. Instead, assessment should focus on identifying whether unique perspective or insight enabled simple solutions to be possible. The sophistication of the decision process used to select the right method (or make tweaks to it) can be more important than the sophistication of the methods ultimately used.

Given a set of requirements, a systems architect is usually faced with a variety of solution strategies. For example, a performance requirement could be solved through algorithmic innovation, through the design of new specialized hardware, or both (modifying an existing algorithm to better map to existing parallel hardware). Alternatively, the path to better performance might best go through narrowing system scope to a smaller domain of tasks. A productivity goal might be achieved through the design of new programming abstractions, which might be realized as a new domain specific language, or via a library implemented in an existing system.

As a result, a systems paper author must identify the key choices made in architecting their system, and elaborate on the rationale for these design decisions. Doing so typically involves discussing the palette of potential alternatives, and providing an argument for why the choices made are a preferred way to meet the stated requirements and design goals. It is not sufficient to merely describe the path that was taken without saying why it was deemed a good one.

Discussion of key design decisions provides wisdom and guidance for future system implementors. By understanding the rationale for a system designer's choices, a reader will be able to better determine which decisions made in the paper may be applicable to their own requirements.

While reflecting on their design decisions, researchers should consider the following:

Differentiate key decisions from implementation details. A systems architect should clearly indicate which decisions they consider to be carefully thought out decisions that are central to the system's design (contributions of the paper they want to be given credit for) and which decisions were made "just to get something working". When less critical decisions need to be mentioned for completeness of exposition, it is useful to clarify that "algorithm X was used, but the decision was not deemed fundamental and many other equally good options are likely possible."

Identify cross-cutting issues. Many important design decisions are informed by considering cross-cutting issues that only reveal themselves when building an end-to-end system. For example, if only a single design goal or single aspect of the system was considered, there might be multiple viable solutions. However, it is often the case that a system architect's design decisions are motivated by an end-to-end view of the problem. For example:

  • Algorithm X might produce lower quality output than algorithm Y, but X might be faster and simpler, and the errors produced by X might be acceptable because they are covered for by the operation of a later processing stage.
  • Running a more expensive algorithm in stage 1 of a system might be preferable, because it generates more uniform outputs that lend themselves to better parallelization in stage 2.
  • A particular global optimization might be possible in a certain processing stage, but that optimization would prevent composition of the stage with other modules of the system, limiting system extensibility.

Discussion of cross-cutting issues is an important aspect of systems thinking (it is less common in methods-centric research). Cross-cutting and end-to-end issues are often the reason why more sophisticated techniques from the research community may be less desirable for use in effective systems. Often, new methods are (wisely) developed under simplifying assumptions that help facilitate exploration of new techniques. Systems thinking must consider the complete picture of the context in which techniques are used in practice.

Note to paper authors: Failure to describe (and subsequently evaluate) design decisions is the most common pitfall in systems paper writing. I have observed submissions describe intriguing systems, but be justly rejected because the exposition did not reflect on what had been done and why. These papers failed to provide general systems-building wisdom for the community and read more like enumerations of features or system documentation.

If a paper clearly describes a system's goals and constraints, as well as articulates key system design decisions, then the strategy for evaluating the system is to provide evidence that the described decisions were responsible for meeting the stated goals.

Particularly when a system's evaluation focuses on performance, it is tempting to compare the proposed system's end-to-end performance against that of competing alternative systems. While such an evaluation demonstrates that performance goals were met, it is equally (and sometimes more) important to conduct experiments that specifically assess the benefit of key optimizations and design decisions. Evaluation of why success was achieved is necessary to verify that the central claims of the paper are sound. Failing to perform this evaluation leaves open the possibility that the success of the system is due to factors other than the proposed key ideas (e.g., high-quality software engineering).

When assessing the merit of a systems paper, it is important for reviewers to consider the extent to which the system introduces new capability to the field, and what the implications of these new capabilities are. While new interaction techniques, such as the two examples cited above, are often easier to identify as providing new capabilities, it can also be the case that dramatic improvements in performance, scale of operation, or programmer productivity transport the field into a "new capability" regime. For example, the ability to provide real-time performance for the first time (enabling interactive applications or human-in-the-loop experiences), or the ability to write applications that leverage increasingly large image databases, could be considered new capabilities if it was previously difficult or impossible for programmers to attempt these tasks.

In these cases, reviewers should be mindful about the value of extensive quantitative comparison to prior systems or methods because prior systems that meet the stated constraints may not exist. Similarly, "user studies" might be less valuable than understanding the extent to which a system allowed its authors (who might be practicing experts in a domain) to perform tasks that had never been performed by the community before. When apples-to-apples evaluations are not realistic to provide, responsibility lies with paper authors to make a clear argument for why the provided evaluation is sufficient to lend scientific credibility to the proposed ideas, and with reviewers to carefully consider the implications of the proposed work. Requests for lengthy numerical evaluation should not be used as a substitute for author/reviewer thought and judgment.

I hope this article has highlighted the depth of thought required for good systems research and good systems paper writing. Architecting good systems is a challenging, thoughtful task that involves understanding a complex set of factors, balancing conflicting goals and constraints, assessing a myriad of potential solutions to produce a single working system, and measuring the effects of these ideas.

The approach "we have a result, now just write it up" rarely turns out well when writing a systems paper. Since there is typically not a new proof, equation, or algorithm pseudocode to point to as an explicitly identifiable contribution, the intellectual value in systems work is conveyed through careful exposition that documents wisdom gained from the design process. Personally, I find the act of writing to be a valuable mechanism to achieve clarity about what my work has accomplished. As I attempt to make a case for a system's design, more alternatives and evaluation questions come to mind. (Are we sure we can't take this feature out and get the same result? How do we know we really need this?)

On the flip side, reviewing a systems paper requires considerable thought and judgment. The reviewer must assess their level of agreement with the stated goals, requirements, and design decisions. They must measure the value of the services afforded by these design decisions and consider their utility and significance to users. Last, they must determine if there is evidence that the proposed decisions were actually responsible for the outcomes. Since the true test of good systems work lies in whether the ideas achieve adoption by the broader community over time, a reviewer must employ their own taste and experience to predict the likelihood this will occur, and to weigh the amount of wisdom they have gained from the paper.

I wish everyone good luck with future graphics systems work!

Acknowledgments: Thanks to Andrew Adams, Maneesh Agrawala, Fredo Durand, Bill Mark, Morgan McGuire, Jonathan Ragan-Kelley, Matt Pharr, Peter-Pike Sloan, and Jean Yang for helpful feedback.

Math for Computer Graphics

Greg Turk, August 2019.

  • Modeling - creating 3D shape descriptions of objects
  • Animation - making objects move
  • Image Synthesis, also called Rendering - making pictures from 3D shapes
  • Image and Video Manipulation

Mathematical Basics: Linear Algebra and Trigonometry

  • Multivariable Calculus
  • Differential Geometry
  • Computational Geometry
  • Numerical Linear Algebra
  • Optimization
  • Partial Differential Equations
  • Ordinary Differential Equations
  • Signal Processing
  • Monte Carlo Integration Methods
  • The Rise of Machine Learning
  • Off the Beaten Path
  • Abstract Algebra
  • Number Theory





Computer Graphics Essay


Computer graphics are graphics created using computers and, more generally, the representation and manipulation of image data by a computer. The development of computer graphics, often simply referred to as CG, has made computers easier to interact with, and better for understanding and interpreting many types of data.

Developments in computer graphics have had a profound impact on many types of media and have revolutionized the animation and video game industry.

Overview

The term computer graphics has been used in a broad sense to describe “almost everything on computers that is not text or sound”. Typically, the term refers to several different things:

  • the representation and manipulation of image data by a computer
  • the various technologies used to create and manipulate images
  • the images so produced
  • the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content

Today, computers and computer-generated images touch many aspects of our daily life. Computer imagery is found on television, in newspapers (for example in weather reports), and in all kinds of medical investigation and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret.

In the media, “such graphs are used to illustrate papers, reports, theses”, and other presentation material.

History

The advance in computer graphics was to come from one MIT student, Ivan Sutherland. In 1961 Sutherland created a computer drawing program called Sketchpad. Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them, and even recall them later. The light pen itself had a small photoelectric cell in its tip. This cell emitted an electronic pulse whenever it was placed in front of a computer screen and the screen’s electron gun fired directly at it.

By simply timing the electronic pulse with the current location of the electron gun, it was easy to pinpoint exactly where the pen was on the screen at any given moment. Once that was determined, the computer could then draw a cursor at that location.

Image types

2D computer graphics

2D computer graphics are the computer-based generation of digital images—mostly from two-dimensional models, such as 2D geometric models, text, and digital images—and by techniques specific to them. The word may stand for the branch of computer science that comprises such techniques, or for the models themselves. 2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, and advertising. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography.

Pixel art

Pixel art is a form of digital art, created through the use of raster graphics software, where images are edited on the pixel level. Graphics in most old (or relatively limited) computer and video games, graphing calculator games, and many mobile phone games are mostly pixel art.

Vector graphics

Vector graphics formats are complementary to raster graphics, which is the representation of images as an array of pixels, as is typically used for the representation of photographic images.

There are instances when working with vector tools and formats is best practice, and instances when working with raster tools and formats is best practice. There are times when both formats come together. An understanding of the advantages and limitations of each technology and the relationship between them is most likely to result in efficient and effective use of tools.

3D computer graphics

3D computer graphics, in contrast to 2D computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images.

Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D may use 2D rendering techniques. 3D computer graphics are often referred to as 3D models.

Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences. A 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is visually displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.

Computer animation

Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (Computer-generated imagery or computer-generated imaging), especially when used in films.

Virtual entities may contain and be controlled by assorted attributes, such as transform values (location, orientation, scale; see Cartesian coordinate system) stored in an object’s transformation matrix. Animation is the change of an attribute over time. Multiple methods of achieving animation exist; the rudimentary form is based on the creation and editing of keyframes, each storing a value at a given time, per attribute to be animated. The 2D/3D graphics software will interpolate between keyframes, creating an editable curve of a value mapped over time, resulting in animation.
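As a rough illustration of the keyframe interpolation just described, the following minimal C++ sketch linearly interpolates a single animated attribute between the two keyframes that bracket a given time. The Keyframe type and sample function are illustrative names, not taken from any particular animation package.

```cpp
// Minimal sketch of keyframe interpolation for one animated attribute.
#include <iostream>
#include <vector>

struct Keyframe {
    double time;   // time at which the value is specified
    double value;  // value of the animated attribute (e.g., x-translation)
};

// Linearly interpolate between the two keyframes that bracket time t.
// Keyframes are assumed to be sorted by time.
double sample(const std::vector<Keyframe>& keys, double t) {
    if (t <= keys.front().time) return keys.front().value;
    if (t >= keys.back().time) return keys.back().value;
    for (std::size_t i = 1; i < keys.size(); ++i) {
        if (t <= keys[i].time) {
            double u = (t - keys[i - 1].time) / (keys[i].time - keys[i - 1].time);
            return (1.0 - u) * keys[i - 1].value + u * keys[i].value;
        }
    }
    return keys.back().value;  // not reached; kept for completeness
}

int main() {
    std::vector<Keyframe> xTranslation = {{0.0, 0.0}, {1.0, 10.0}, {3.0, 4.0}};
    for (double t = 0.0; t <= 3.0; t += 0.5)
        std::cout << "t=" << t << "  x=" << sample(xTranslation, t) << "\n";
}
```

Production systems typically replace the linear blend with an editable curve (e.g., a spline) per attribute, but the bracketing-and-blending structure is the same.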

Other methods of animation include procedural and expression-based techniques: the former consolidates related elements of animated entities into sets of attributes, useful for creating particle effects and crowd simulations; the latter allows an evaluated result returned from a user-defined logical expression, coupled with mathematics, to automate animation in a predictable way (convenient for controlling bone behavior beyond what a hierarchy offers in a skeletal system set-up).

Concepts and Principles

Image

An image or picture is an artifact that resembles a physical object or person.

The term includes two-dimensional objects like photographs and sometimes includes three-dimensional representations. Images are captured by optical devices—such as cameras, mirrors, lenses, telescopes, and microscopes—and by natural objects and phenomena, such as the human eye or water surfaces. A digital image is a representation of a two-dimensional image in binary format as a sequence of ones and zeros. Digital images include both vector images and raster images, but raster images are more commonly used.

Pixel

In digital imaging, a pixel (or picture element) is a single point in a raster image. Pixels are normally arranged in a regular 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel typically has three components such as red, green, and blue.

Graphics

Graphics are visual presentations on some surface, such as a wall, canvas, computer screen, paper, or stone, to brand, inform, illustrate, or entertain.

Examples are photographs, drawings, line art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, or other images. Graphics often combine text, illustration, and color. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, web site, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely the creation of a distinctive style.

Rendering

Rendering is the process of generating an image from a model by means of computer programs. The model is a description of three-dimensional objects in a strictly defined language or data structure. It would contain geometry, viewpoint, texture, lighting, and shading information. The image is a digital image or raster graphics image. The term may be by analogy with an “artist’s rendering” of a scene. “Rendering” is also used to describe the process of calculating effects in a video editing file to produce the final video output.

3D projection

3D projection is a method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering, and drafting.

Ray tracing

Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane. The technique is capable of producing a very high degree of photorealism, usually higher than that of typical scanline rendering methods.
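As a concrete illustration of the 3D projection described above, here is a minimal C++ sketch of perspective projection of a single point onto an image plane, assuming a pinhole camera at the origin looking down the negative z-axis; conventions and sign choices vary between systems, and the Vec3/Vec2 types are illustrative only.

```cpp
// Minimal sketch of perspective projection: map a 3D point onto a 2D image
// plane a focal distance f in front of a pinhole camera at the origin.
#include <iostream>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

Vec2 project(const Vec3& p, double f) {
    // Similar triangles: image-plane coordinates are scaled by f / -z
    // (the camera looks down -z, so visible points have negative z).
    return Vec2{f * p.x / -p.z, f * p.y / -p.z};
}

int main() {
    Vec3 p{1.0, 2.0, -4.0};     // a point 4 units in front of the camera
    Vec2 q = project(p, 1.0);   // focal length 1
    std::cout << q.x << ", " << q.y << "\n";  // prints 0.25, 0.5
}
```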

Texture mapping

Texture mapping is a method for adding detail, surface texture, or colour to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in 1974. A texture map is applied (mapped) to the surface of a shape, or polygon. This process is akin to applying patterned paper to a plain white box. Multitexturing is the use of more than one texture at a time on a polygon. Procedural textures (created by adjusting parameters of an underlying algorithm that produces an output texture) and bitmap textures (created in an image editing application) are, generally speaking, common methods of implementing texture definition from a 3D animation program, while intended placement of textures onto a model’s surface often requires a technique known as UV mapping.

Volume rendering

(Figure: volume-rendered CT scan of a forearm with different colour schemes for muscle, fat, bone, and blood.)

Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel.

3D modeling

3D modeling is the process of developing a mathematical, wireframe representation of any three-dimensional object, called a “3D model”, via specialized software. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D models may be created using multiple approaches: use of NURBS curves to generate accurate and smooth surface patches, polygonal mesh modeling (manipulation of faceted geometry), or polygonal mesh subdivision (advanced tessellation of polygons, resulting in smooth surfaces similar to NURBS models). A 3D model can be displayed as a two-dimensional image through a process called 3D rendering, used in a computer simulation of physical phenomena, or animated directly for other purposes. The model can also be physically created using 3D printing devices.
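The faceted geometry manipulated in polygonal mesh modeling is commonly stored as an indexed list of vertices and faces. The following minimal C++ sketch shows one such layout; the TriangleMesh type and makeQuad helper are illustrative names, not taken from any particular modeling package.

```cpp
// Minimal sketch of an indexed triangle mesh: shared vertex positions plus
// faces that refer to vertices by index, so shared edges store no duplicates.
#include <array>
#include <vector>

struct Vec3 { double x, y, z; };

struct TriangleMesh {
    std::vector<Vec3> vertices;             // shared vertex positions
    std::vector<std::array<int, 3>> faces;  // each face indexes three vertices
};

// A unit quad built from two triangles that share the edge between
// vertices 0 and 2.
TriangleMesh makeQuad() {
    TriangleMesh m;
    m.vertices = {{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}};
    m.faces = {{0, 1, 2}, {0, 2, 3}};
    return m;
}

int main() {
    TriangleMesh quad = makeQuad();
    return static_cast<int>(quad.faces.size());  // 2 triangles
}
```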

Pioneers in graphic design

Charles Csuri

Charles Csuri is a pioneer in computer animation and digital fine art and created the first computer art in 1964. Csuri was recognized by the Smithsonian as the father of digital art and computer animation, and as a pioneer of computer animation by the Museum of Modern Art (MoMA) and ACM SIGGRAPH.

Donald P. Greenberg

Donald P. Greenberg is a leading innovator in computer graphics. Greenberg has authored hundreds of articles and served as a teacher and mentor to many prominent computer graphic artists, animators, and researchers such as Robert L. Cook, Marc Levoy, and Wayne Lytle.

Many of his former students have won Academy Awards for technical achievements and several have won the SIGGRAPH Achievement Award. Greenberg was the founding director of the NSF Center for Computer Graphics and Scientific Visualization.

A. Michael Noll

Noll was one of the first researchers to use a digital computer to create artistic patterns and to formalize the use of random processes in the creation of visual arts. He began creating digital computer art in 1962, making him one of the earliest digital computer artists.

In 1965, Noll, Frieder Nake, and Georg Nees were the first to publicly exhibit their computer art. During April 1965, the Howard Wise Gallery exhibited Noll’s computer art along with random-dot patterns by Bela Julesz.

Study of computer graphics

The study of computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

As an academic discipline, computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.

Applications

Computational biology

Computational biology is an interdisciplinary field that applies the techniques of computer science, applied mathematics and statistics to address biological problems.

The main focus lies on developing mathematical modeling and computational simulation techniques. By these means it addresses scientific research topics and their theoretical and experimental questions without a laboratory. It encompasses the following fields:

  • Computational biomodeling, a field concerned with building computer models of biological systems.
  • Bioinformatics, which applies algorithms and statistical techniques to the interpretation, classification and understanding of biological datasets. These typically consist of large numbers of DNA, RNA, or protein sequences.

Sequence alignment is used to assemble the datasets for analysis. Comparisons of homologous sequences, gene finding, and prediction of gene expression are the most common techniques used on assembled datasets; however, analysis of such datasets has many applications throughout all fields of biology.

  • Mathematical biology aims at the mathematical representation, treatment and modeling of biological processes, using a variety of applied mathematical techniques and tools.
  • Computational genomics, a field within genomics which studies the genomes of cells and organisms.

High-throughput genome sequencing produces lots of data, which requires extensive post-processing (genome assembly) and uses DNA microarray technologies to perform statistical analyses on the genes expressed in individual cell types. This can help find genes of interest for certain diseases or conditions. This field also studies the mathematical foundations of sequencing.

  • Molecular modeling, which consists of modelling the behaviour of molecules of biological importance.
  • Protein structure prediction and structural genomics, which attempt to systematically produce accurate structural models of three-dimensional protein structures that have not been determined experimentally.

Computational physics

Computational physics is the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. It is often regarded as a subdiscipline of theoretical physics, but some consider it an intermediate branch between theoretical and experimental physics. Physicists often have a very precise mathematical theory describing how a system will behave. Unfortunately, it is often the case that solving the theory's equations ab initio in order to produce a useful prediction is not practical.

This is especially true with quantum mechanics, where only a handful of simple models have complete analytic solutions. In cases where the systems only have numerical solutions, computational methods are used.

Computer-aided design

Computer-aided design (CAD) is the use of computer technology for the design of objects, real or virtual. CAD often involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must often also convey symbolic information such as materials, processes, dimensions, and tolerances, according to application-specific conventions.

CAD may be used to design curves and figures in two-dimensional (“2D”) space; or curves, surfaces, and solids in three-dimensional (“3D”) space. CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design, prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals.

The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.

Computer simulation

A computer simulation, a computer model, or a computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system.

Computer simulations have become a useful part of the mathematical modeling of many natural systems in physics (computational physics), astrophysics, chemistry and biology, of human systems in economics, psychology, and social science, and in the process of engineering new technology, to gain insight into the operation of those systems.

The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program; a 1-billion-atom model of material deformation (2002); a 2.4-million-atom model of the complex maker of protein in all organisms, a ribosome, in 2005; and the Blue Brain project at EPFL (Switzerland), begun in May 2005, to create the first computer simulation of the entire human brain, right down to the molecular level.

Digital art

Digital art is an umbrella term for a range of artistic works and practices that utilize digital technology. Since the 1970s various names have been used to describe what is now called digital art, including computer art and multimedia art, but digital art is itself placed under the larger umbrella term new media art.

The impact of digital technology has transformed traditional activities such as painting, drawing and sculpture, while new forms, such as net art, digital installation art, and virtual reality, have become recognized artistic practices. More generally, the term digital artist is used to describe an artist who makes use of digital technologies in the production of art. In an expanded sense, “digital art” is a term applied to contemporary art that uses the methods of mass production or digital media.

Education

Education in the broadest sense is any act or experience that has a formative effect on the mind, character or physical ability of an individual. In its technical sense, education is the process by which society deliberately transmits its accumulated knowledge, skills and values from one generation to another. Etymologically, the word education contains educare (Latin) “bring up”, which is related to educere “bring out”, “bring forth what is within”, “bring out potential” and ducere “to lead”.

Teachers in educational institutions direct the education of students and might draw on many subjects, including reading, writing, mathematics, science and history. This process is sometimes called schooling when referring to the teaching of only a certain subject, usually as professors at institutions of higher learning. There is also education in fields for those who want specific vocational skills, such as those required to be a pilot. In addition there is an array of education possible at the informal level, such as in museums and libraries, with the Internet and in life experience.

Many non-traditional education options are now available and continue to evolve.

Graphic design

The term graphic design can refer to a number of artistic and professional disciplines which focus on visual communication and presentation. Various methods are used to create and combine symbols, images and/or words to create a visual representation of ideas and messages. A graphic designer may use typography, visual arts and page layout techniques to produce the final result. Graphic design often refers to both the process (designing) by which the communication is created and the products (designs) which are generated.

Common uses of graphic design include magazines, advertisements, and product packaging. For example, a product package might include a logo or other artwork, organized text, and pure design elements such as shapes and color which unify the piece. Composition is one of the most important features of graphic design, especially when using pre-existing materials or diverse elements.

Infographics

Information graphics, or infographics, are visual representations of information, data, or knowledge.

These graphics are used where complex information needs to be explained quickly and clearly, such as in signs, maps, journalism, technical writing, and education. They are also used extensively as tools by computer scientists, mathematicians, and statisticians to ease the process of developing and communicating conceptual information.

Information visualization

Information visualization is the interdisciplinary study of “the visual representation of large-scale collections of non-numerical information, such as files and lines of code in software systems, library and bibliographic databases, networks of relations on the internet, and so forth”.
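
As a toy illustration of turning data into an information graphic (an assumed example, not part of the essay), the sketch below draws a simple labelled bar chart with the widely used matplotlib library; the data values are invented.

```python
# Toy information-graphic sketch: a labelled bar chart made with matplotlib.
# The data values below are invented purely for illustration.
import matplotlib.pyplot as plt

media = ["Signs", "Maps", "Journalism", "Education"]
usage = [12, 30, 25, 18]  # hypothetical counts of infographic examples

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(media, usage, color="steelblue")
ax.set_ylabel("Examples (hypothetical)")
ax.set_title("Where infographics are commonly used")
fig.tight_layout()
fig.savefig("infographic_usage.png")  # write the chart to an image file
```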

Drug design

Drug design, also sometimes referred to as rational drug design, is the inventive process of finding new medications based on knowledge of the biological target. The drug is most commonly an organic small molecule which activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves designing small molecules that are complementary in shape and charge to the biomolecular target with which they interact and will therefore bind to it.

Drug design frequently, but not necessarily, relies on computer modeling techniques. This type of modeling is often referred to as computer-aided drug design. The phrase “drug design” is to some extent a misnomer; what is really meant is ligand design. Modeling techniques for the prediction of binding affinity are reasonably successful.

Scientific visualization

Scientific visualization (also spelled scientific visualisation) is an interdisciplinary branch of science, according to Friendly (2008) “primarily concerned with the visualization of three-dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component”.

Video game

A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a video device. The word video in video game traditionally referred to a raster display device. However, with the popular use of the term “video game”, it now implies any type of display device.

The electronic systems used to play video games are known as platforms; examples of these are personal computers and video game consoles. These platforms range from large mainframe computers to small handheld devices. Specialized video games such as arcade games, while previously common, have gradually declined in use. The input device used to manipulate video games is called a game controller, and it varies across platforms.
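
The interaction-and-feedback cycle described above can be made concrete with a minimal game-loop sketch (an illustrative assumption, not part of the essay): the program repeatedly reads input, updates the game state, and redraws the display.

```python
# Minimal game-loop sketch: read input, update state, render feedback.
# Text-based on purpose so it runs anywhere; a real game would poll a
# game controller or keyboard and redraw a video display every frame.

position = 0          # the entire "game state" here is one number
running = True

while running:
    command = input("move [l/r/q]: ")              # 1. read player input
    if command == "l":
        position -= 1                              # 2. update the game state
    elif command == "r":
        position += 1
    elif command == "q":
        running = False
    track = ["."] * 21
    track[max(0, min(20, position + 10))] = "@"    # clamp the player onto a 21-cell track
    print("".join(track))                          # 3. render visual feedback
```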

Virtual reality

Virtual reality (VR) is a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications.

Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and an omnidirectional treadmill.

Web design

Web design is the skill of creating presentations of content (usually hypertext or hypermedia) that is delivered to an end-user through the World Wide Web, by way of a Web browser or other Web-enabled software such as Internet television clients, microblogging clients, and RSS readers.

The intent of web design is to create a web site: a collection of electronic files residing on one or more web servers that present content and interactive features or interfaces to the end user in the form of Web pages once requested. Elements such as text, bit-mapped images (GIFs, JPEGs), and forms can be placed on the page using HTML/XHTML/XML tags. Web pages typically combine several technologies, as in the sketch after this list:

  • Markup languages (such as HTML, XHTML and XML)
  • Style sheet languages (such as CSS and XSL)
  • Client-side scripting (such as JavaScript)
  • Server-side scripting (such as PHP and ASP)
  • Database technologies (such as MySQL and PostgreSQL)
  • Multimedia technologies (such as Flash and Silverlight)
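
To make the stack above concrete, here is a minimal, self-contained sketch (an assumption for illustration; the essay names PHP and ASP for the server side, but Python's standard-library HTTP server is used here as a stand-in) that serves one web page combining markup, an embedded style sheet, and a small piece of client-side script.

```python
# Minimal web-serving sketch: one page combining HTML markup, CSS, and
# client-side JavaScript, served by Python's standard-library HTTP server
# (used here as a stand-in for server-side scripting such as PHP or ASP).
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!DOCTYPE html>
<html>
  <head>
    <style>body { font-family: sans-serif; } h1 { color: steelblue; }</style>
  </head>
  <body>
    <h1>Hello, web design</h1>
    <button onclick="document.body.append(' clicked!')">Click me</button>
  </body>
</html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request gets the same static page back.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()  # visit http://localhost:8000
```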

References

  1. “What is Computer Graphics?”, Cornell University Program of Computer Graphics. Last updated 04/15/98. Accessed Nov 17, 2009.
  2. University of Leeds ISS (2002). “What are computer graphics?”. Last updated 22 Sep 2008.
  3. Michael Friendly (2008). “Milestones in the history of thematic cartography, statistical graphics, and data visualization”.
  4. Ira Greenberg (2007). Processing: Creative Coding and Computational Art. Apress. ISBN 159059617X. http://books.google.com/books?id=WTl_7H5HUZAC&pg=PA115&dq=raster+vector+graphics+photographic&lr=&as_brr=0&ei=llOVR5LKCJL0iwGZ8-ywBw&sig=YEjfPOYSUDIf1CUbL5S5Jbzs7M8
