{"id":1760,"date":"2024-04-08T19:25:20","date_gmt":"2024-04-08T19:25:20","guid":{"rendered":"https:\/\/sibgrapi.sbc.org.br\/2024\/?page_id=1760"},"modified":"2024-09-02T14:13:51","modified_gmt":"2024-09-02T14:13:51","slug":"keynotes","status":"publish","type":"page","link":"https:\/\/sibgrapi.sbc.org.br\/2024\/keynotes\/","title":{"rendered":"Keynotes"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"1760\" class=\"elementor elementor-1760\">\n\t\t\t\t<div class=\"elementor-element elementor-element-6d8244ac e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"6d8244ac\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t<div class=\"elementor-element elementor-element-659f4146 elementor-widget__width-inherit elementor-widget elementor-widget-heading\" data-id=\"659f4146\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">KEYNOTES<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-d189f6a e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"d189f6a\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-e152b2f elementor-widget__width-initial elementor-widget elementor-widget-heading\" data-id=\"e152b2f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">What do 
image generators know?<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-8c76f21 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"8c76f21\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-3d8868f e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"3d8868f\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t<div class=\"elementor-element elementor-element-d6a874c elementor-widget elementor-widget-text-editor\" data-id=\"d6a874c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract<\/strong><\/p><p>Intrinsic images are maps of surface properties, like depth, normal and albedo.<\/p><p>I will show the results of simple experiments that suggest that very good modern depth, normal and albedo predictors are strongly sensitive to lighting \u2013 if you relight a scene in a reasonable way, the reported depth will change. This is intolerable. To fix this problem, we need to be able to produce many different lightings of the same scene. I will describe a method to do so. First, one learns a method to estimate albedo from images without any labelled training data (which turns out to perform well under traditional evaluations). 
Then, one forces an image generator to produce many different images that have the same albedo &#8212; with care, these are relightings of the same scene.\u00a0 I will show some interim results suggesting that learned relightings might genuinely improve estimates of depth, normal and albedo.<\/p><p>But if an image generator can relight a scene, it likely has a representation of depth, normal, albedo and other useful scene properties somewhere.\u00a0 I will show strong evidence that depth, normal and albedo can be extracted from two kinds of image generator, with minimal inconvenience or training data.\u00a0\u00a0 Furthermore, all these intrinsics are much less sensitive to lighting changes.\u00a0 This suggests that the right way to obtain intrinsic images might be to recover them from image generators.\u00a0 It also suggests image generators might &#8220;know&#8221; more about scene appearance than we realize.<\/p><p><strong>About the Speaker<\/strong><\/p><p>David Forsyth has held the Fulton-Watson-Copp Chair in Computer Science at the University of Illinois at Urbana-Champaign since 2014; he moved there from U.C. Berkeley, where he was also a full professor. He has published over 170 papers on computer vision, computer graphics, and machine learning and served as program co-chair for IEEE Computer Vision and Pattern Recognition in 2000, 2011, 2018, and 2021; general co-chair for CVPR 2006 and 2015 and ICCV 2019; and program co-chair for the European Conference on Computer Vision 2008. He is a regular program committee member of all major international conferences on computer vision and serves on several scientific advisory boards.\u00a0 He has served six years on the SIGGRAPH program committee and is a regular reviewer for that conference. He has served two terms as Editor-in-Chief of IEEE TPAMI. 
He has received best paper awards at the International Conference on Computer Vision and at the European Conference on Computer Vision. He also received an IEEE Technical Achievement award in 2005 for his research.\u00a0 He became an IEEE Fellow in 2009 and an ACM Fellow in 2014.\u00a0 His textbook, &#8220;Computer Vision: A Modern Approach&#8221; (joint with J. Ponce and published by Prentice Hall), is now widely adopted as a course text (adoptions include MIT, U. Wisconsin-Madison, UIUC, Georgia Tech, and U.C. Berkeley).\u00a0 A further textbook, &#8220;Probability and Statistics for Computer Science&#8221;, is in print; yet another (&#8220;Applied Machine Learning&#8221;) has just appeared.\u00a0\u00a0<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-5858bd6 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"5858bd6\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-eefa0ac elementor-widget elementor-widget-image\" data-id=\"eefa0ac\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"300\" height=\"300\" src=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/08\/WhatsApp-Image-2024-08-29-at-22.41.29-e1725057383884.jpeg\" class=\"attachment-full size-full wp-image-3065\" alt=\"\" srcset=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/08\/WhatsApp-Image-2024-08-29-at-22.41.29-e1725057383884.jpeg 300w, https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/08\/WhatsApp-Image-2024-08-29-at-22.41.29-e1725057383884-150x150.jpeg 150w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div 
class=\"elementor-element elementor-element-811a0e6 elementor-widget elementor-widget-heading\" data-id=\"811a0e6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">David Forsyth<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-82a1f7b e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"82a1f7b\" data-element_type=\"container\" data-e-type=\"container\" id=\"jean-id\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-bf25493 elementor-widget__width-initial elementor-widget elementor-widget-heading\" data-id=\"bf25493\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">A journey through hierarchies, watersheds, and minimum spanning trees for image segmentation<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-642a53c e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"642a53c\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-db94f52 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"db94f52\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-92e6a43 elementor-widget elementor-widget-image\" data-id=\"92e6a43\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1080\" height=\"1080\" src=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/jean_cousty.gif\" class=\"attachment-full size-full wp-image-2043\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a403238 elementor-widget elementor-widget-heading\" data-id=\"a403238\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Jean Cousty<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-752a339 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"752a339\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-7734596 elementor-widget elementor-widget-text-editor\" data-id=\"7734596\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract<\/strong><\/p>\n<p><span style=\"font-size: 16px; font-style: inherit; font-weight: inherit;\">A segmentation is a mid-level representation of an image (or more generally of a data set) into regions where two elements of the same region share similar features, such as spatial, spectral or semantic proximity. When each picture element belongs to a single region, the segmentation is crisp and forms a partition of the space; when the segmentation contains nested regions, it is a hierarchy providing a multiscale representation. 
Watershed is one of the most commonly used notions for image segmentation (crisp and hierarchical) both because of its accuracy and explainability and because of the existence of fast algorithms to compute it. Nowadays, it is often combined with deep learning models that are trained to predict excellent contour images from which watersheds are computed. In this talk, we present a set of remarkable results and algorithms for watersheds and hierarchies in edge-weighted graphs. In particular, we highlight, through equivalence theorems, the relations between these notions and the well-known combinatorial optimization problem of finding a minimum spanning tree. This provides us with both optimality conditions and an efficient algorithmic framework. Finally, we give an overview of recent works where these results are used to merge hierarchies, to compute watersheds in differential and interactive processes, to segment gigabyte-scale images out-of-core, to learn the seeds for interactive segmentation, and to learn hierarchies. For the last two works, the hierarchy algorithm is differentiated in order to train a neural network in an end-to-end manner with gradient descent algorithms.<\/span><\/p>\n<p><strong>About the speaker<\/strong><\/p>\n<p><span style=\"font-size: 16px; font-style: inherit; font-weight: inherit;\">Jean Cousty received the engineering degree from ESIEE Paris, France in 2004, the Ph.D. degree from Universit\u00e9 de Marne-la-Vall\u00e9e in 2007 and the Habilitation \u00e0 Diriger des Recherches from Universit\u00e9 Paris-Est in 2018.&nbsp; After a one-year post-doctoral period in the ASCLEPIOS research team at INRIA (Sophia-Antipolis, France), he has been teaching and doing research at the Computer Science Department, ESIEE Paris, and at Laboratoire d&#8217;Informatique Gaspard-Monge, Universit\u00e9 Gustave Eiffel. From 2015 to 2017, he was an invited professor in Brazil at UFMG and PUC Minas. 
His current research interests include graph-based approaches to image analysis and computer vision, hierarchical analysis, mathematical morphology,&nbsp; and discrete topology.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-9afca28 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"9afca28\" data-element_type=\"container\" data-e-type=\"container\" id=\"esteban-id\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-5de6758 elementor-widget__width-initial elementor-widget elementor-widget-heading\" data-id=\"5de6758\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Enhancing Realistic Rendering for Mixed and Virtual Reality Games<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-3656031 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"3656031\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-7b59d95 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"7b59d95\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-a526abd elementor-widget elementor-widget-image\" data-id=\"a526abd\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"441\" height=\"452\" src=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/WhatsApp-Image-2024-04-08-at-14.33.24-e1722361597147.jpeg\" class=\"attachment-full size-full wp-image-1751\" alt=\"\" srcset=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/WhatsApp-Image-2024-04-08-at-14.33.24-e1722361597147.jpeg 441w, https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/WhatsApp-Image-2024-04-08-at-14.33.24-e1722361597147-293x300.jpeg 293w\" sizes=\"(max-width: 441px) 100vw, 441px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2c9dc7e elementor-widget elementor-widget-heading\" data-id=\"2c9dc7e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Esteban Walter Gonzalez Clua\n<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-59b0e5a e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"59b0e5a\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t<div class=\"elementor-element elementor-element-7326220 elementor-widget elementor-widget-text-editor\" data-id=\"7326220\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract<\/strong><\/p><p>The video game industry continuously advances real-time rendering techniques, with an increasing focus on features like ray-tracing and global illumination. 
Additionally, VR\/MR\/AR games are pushing for high-quality rendering despite constraints such as high-definition displays (requiring many pixels), less powerful processors, and higher frequency requirements. This talk will present key optimization strategies, including hybrid denoising, foveated culling methods, optimization for foveated displays, and the usage of neural rendering approaches.<\/p><p><strong>About the speaker<\/strong><\/p><p>Esteban is a full professor at Universidade Federal Fluminense, coordinator of UFF Medialab, a CNPq level 1D researcher, and a Scientist of the State of Rio since 2019. He holds an undergraduate degree in Computer Science from Universidade de S\u00e3o Paulo and master&#8217;s and doctoral degrees from PUC-Rio. His main research and development areas are real-time rendering, digital games, virtual reality, and GPUs. He is one of the founders of SBGames (Brazilian Symposium of Games and Digital Entertainment) and was the president of the Game Committee of the Brazilian Computer Society from 2010 through 2014. He is the general chair of IFIP TC14 (Entertainment Computing). Esteban is also one of the founders of ABRAGAMES. In 2015 he was named an NVIDIA CUDA Fellow. Esteban is a member of the program committees of most digital entertainment conferences. Esteban has published 66 journal papers and 224 conference papers to date. In 2024 he is the program chair of the ACM High Performance Computing and general chair of the IFIP International Conference on Entertainment Computing. 
In 2023 Esteban received the SBGames Award for his life career.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-4c7d442 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"4c7d442\" data-element_type=\"container\" data-e-type=\"container\" id=\"alexandru-id\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-ae2cc76 elementor-widget__width-initial elementor-widget elementor-widget-heading\" data-id=\"ae2cc76\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Seeing is learning in high dimensions<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-e9acbd6 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"e9acbd6\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-c064325 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"c064325\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-6eebda8 elementor-widget elementor-widget-image\" data-id=\"6eebda8\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"393\" 
height=\"537\" src=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/IMG_4222.jpg\" class=\"attachment-full size-full wp-image-1782\" alt=\"\" srcset=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/IMG_4222.jpg 393w, https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/04\/IMG_4222-220x300.jpg 220w\" sizes=\"(max-width: 393px) 100vw, 393px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-557893f elementor-widget elementor-widget-heading\" data-id=\"557893f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Alexandru C. Telea<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-7081429 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"7081429\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t<div class=\"elementor-element elementor-element-c9ee71d elementor-widget elementor-widget-text-editor\" data-id=\"c9ee71d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract<\/strong><\/p><p>Multidimensional projections (MPs) are one of the techniques of choice for visually exploring large high-dimensional data. Machine learning (ML) and in particular deep learning applications are one of the most prominent generators of large, high-dimensional, and complex datasets which need visual exploration. 
In this talk, I will explore the connections, challenges, and potential synergies between these two fields. These involve \u201cseeing to learn\u201d, or how to use MP techniques to open the black box of ML models, and \u201clearning to see\u201d, or how to use ML to create better MP techniques for visualizing high-dimensional data. Specific topics include selecting suitable MP methods from the wide arena of available techniques; using ML to create faster and simpler-to-use MP methods; assessing projections from the novel perspectives of stability and the ability to handle time-dependent data; using projections to create dense representations of classifiers; and revisiting the question of what a high-quality projection is.<\/p><p><b>About the speaker<\/b><\/p><p><span style=\"font-style: inherit; font-weight: inherit;\">Alexandru Telea is Professor of Visual Data Analytics at the Department of Information and Computing Sciences, Utrecht University. He holds a PhD from Eindhoven University and has been active in the visualization field for over 25 years. He has been the program co-chair, general chair, or steering committee member of several conferences and workshops in visualization, including EuroVis, VISSOFT, SoftVis, and EGPGV. His main research interests cover unifying information visualization and scientific visualization, high-dimensional visualization, and visual analytics for machine learning. He has authored over 350 papers. 
He is the author of the textbook \u201cData Visualization: Principles and Practice\u201d (CRC Press, 2014), a worldwide reference in teaching data visualization.<\/span><\/p><p>\u00a0<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-29066a0 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"29066a0\" data-element_type=\"container\" data-e-type=\"container\" id=\"paulo-id\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-2c0afd3 elementor-widget__width-initial elementor-widget elementor-widget-heading\" data-id=\"2c0afd3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Graph-based image segmentation subject to high-level constraints<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-01952d3 e-flex e-con-boxed wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-parent\" data-id=\"01952d3\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-8104bb5 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"8104bb5\" data-element_type=\"container\" data-e-type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t<div class=\"elementor-element elementor-element-abf8c22 elementor-widget elementor-widget-text-editor\" data-id=\"abf8c22\" 
data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><strong>Abstract<\/strong><\/p><p>Graph-based frameworks based on combinatorial optimization can handle image segmentation as a graph partition problem subject to soft and hard constraints. Recent methods use optimal cuts in directed weighted graphs, enabling them to support several high-level priors, including global properties such as connectedness, shape constraints, boundary polarity, maximum allowable size, closeness constraints, and hierarchical constraints, which allow the segmentation to be customized to a given target object. In this talk, I will discuss some of our recent research on image segmentation subject to high-level constraints expected for the objects of interest, including methods in layered graphs, which lie at the intersection of the Generalized Graph Cut and General Fuzzy Connectedness frameworks, and methods in Component Trees.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-eebbfe5 e-con-full e-flex wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no e-con e-child\" data-id=\"eebbfe5\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-64c4dfc elementor-widget elementor-widget-image\" data-id=\"64c4dfc\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"606\" height=\"693\" src=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/03\/Grupo-de-mascara-10.png\" class=\"attachment-full size-full wp-image-1019\" alt=\"\" srcset=\"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/03\/Grupo-de-mascara-10.png 
606w, https:\/\/sibgrapi.sbc.org.br\/2024\/wp-content\/uploads\/2024\/03\/Grupo-de-mascara-10-262x300.png 262w\" sizes=\"(max-width: 606px) 100vw, 606px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4d6ee58 elementor-widget elementor-widget-heading\" data-id=\"4d6ee58\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Paulo Andr\u00e9 Vechiatto de Miranda<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>KEYNOTES What do image generators know? Abstract Intrinsic images are maps of surface properties, like depth, normal and albedo. I will show the results of simple experiments that suggest that very good modern depth, normal and albedo predictors are strongly sensitive to lighting \u2013 if you relight a scene in a reasonable way, the reported depth will change. This is intolerable. To fix this problem, we need to be able to produce many different lightings of the same scene. I will describe a method to do so. First, one learns a method to estimate albedo from images without any labelled training data (which turns out to perform well under traditional evaluations). Then, one forces an image generator to produce many different images that have the same albedo &#8212; with care, these are relightings of the same scene.\u00a0 I will show some interim results suggesting that learned relightings might genuinely improve estimates of depth, normal and albedo. 
But if an image generator can relight a scene, it likely has a representation of depth, normal, albedo and other useful scene properties somewhere.\u00a0 I will show strong evidence that depth, normal and albedo can be extracted from two kinds of image generator, with minimal inconvenience or training data.\u00a0\u00a0 Furthermore, all these intrinsics are much less sensitive to lighting changes.\u00a0 This suggests that the right way to obtain intrinsic images might be to recover them from image generators.\u00a0 It also suggests image generators might &#8220;know&#8221; more about scene appearance than we realize. About the Speaker David Forsyth has held the Fulton-Watson-Copp Chair in Computer Science at the University of Illinois at Urbana-Champaign since 2014; he moved there from U.C. Berkeley, where he was also a full professor. He has published over 170 papers on computer vision, computer graphics, and machine learning and served as program co-chair for IEEE Computer Vision and Pattern Recognition in 2000, 2011, 2018, and 2021; general co-chair for CVPR 2006 and 2015 and ICCV 2019; and program co-chair for the European Conference on Computer Vision 2008. He is a regular program committee member of all major international conferences on computer vision and serves on several scientific advisory boards.\u00a0 He has served six years on the SIGGRAPH program committee and is a regular reviewer for that conference. He has served two terms as Editor-in-Chief of IEEE TPAMI. He has received best paper awards at the International Conference on Computer Vision and at the European Conference on Computer Vision. He also received an IEEE Technical Achievement award in 2005 for his research.\u00a0 He became an IEEE Fellow in 2009 and an ACM Fellow in 2014.\u00a0 His textbook, &#8220;Computer Vision: A Modern Approach&#8221; (joint with J. 
Ponce and published by Prentice Hall), is now widely adopted as a course text (adoptions include MIT, U. Wisconsin-Madison, UIUC, Georgia Tech, and U.C. Berkeley).\u00a0 A further textbook, &#8220;Probability and Statistics for Computer Science&#8221;, is in print; yet another (&#8220;Applied Machine Learning&#8221;) has just appeared.\u00a0\u00a0 David Forsyth A journey through hierarchies, watersheds, and minimum spanning trees for image segmentation Jean Cousty Abstract A segmentation is a mid-level representation of an image (or more generally of a data set) into regions where two elements of the same region share similar features, such as spatial, spectral or semantic proximity. When each picture element belongs to a single region, the segmentation is crisp and forms a partition of the space; when the segmentation contains nested regions, it is a hierarchy providing a multiscale representation. Watershed is one of the most commonly used notions for image segmentation (crisp and hierarchical) both because of its accuracy and explainability and because of the existence of fast algorithms to compute it. Nowadays, it is often combined with deep learning models that are trained to predict excellent contour images from which watersheds are computed. In this talk, we present a set of remarkable results and algorithms for watersheds and hierarchies in edge-weighted graphs. In particular, we highlight, through equivalence theorems, the relations between these notions and the well-known combinatorial optimization problem of finding a minimum spanning tree. This provides us with both optimality conditions and an efficient algorithmic framework. Finally, we give an overview of recent works where these results are used to merge hierarchies, to compute watersheds in differential and interactive processes, to segment gigabyte-scale images out-of-core, to learn the seeds for interactive segmentation, and to learn hierarchies. 
For the last two works, the hierarchy algorithm is differentiated so that a neural network can be trained end-to-end with gradient descent.<\/p><p><strong>About the Speaker<\/strong><\/p><p>Jean Cousty received the engineering degree from ESIEE Paris, France, in 2004, the Ph.D. degree from Universit\u00e9 de Marne-la-Vall\u00e9e in 2007, and the Habilitation \u00e0 Diriger des Recherches from Universit\u00e9 Paris-Est in 2018. After a one-year post-doctoral period in the ASCLEPIOS research team at INRIA (Sophia Antipolis, France), he has been teaching and doing research at the Computer Science Department of ESIEE Paris and at the Laboratoire d&#8217;Informatique Gaspard-Monge, Universit\u00e9 Gustave Eiffel. From 2015 to 2017, he was an invited professor in Brazil, at UFMG and PUC Minas. His current research interests include graph-based approaches to image analysis and computer vision, hierarchical analysis, mathematical morphology, and discrete topology.<\/p><p><strong>Enhancing Realistic Rendering for Mixed and Virtual Reality Games<\/strong><\/p><p><strong>Esteban Walter Gonzalez Clua<\/strong><\/p><p><strong>Abstract<\/strong><\/p><p>The video game industry continuously advances real-time rendering techniques, with an increasing focus on features like ray tracing and global illumination. Additionally, VR\/MR\/AR games are pushing for high-quality rendering despite constraints such as high-definition displays (requiring many pixels), less powerful processors, and higher refresh-rate requirements. This talk will present key optimization strategies, including hybrid denoising, foveated culling methods, optimization for foveated displays, and the use of neural rendering approaches.<\/p><p><strong>About the Speaker<\/strong><\/p><p>Esteban is a full professor at Universidade Federal Fluminense, coordinator of UFF Medialab, a CNPq researcher (level 1D), and a Scientist of the State of Rio since 2019. He holds an undergraduate degree in Computer Science from Universidade de S\u00e3o Paulo and master&#8217;s and doctoral degrees from PUC-Rio.
His main<\/p>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"site-sidebar-layout":"no-sidebar","site-content-layout":"","ast-site-content-layout":"full-width-container","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"disabled","ast-breadcrumbs-content":"","ast-featured-img":"disabled","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-1760","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/pages\/1760","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/comments?post=1760"}],"version-history":[{"count":82,"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/pages\/1760\/revisions"}],"predecessor-version":[{"id":3109,"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/pages\/1760\/revisions\/3109"}],"wp:attachment":[{"href":"https:\/\/sibgrapi.sbc.org.br\/2024\/wp-json\/wp\/v2\/media?parent=1760"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}