List books in category Computers & Technology / Optical Data Processing

  • Technical Analysis for Algorithmic Pattern Recognition

    Technical Analysis for Algorithmic Pattern Recognition
    Prodromos E. Tsinaslanidis

    The main purpose of this book is to resolve deficiencies and limitations that currently exist when using Technical Analysis (TA). In particular, TA is used either by academics as an “economic test” of the weak-form Efficient Market Hypothesis (EMH) or by practitioners as a main or supplementary tool for deriving trading signals. This book approaches TA in a systematic way, utilizing all the available estimation theory and tests. This is achieved through the development of novel rule-based pattern recognizers and the implementation of statistical tests for assessing the significance of realized returns. More emphasis is given to technical patterns where subjectivity in the identification process is apparent. The proposed methodology is based on algorithmic, and thus unbiased, pattern recognition. The unified methodological framework presented in this book can serve as a benchmark both for future academic studies that test the null hypothesis of the weak-form EMH and for practitioners who want to embed TA within their trading/investment decision-making processes.

  • Understanding Augmented Reality: Concepts and Applications

    Understanding Augmented Reality: Concepts and Applications
    Alan B. Craig

    Understanding Augmented Reality addresses the elements that are required to create augmented reality experiences. The technology that supports augmented reality will come and go, evolve and change. The underlying principles for creating exciting, useful augmented reality experiences are timeless. Augmented reality designed from a purely technological perspective will lead to an AR experience that is novel and fun for one-time consumption – but is no more than a toy. Imagine a filmmaking book that discussed cameras and special effects software, but ignored cinematography and storytelling! In order to create compelling augmented reality experiences that stand the test of time and cause the participant in the AR experience to focus on the content of the experience – rather than the technology – one must consider how to maximally exploit the affordances of the medium. Understanding Augmented Reality addresses core conceptual issues regarding the medium of augmented reality as well as the technology required to support compelling augmented reality. By addressing AR as a medium at the conceptual level in addition to the technological level, the reader will learn to conceive of AR applications that are not limited by today’s technology. At the same time, ample examples are provided that show what is possible with current technology. • Explore the different techniques, technologies and approaches used in developing AR applications • Learn from the author's deep experience in virtual reality and augmented reality applications to succeed right off the bat, and avoid many of the traps that catch new developers and users of augmented reality experiences • Some AR examples can be experienced from within the book using downloadable software

  • Understanding Machine Learning: From Theory to Algorithms

    Understanding Machine Learning: From Theory to Algorithms
    Shai Shalev-Shwartz

    Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides a theoretical account of the fundamentals underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics, the book covers a wide array of central topics unaddressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for advanced undergraduates or beginning graduates, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics and engineering.

  • Eye Tracking Methodology: Theory and Practice, Edition 2

    Eye Tracking Methodology: Theory and Practice, Edition 2
    Andrew Duchowski

    Despite the availability of cheap, fast, accurate and usable eye trackers, there is still little information available on how to develop, implement and use these systems. This second edition of Andrew Duchowski’s successful guide to these systems contains significant additional material on the topic and fills this gap in the market with an accessible and comprehensive introduction. Opening with useful background information, including an introduction to the human visual system and key issues in visual perception and eye movement, the second part surveys eye-tracking devices and provides a detailed introduction to the technical requirements necessary for installing a system and developing an application program. The book focuses on video-based, corneal-reflection eye trackers – the most widely available and affordable type of system – before closing with a look at a number of interesting and challenging applications in human factors, collaborative systems, virtual reality, marketing and advertising. Key features of this second edition include: • Three new chapters providing technical descriptions of new (state-of-the-art) eye tracking technology • A completely new part describing experimental methodology, including experimental design and empirical guidelines, together with five case studies • Survey material regarding recent research publications, included within Part IV. The second edition of Eye Tracking Methodology is an invaluable guide for practitioners responsible for developing or implementing an eye tracking system, and can also be used as a teaching text for relevant modules on advanced undergraduate and postgraduate courses. ‘One of the most comprehensive books on eye tracking ever written. A must-read for novices and experts alike.’ Roel Vertegaal, Human Media Lab, Queen’s University, Canada ‘… the first comprehensive treatment of a critical methodological area, this book unites the field’s core theoretical underpinnings with real-world applications and practical know-how …’ Dario Salvucci, Drexel University, USA

  • Computer Age Statistical Inference: Algorithms, Evidence, and Data Science

    Computer Age Statistical Inference: Algorithms, Evidence, and Data Science
    Bradley Efron

    The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and in influence. 'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories – Bayesian, frequentist, Fisherian – individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. The book ends with speculation on the future direction of statistics and data science.

  • Biometrics: Advanced Identity Verification: The Complete Guide

    Biometrics: Advanced Identity Verification: The Complete Guide
    Julian Ashbourn

    Biometric identity verification (BIV) offers a radical alternative to passports, PIN numbers, ID cards and driving licences. It uses physiological or behavioural characteristics such as fingerprints, hand geometry, and retinas to check a person's identity. It is therefore much less open to fraudulent use, which makes it ideal for use in voting systems, financial transactions, benefit payment administration, border control, and prison access. This is the first book to provide business readers with an easy-to-read, non-technical introduction to BIV systems. It explains the background and then tells the reader how to get their system up and running quickly. It will be an invaluable read for practitioners, managers and IT personnel – in fact for anyone considering, or involved in, implementing a BIV system. Julian Ashbourn was one of the pioneers in integrating biometric technology and has provided input into many prototype BIV systems around the world.

  • Computational Color Imaging: Second International Workshop, CCIW 2009, Saint-Etienne, France, March 26-27, 2009. Revised Selected Papers

    Computational Color Imaging: Second International Workshop, CCIW 2009, Saint-Etienne, France, March 26-27, 2009. Revised Selected Papers
    Alain Trémeau

    We would like to welcome you to the proceedings of CCIW 2009, the Computational Color Imaging Workshop, held in Saint-Etienne, France, March 26–27, 2009. This, the second CCIW, was organized by the University Jean Monnet and the Laboratoire Hubert Curien UMR 5516 (Saint-Etienne, France) with the endorsement of the International Association for Pattern Recognition (IAPR), the French Association for Pattern Recognition and Interpretation (AFRIF) affiliated with IAPR, and the "Groupe Français de l'Imagerie Numérique Couleur" (GFINC). The first CCIW was organized in 2007 in Modena, Italy, with the endorsement of IAPR. That workshop was held along with the International Conference on Image Analysis and Processing (ICIAP), the main conference on image processing and pattern recognition organized every two years by the Group of Italian Researchers on Pattern Recognition (GIRPR) affiliated with the International Association for Pattern Recognition (IAPR). Our first goal, since we began planning the workshop, was to bring together engineers and scientists from various imaging companies and from technical communities all over the world to discuss diverse aspects of their latest work, ranging from theoretical developments to practical applications in the field of color imaging, color image processing and analysis. The workshop was therefore intended for researchers and practitioners in the digital imaging, multimedia, visual communications, computer vision, and consumer electronics industries, who are interested in the fundamentals of color image processing and its emerging applications.

  • Hexagonal Image Processing: A Practical Approach

    Hexagonal Image Processing: A Practical Approach
    Lee Middleton

    The sampling lattice used to digitize continuous image data is a significant determinant of the quality of the resulting digital image, and therefore, of the efficacy of its processing. The nature of sampling lattices is intimately tied to the tessellations of the underlying continuous image plane. To allow uniform sampling of arbitrary size images, the lattice needs to correspond to a regular – spatially repeatable – tessellation. Although drawings and paintings from many ancient civilisations made ample use of regular triangular, square and hexagonal tessellations, and Euler later proved that these three are indeed the only regular planar tessellations possible, sampling along only the square lattice has found use in forming digital images. The reasons for this are varied, including extensibility to higher dimensions, but the literature on the ramifications of this commitment to the square lattice for the dominant case of planar data is relatively limited. There seems to be neither a book nor a survey paper on the subject of alternatives. This book on hexagonal image processing is therefore quite appropriate. Lee Middleton and Jayanthi Sivaswamy well motivate the need for a concerted study of the hexagonal lattice and image processing in terms of their known uses in biological systems, as well as the computational and other theoretical and practical advantages that accrue from this approach. They present the state of the art of hexagonal image processing and a comparative study of processing images sampled using hexagonal and square grids.

  • Brain Informatics: International Conference, BI 2009, Beijing, China, October 22-24, Proceedings

    Brain Informatics: International Conference, BI 2009, Beijing, China, October 22-24, Proceedings
    Ning Zhong

    This volume contains the papers selected for presentation at the 2009 International Conference on Brain Informatics (BI 2009) held at Beijing University of Technology, China, on October 22–24, 2009. It was organized by the Web Intelligence Consortium (WIC) and the IEEE Computational Intelligence Society Task Force on Brain Informatics (IEEE TF-BI). The conference was held jointly with the 2009 International Conference on Active Media Technology (AMT 2009). Brain informatics (BI) has emerged as an interdisciplinary research field that focuses on studying the mechanisms underlying the human information processing system (HIPS). It investigates the essential functions of the brain, ranging from perception to thinking, and encompassing such areas as multi-perception, attention, memory, language, computation, heuristic search, reasoning, planning, decision-making, problem-solving, learning, discovery, and creativity. The goal of BI is to develop and demonstrate a systematic approach to achieving an integrated understanding of both macroscopic and microscopic level working principles of the brain, by means of experimental, computational, and cognitive neuroscience studies, as well as utilizing advanced Web Intelligence (WI) centric information technologies. BI represents a potentially revolutionary shift in the way that research is undertaken. It attempts to capture new forms of collaborative and interdisciplinary work. Following this vision, new kinds of BI methods and global research communities will emerge, through infrastructure on the wisdom Web and knowledge grids that enables high-speed, distributed, large-scale analysis and computations, and radically new ways of sharing data/knowledge.

  • Numerical Geometry of Images: Theory, Algorithms, and Applications

    Numerical Geometry of Images: Theory, Algorithms, and Applications
    Ron Kimmel

    Numerical Geometry of Images examines computational methods and algorithms in image processing. It explores applications like shape from shading, color-image enhancement and segmentation, edge integration, offset curve computation, symmetry axis computation, path planning, minimal geodesic computation, and invariant signature calculation. In addition, it describes and utilizes tools from mathematical morphology, differential geometry, numerical analysis, and calculus of variations. Graduate students, professionals, and researchers with interests in computational geometry, image processing, computer graphics, and algorithms will find this new text/reference an indispensable source of insight and instruction.

  • Making Beautiful Deep-Sky Images: Astrophotography with Affordable Equipment and Software, Edition 2

    Making Beautiful Deep-Sky Images: Astrophotography with Affordable Equipment and Software, Edition 2
    Greg Parker

    I have recently discovered the most satisfying hobby so far, and to be frank, I have pursued quite a few hobbies in my time! This one encompasses computers, optics, precision mechanics, digital image processing and artistic appreciation, and it therefore satisfies just about every major interest I have in one go. The hobby is taking photographic images of the deep-sky. I have not met anyone, so far, that has not been moved, sometimes to a great extent, by the images you will find within the pages of this book. Some people will actually admit to being frightened by the vastness of space that these images depict. I am not frightened by these images, but I am certainly awe-struck by them, and they do make me feel rather insignificant regarding the grand scale of things. I am also still firmly in the grip of being totally amazed that the capability to take such awe-inspiring images is now available to anyone with sufficient time and effort to dedicate to this most rewarding of hobbies. This book has two aims. The first is to show you the richness, wonder, and beauty of deep-sky objects. The second is to show you how you can take these images for yourself, using readily available commercial equipment.

  • Machine Learning in Medical Imaging: First International Workshop, MLMI 2010, Held in Conjunction with MICCAI 2010, Beijing, China, September 20, 2010, Proceedings

    Machine Learning in Medical Imaging: First International Workshop, MLMI 2010, Held in Conjunction with MICCAI 2010, Beijing, China, September 20, 2010, Proceedings
    Fei Wang

    The first International Workshop on Machine Learning in Medical Imaging, MLMI 2010, was held at the China National Convention Center, Beijing, China on September 20, 2010 in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2010. Machine learning plays an essential role in the medical imaging field, including image segmentation, image registration, computer-aided diagnosis, image fusion, image-guided therapy, image annotation, and image database retrieval. With advances in medical imaging, new imaging modalities, and methodologies such as cone-beam/multi-slice CT, 3D ultrasound, tomosynthesis, diffusion-weighted MRI, electrical impedance tomography, and diffuse optical tomography, new machine-learning algorithms and applications are demanded in the medical imaging field. Single-sample evidence provided by the patient’s imaging data is often not sufficient to provide satisfactory performance; therefore, tasks in medical imaging require learning from examples to simulate a physician’s prior knowledge of the data. MLMI 2010 was the first workshop on this topic. The workshop focuses on major trends and challenges in this area, and works to identify new techniques and their use in medical imaging. Our goal is to help advance the scientific research within the broad field of medical imaging and machine learning. The submissions for this year's meeting were of very high quality. Authors were asked to submit full-length papers for review. A total of 38 papers were submitted to the workshop in response to the call for papers.

  • Time-of-Flight and Structured Light Depth Cameras: Technology and Applications

    Time-of-Flight and Structured Light Depth Cameras: Technology and Applications
    Pietro Zanuttigh

    This book provides a comprehensive overview of the key technologies and applications related to new cameras that have brought 3D data acquisition to the mass market. It covers both the theoretical principles behind the acquisition devices and the practical implementation aspects of the computer vision algorithms needed for the various applications. Real data examples are used in order to show the performance of the various algorithms. The performance and limitations of the depth camera technology are explored, along with an extensive review of the most effective methods for addressing challenges in common applications. Applications covered in specific detail include scene segmentation, 3D scene reconstruction, human pose estimation and tracking, and gesture recognition. This book offers students, practitioners and researchers the tools necessary to explore the potential uses of depth data in light of the expanding number of devices available for sale. It explores the impact of these devices on the rapidly growing field of depth-based computer vision.

  • Image and Video-Based Artistic Stylisation

    Image and Video-Based Artistic Stylisation
    Paul Rosin

    Non-photorealistic rendering (NPR) is a combination of computer graphics and computer vision that produces renderings in various artistic, expressive or stylized ways such as painting and drawing. This book focuses on image and video based NPR, where the input is a 2D photograph or a video rather than a 3D model. 2D NPR techniques have application in areas as diverse as consumer and professional digital photography and visual effects for TV and film production. The book covers the full range of the state of the art of NPR with every chapter authored by internationally renowned experts in the field, covering both classical and contemporary techniques. It will enable both graduate students in computer graphics, computer vision or image processing and professional developers alike to quickly become familiar with contemporary techniques, enabling them to apply 2D NPR algorithms in their own projects.

  • Mathematical Methods and Modelling in Hydrocarbon Exploration and Production

    Mathematical Methods and Modelling in Hydrocarbon Exploration and Production
    Armin Iske

    Hydrocarbon exploration and production incorporate great technology challenges for the oil and gas industry. In order to meet the world's future demand for oil and gas, further technological advance is needed, which in turn requires research across multiple disciplines, including mathematics, geophysics, geology, petroleum engineering, signal processing, and computer science. This book addresses important aspects and fundamental concepts in hydrocarbon exploration and production. Moreover, new developments and recent advances in the relevant research areas are discussed, whereby special emphasis is placed on mathematical methods and modelling. The book reflects the multi-disciplinary character of the hydrocarbon production workflow, ranging from seismic data imaging, seismic analysis and interpretation and geological model building, to numerical reservoir simulation. Various challenges concerning the production workflow are discussed in detail. The thirteen chapters of this joint work, authored by international experts from academic and industrial institutions, include survey papers of expository character as well as original research articles. Large parts of the material presented in this book were developed between November 2000 and April 2004 through the European research and training network NetAGES, "Network for Automated Geometry Extraction from Seismic". The new methods described here are currently being implemented as software tools at Schlumberger Stavanger Research, one of the world's largest service providers to the oil industry.

  • Machine Learning for Audio, Image and Video Analysis: Theory and Applications, Edition 2

    Machine Learning for Audio, Image and Video Analysis: Theory and Applications, Edition 2
    Francesco Camastra

    This second edition focuses on audio, image and video data, the three main types of input that machines deal with when interacting with the real world. A set of appendices provides the reader with self-contained introductions to the mathematical background necessary to read the book. The book is divided into three main parts. The first part, From Perception to Computation, introduces methodologies aimed at representing the data in forms suitable for computer processing, especially when it comes to audio and images. The second part, Machine Learning, includes an extensive overview of statistical techniques aimed at addressing three main problems, namely classification (automatically assigning a data sample to one of the classes belonging to a predefined set), clustering (automatically grouping data samples according to the similarity of their properties) and sequence analysis (automatically mapping a sequence of observations into a sequence of human-understandable symbols). The third part, Applications, shows how the abstract problems defined in the second part underlie technologies capable of performing complex tasks such as the recognition of hand gestures or the transcription of handwritten data. Machine Learning for Audio, Image and Video Analysis is suitable for students seeking to acquire a solid background in machine learning as well as for practitioners wishing to deepen their knowledge of the state-of-the-art. All application chapters are based on publicly available data and free software packages, thus allowing readers to replicate the experiments.

  • Biometric Technology: Authentication, Biocryptography, and Cloud-Based Architecture

    Biometric Technology: Authentication, Biocryptography, and Cloud-Based Architecture
    Ravi Das

    Most biometric books are either extraordinarily technical for technophiles or extremely elementary for the lay person. Striking a balance between the two, Biometric Technology: Authentication, Biocryptography, and Cloud-Based Architecture is ideal for business, IT, or security managers who are faced with the task of making purchasing, migration, or adoption decisions. It brings biometrics down to an understandable level, so that you can immediately begin to implement the concepts discussed. Exploring the technological and social implications of widespread biometric use, the book considers the science and technology behind biometrics as well as how it can be made more affordable for small and medium-sized businesses. It also presents the results of recent research on how the principles of cryptography can make biometrics more secure. Covering biometric technologies in the cloud, including security and privacy concerns, the book includes a chapter that serves as a "how-to manual" on procuring and deploying any type of biometric system. It also includes specific examples and case studies of actual biometric deployments of localized and national implementations in the U.S. and other countries. The book provides readers with a technical background on the various biometric technologies and how they work. Examining optimal application in various settings and their respective strengths and weaknesses, it considers ease of use, false positives and negatives, and privacy and security issues. It also covers emerging applications such as biocryptography. Although the text can be understood by just about anybody, it is an ideal resource for corporate-level executives who are considering implementing biometric technologies in their organizations.

  • Data Matching: Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection

    Data Matching: Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection
    Peter Christen

    Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of data matching, and its scalability to large databases. Peter Christen’s book is divided into three parts: Part I, “Overview”, introduces the subject by presenting several sample applications and their special challenges, as well as a general overview of a generic data matching process. Part II, “Steps of the Data Matching Process”, then details its main steps like pre-processing, indexing, field and record comparison, classification, and quality evaluation. Lastly, Part III, “Further Topics”, deals with specific aspects like privacy, real-time matching, or matching unstructured data. Finally, it briefly describes the main features of many research and open source systems available today. By providing the reader with a broad range of data matching concepts and techniques and touching on all aspects of the data matching process, this book helps researchers as well as students specializing in data quality or data matching aspects to familiarize themselves with recent research advances and to identify open research challenges in the area of data matching. To this end, each chapter of the book includes a final section that provides pointers to further background and research material. Practitioners will better understand the current state of the art in data matching as well as the internal workings and limitations of current systems. In particular, they will learn that it is often not feasible to simply implement an existing off-the-shelf data matching system without substantial adaptation and customization. Such practical considerations are discussed for each of the major steps in the data matching process.
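
    As a rough, hypothetical illustration of two of the process steps named above (blocking/indexing, then field comparison with a simple threshold classifier), the Python sketch below links two tiny invented record lists; it is not taken from the book, and all field names, records and the 0.8 threshold are made up for the example.

      # Toy illustration (hypothetical data): blocking by zip code, approximate
      # surname comparison, and classification by a simple score threshold.
      from difflib import SequenceMatcher

      db_a = [{"id": "a1", "surname": "Smith",  "zip": "2600"},
              {"id": "a2", "surname": "Miller", "zip": "2601"}]
      db_b = [{"id": "b1", "surname": "Smyth",  "zip": "2600"},
              {"id": "b2", "surname": "Muller", "zip": "2612"}]

      def similarity(s, t):
          """Approximate string similarity in [0, 1]."""
          return SequenceMatcher(None, s.lower(), t.lower()).ratio()

      # Blocking (indexing): only records sharing a zip code are compared,
      # which avoids the quadratic all-pairs comparison.
      blocks = {}
      for rec in db_b:
          blocks.setdefault(rec["zip"], []).append(rec)

      matches = []
      for rec_a in db_a:
          for rec_b in blocks.get(rec_a["zip"], []):
              score = similarity(rec_a["surname"], rec_b["surname"])
              if score >= 0.8:  # classification by thresholding the field score
                  matches.append((rec_a["id"], rec_b["id"], round(score, 2)))

      print(matches)  # [('a1', 'b1', 0.8)]; 'a2'/'b2' fall in different blocks and are never compared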

  • Artificial General Intelligence: 8th International Conference, AGI 2015, Berlin, Germany, July 22-25, 2015, Proceedings

    Artificial General Intelligence: 8th International Conference, AGI 2015, Berlin, Germany, July 22-25, 2015, Proceedings
    Jordi Bieger

    This book constitutes the refereed proceedings of the 8th International Conference on Artificial General Intelligence, AGI 2015, held in Berlin, Germany in July 2015. The 41 papers were carefully reviewed and selected from 72 submissions. The AGI conference series has played, and continues to play, a significant role in the resurgence of research on artificial intelligence in the deeper, original sense of the term. The conferences encourage interdisciplinary research based on different understandings of intelligence, and explore different approaches. AGI research differs from ordinary AI research by stressing the versatility and wholeness of intelligence, and by carrying out the engineering practice according to an outline of a system comparable, in a certain sense, to the human mind.

  • Advances in Multimedia Information Processing -- PCM 2015: 16th Pacific-Rim Conference on Multimedia, Gwangju, South Korea, September 16-18, 2015, Proceedings, Part 2

    Advances in Multimedia Information Processing — PCM 2015: 16th Pacific-Rim Conference on Multimedia, Gwangju, South Korea, September 16-18, 2015, Proceedings, Part 2
    Yo-Sung Ho

    The two-volume set LNCS 9314 and 9315 constitutes the proceedings of the 16th Pacific-Rim Conference on Multimedia, PCM 2015, held in Gwangju, South Korea, in September 2015. A total of 138 full and 32 short papers presented in these proceedings were carefully reviewed and selected from 224 submissions. The papers were organized in topical sections named: image and audio processing; multimedia content analysis; multimedia applications and services; video coding and processing; multimedia representation learning; visual understanding and recognition on big data; coding and reconstruction of multimedia data with spatial-temporal information; 3D image/video processing and applications; video/image quality assessment and processing; social media computing; human action recognition in social robotics and video surveillance; recent advances in image/video processing; new media representation and transmission technologies for emerging UHD services.

  • One Jump Ahead: Computer Perfection at Checkers, Edition 2

    One Jump Ahead: Computer Perfection at Checkers, Edition 2
    Jonathan Schaeffer

    It’s hard to believe that it’s been over a decade since One Jump Ahead: Challenging Human Supremacy at Checkers was published. I’m delighted to have the opportunity to update and expand the book. The first edition ended on a sad note and that was reflected in the writing. It is now eleven years later and the project has come to a satisfying conclusion. Since its inception, the checkers project has consumed eighteen years of my life—twenty if you count the pre-CHINOOK and post-solving work. It’s hard for me to believe that I actually stuck with it for that long. My wife, Steph, would probably have something witty to say about my obsessive behavior. Rereading the book after a decade was difficult for me. When I originally wrote One Jump Ahead, I vowed to be candid in my telling of the story. That meant being honest about what went right and what went wrong. I have been criticized for being hard on some of the characters. That may be so, but I hope everyone will agree that the person receiving the most criticism was, justifiably, me. I tried to be balanced in the storytelling, reflecting things as they really happened and not as some sanitized everyone-lived-happily-ever-after tale.

  • Beginning Microsoft Kinect for Windows SDK 2.0: Motion and Depth Sensing for Natural User Interfaces

    Beginning Microsoft Kinect for Windows SDK 2.0: Motion and Depth Sensing for Natural User Interfaces
    Mansib Rahman

    Develop applications in Microsoft Kinect 2 using gesture and speech recognition, scanning of objects in 3D, and body tracking. Create motion-sensing applications for entertainment and practical uses, including for commercial products and industrial applications. Beginning Microsoft Kinect for Windows SDK 2.0 is dense with code and examples to ensure that you understand how to build Kinect applications that can be used in the real world. Techniques and ideas are presented to facilitate incorporation of the Kinect with other technologies. What You Will Learn: • Set up Kinect 2 and a workspace for Kinect application development • Access audio, color, infrared, and skeletal data streams from Kinect • Use gesture and speech recognition • Perform computer vision manipulations on image data streams • Develop Windows Store apps and Unity3D applications with Kinect 2 • Take advantage of Kinect Fusion (3D object mapping technology) and Kinect Ripple (Kinect projector infotainment system). Who This Book Is For: Developers who want to include the simple but powerful Kinect technology into their projects, including amateurs and hobbyists, and professional developers.

  • Computational Models of Speech Pattern Processing

    Computational Models of Speech Pattern Processing
    Keith Ponting

    Proceedings of the NATO Advanced Study Institute on Computational Models of Speech Pattern Processing, held in St. Helier, Jersey, UK, July 7-18, 1997

  • Integrating 3D Modeling, Photogrammetry and Design

    Integrating 3D Modeling, Photogrammetry and Design
    Shaun Foster

    This book looks at the convergent nature of technology and its relationship to the field of photogrammetry and 3D design. This is a facet of a broader discussion of the nature of technology itself and the relationship of technology to art, as well as an examination of the educational process. In the field of technology-influenced design-based education it is natural to push for advanced technology, yet within a larger institution the constraints of budget and adherence to tradition must be accepted. These opposing forces create a natural balance; in some cases constraints lead to greater creativity than freedom ever can – but in other cases the opposite is true. This work offers insights into ways to integrate new technologies into the field of design, and from a broader standpoint it also looks ahead, raising further questions and looking to the near future as to what additional technologies might cause further disruptions to 3D design as well as wonderful creative opportunities.

  • Fundamentals of Multimedia: Edition 2

    Fundamentals of Multimedia: Edition 2
    Ze-Nian Li

    Multimedia is a ubiquitous part of the technological environment in which we work and think, touching upon almost all aspects of computer science and engineering. This comprehensive textbook introduces the Fundamentals of Multimedia in an accessible manner, addressing real issues commonly faced in the workplace. Suitable for both advanced undergraduate and graduate students, the essential concepts are explained in a practical way to enable students to apply their existing skills to address problems in multimedia. Fully revised and updated, this new edition now includes coverage of such topics as 3D TV, social networks, high-efficiency video compression and conferencing, wireless and mobile networks, and their attendant technologies. Topics and features: presents a brief history and overview of the key concepts in multimedia, including important data representations and color science; reviews lossless and lossy compression methods for image, video and audio data; examines the demands placed by multimedia communications on wired and wireless networks; discusses the impact of social media and cloud computing on information sharing, and on multimedia content search and retrieval; includes study exercises at the end of each chapter; provides supplementary resources for both students and instructors at an associated website. This classroom-tested textbook is ideal for higher-level undergraduate and graduate courses on multimedia systems. Practitioners in industry interested in current multimedia technologies will also find the book to be a useful reference.

  • Topics in Surface Modeling

    Topics in Surface Modeling
    Hans Hagen

    Contains recent ideas and results in three areas of growing importance in curve and surface design: algebraic methods, variational surface design, and some special applications. Leading researchers from throughout the world have contributed their latest work and provided several promising solutions to open issues in surface modeling.

  • Computational Intelligence in Music, Sound, Art and Design: 6th International Conference, EvoMUSART 2017, Amsterdam, The Netherlands, April 19–21, 2017, Proceedings

    Computational Intelligence in Music, Sound, Art and Design: 6th International Conference, EvoMUSART 2017, Amsterdam, The Netherlands, April 19–21, 2017, Proceedings
    João Correia

    This book constitutes the refereed proceedings of the 6th International Conference on Computational Intelligence in Music, Sound, Art and Design, EvoMUSART 2017, held in Amsterdam, The Netherlands, in April 2017, co-located with the Evo*2017 events EuroGP, EvoCOP and EvoApplications. The 24 revised full papers presented were carefully reviewed and selected from 29 submissions. The papers cover a wide range of topics and application areas, including: generative approaches to music, graphics, game content, and narrative; music information retrieval; computational aesthetics; the mechanics of interactive evolutionary computation; computer-aided design; and the art theory of evolutionary computation.

  • Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence

    Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence
    Sandro Skansi

    This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state-of-the-art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website. Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism. This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as fields such as linguistics, logic, philosophy, and psychology.

  • Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications

    Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications
    Michael A. Sutton

    Image Correlation for Shape, Motion and Deformation Measurements provides a comprehensive overview of data extraction through image analysis. Readers will find an in-depth look at various single- and multi-camera models (2D-DIC and 3D-DIC), two- and three-dimensional computer vision, and volumetric digital image correlation (VDIC). Fundamentals of accurate image matching are described, along with presentations of both new methods for quantitative error estimates in correlation-based motion measurements, and the effect of out-of-plane motion on 2D measurements. Thorough appendices offer descriptions of continuum mechanics formulations, methods for local surface strain estimation and non-linear optimization, as well as terminology in statistics and probability. With equal treatment of computer vision fundamentals and techniques for practical applications, this volume is both a reference for academic and industry-based researchers and engineers, as well as a valuable companion text for appropriate vision-based educational offerings.

  • Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation

    Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation
    Hong Cheng

    Autonomous intelligent vehicles pose unique challenges in robotics that encompass issues of environment perception and modeling, localization and map building, path planning and decision-making, and motion control. This important text/reference presents state-of-the-art research on intelligent vehicles, covering not only topics of object/obstacle detection and recognition, but also aspects of vehicle motion control. With an emphasis on both high-level concepts and practical detail, the text links theory, algorithms, and issues of hardware and software implementation in intelligent vehicle research. Topics and features: presents a thorough introduction to the development and latest progress in intelligent vehicle research, and proposes a basic framework; provides detection and tracking algorithms for structured and unstructured roads, as well as on-road vehicle detection and tracking algorithms using boosted Gabor features; discusses an approach for multiple sensor-based multiple-object tracking, in addition to an integrated DGPS/IMU positioning approach; examines a vehicle navigation approach using global views; introduces algorithms for lateral and longitudinal vehicle motion control. An essential reference for researchers in the field, the broad coverage of all aspects of this research will also appeal to graduate students of computer science and robotics who are interested in intelligent vehicles.

  • Computer Algebra and Geometric Algebra with Applications: 6th International Workshop, IWMM 2004, Shanghai, China, May 19-21, 2004 and International Workshop, GIAE 2004, Xian, China, May 24-28, 2004. Revised Selected Papers

    Computer Algebra and Geometric Algebra with Applications: 6th International Workshop, IWMM 2004, Shanghai, China, May 19-21, 2004 and International Workshop, GIAE 2004, Xian, China, May 24-28, 2004. Revised Selected Papers
    Hongbo Li

    Mathematics Mechanization consists of theory, software and application of computerized mathematical activities such as computing, reasoning and discovering. Its unique feature can be succinctly described as AAA (Algebraization, Algorithmization, Application). The name “Mathematics Mechanization” has its origin in the work of Hao Wang (1960s), one of the pioneers in using computers to do research in mathematics, particularly in automated theorem proving. Since the 1970s, this research direction has been actively pursued and extensively developed by Prof. Wen-tsun Wu and his followers. It differs from the closely related disciplines like Computer Mathematics, Symbolic Computation and Automated Reasoning in that its goal is to make algorithmic studies and applications of mathematics the major trend of mathematics development in the information age. The International Workshop on Mathematics Mechanization (IWMM) was initiated by Prof. Wu in 1992, and has ever since been held by the Key Laboratory of Mathematics Mechanization (KLMM) of the Chinese Academy of Sciences. There have been seven workshops in the series up to now. At each workshop, several experts are invited to deliver plenary lectures on cutting-edge methods and algorithms of the selected theme. The workshop is also a forum for people working on related subjects to meet, collaborate and exchange ideas.

  • Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications

    Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications
    Meinard Müller

    This textbook provides both profound technological knowledge and a comprehensive treatment of essential topics in music processing and music information retrieval. Including numerous examples, figures, and exercises, this book is suited for students, lecturers, and researchers working in audio engineering, computer science, multimedia, and musicology. The book consists of eight chapters. The first two cover foundations of music representations and the Fourier transform—concepts that are then used throughout the book. In the subsequent chapters, concrete music processing tasks serve as a starting point. Each of these chapters is organized in a similar fashion and starts with a general description of the music processing scenario at hand before integrating it into a wider context. It then discusses—in a mathematically rigorous way—important techniques and algorithms that are generally applicable to a wide range of analysis, classification, and retrieval problems. At the same time, the techniques are directly applied to a specific music processing task. By mixing theory and practice, the book’s goal is to offer detailed technological insights as well as a deep understanding of music processing applications. Each chapter ends with a section that includes links to the research literature, suggestions for further reading, a list of references, and exercises. The chapters are organized in a modular fashion, thus offering lecturers and readers many ways to choose, rearrange or supplement the material. Accordingly, selected chapters or individual sections can easily be integrated into courses on general multimedia, information science, signal processing, music informatics, or the digital humanities.

  • Multiresolution Image Processing and Analysis

    Multiresolution Image Processing and Analysis
    A. Rosenfeld

    This book results from a Workshop on Multiresolution Image Processing and Analysis, held in Leesburg, VA on July 19-21, 1982. It contains updated versions of most of the papers that were presented at the Workshop, as well as new material added by the authors. Four of the presented papers were not available for inclusion in the book: D. Sabbah, A computing with connections approach to visual recognition; R. M. Haralick, Fitting the gray tone intensity surface as a function of neighborhood size; E. M. Riseman, Hierarchical boundary formation; and W. L. Mahaffey, L. S. Davis, and J. K. Aggarwal, Region correspondence in multi-resolution images taken from dynamic scenes. The number and variety of papers indicates the timeliness of the Workshop. Multiresolution methods are rapidly gaining recognition as an important theme in image processing and analysis. I would like to express my thanks to the National Science Foundation for their support of the Workshop under Grant MCS-82-05942; to Barbara Hope for organizing and administering the Workshop; to Janet Salzman and Fran Cohen, for retyping the papers; and above all, to the speakers and other participants, for making the Workshop possible.

  • Digital Waveform Generation

    Digital Waveform Generation
    Pete Symons

    This concise overview of digital signal generation will introduce you to powerful, flexible and practical digital waveform generation techniques. These techniques, based on phase-accumulation and phase-amplitude mapping, will enable you to generate sinusoidal and arbitrary real-time digital waveforms to fit your desired waveshape, frequency, phase offset and amplitude, and to design bespoke digital waveform generation systems from scratch. Including a review of key definitions, a brief explanatory introduction to classical analogue waveform generation and its basic conceptual and mathematical foundations, coverage of recursion, DDS, IDFT and dynamic waveshape and spectrum control, a chapter dedicated to detailed examples of hardware design, and accompanied by downloadable Mathcad models created to help you explore 'what if?' design scenarios, this is essential reading for practitioners in the digital signal processing community, and for students who want to understand and apply digital waveform synthesis techniques.
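
    As a generic sketch of the phase-accumulation and phase-amplitude-mapping idea described above (not the book's downloadable Mathcad models), the following Python fragment generates a sine wave with a 32-bit phase accumulator and a 1024-entry lookup table; all sizes, rates and frequencies are illustrative.

      # Minimal direct digital synthesis (DDS) sketch: a phase accumulator steps
      # through a sine lookup table (phase-amplitude mapping).  Illustrative values.
      import numpy as np

      ACC_BITS   = 32          # phase accumulator width in bits
      TABLE_BITS = 10          # 1024-entry sine lookup table
      FS         = 48_000.0    # sample rate, Hz
      F_OUT      = 440.0       # desired output frequency, Hz

      table = np.sin(2 * np.pi * np.arange(2 ** TABLE_BITS) / 2 ** TABLE_BITS)

      # Frequency tuning word: the phase increment added every sample.
      ftw = int(round(F_OUT * 2 ** ACC_BITS / FS))

      phase, samples = 0, []
      for _ in range(1024):
          # The top TABLE_BITS of the accumulator index the amplitude table.
          samples.append(table[phase >> (ACC_BITS - TABLE_BITS)])
          phase = (phase + ftw) & (2 ** ACC_BITS - 1)   # wrap on overflow

      print(len(samples), samples[:4])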

  • Introduction to Computational Genomics: A Case Studies Approach

    Introduction to Computational Genomics: A Case Studies Approach
    Nello Cristianini

    Where did SARS come from? Have we inherited genes from Neanderthals? How do plants use their internal clock? The genomic revolution in biology enables us to answer such questions. But the revolution would have been impossible without the support of powerful computational and statistical methods that enable us to exploit genomic data. Many universities are introducing courses to train the next generation of bioinformaticians: biologists fluent in mathematics and computer science, and data analysts familiar with biology. This readable and entertaining book, based on successful taught courses, provides a roadmap to navigate entry to this field. It guides the reader through key achievements of bioinformatics, using a hands-on approach. Statistical sequence analysis, sequence alignment, hidden Markov models, gene and motif finding and more, are introduced in a rigorous yet accessible way. A companion website provides the reader with Matlab-related software tools for reproducing the steps demonstrated in the book.

  • Digital Image Processing: An Algorithmic Introduction Using Java, Edition 2

    Digital Image Processing: An Algorithmic Introduction Using Java, Edition 2
    Wilhelm Burger

    This revised and expanded new edition of an internationally successful classic presents an accessible introduction to the key methods in digital image processing for both practitioners and teachers. Emphasis is placed on practical application, presenting precise algorithmic descriptions in an unusually high level of detail, while highlighting direct connections between the mathematical foundations and concrete implementation. The text is supported by practical examples and carefully constructed chapter-ending exercises drawn from the authors' years of teaching experience, including easily adaptable Java code and completely worked out examples. Source code, test images and additional instructor materials are also provided at an associated website. Digital Image Processing is the definitive textbook for students, researchers, and professionals in search of critical analysis and modern implementations of the most important algorithms in the field, and is also eminently suitable for self-study.

  • Inside PixInsight

    Inside PixInsight
    Warren A. Keller

    In this book, Warren Keller reveals the secrets of the astro-image processing software PixInsight in a practical and easy to follow manner, allowing the reader to produce stunning astrophotographs from even mediocre data. As the first comprehensive post-processing platform to be created by astro-imagers for astro-imagers, it has, for many, replaced the generic graphics editors as the software of choice. With clear instructions from Keller, astrophotographers can get the most from its tools to create amazing images. Capable of complex post-processing routines, PixInsight is also an advanced pre-processing software, through which astrophotographers calibrate and stack their exposures into completed master files. Although it is extremely powerful, PixInsight has been inadequately documented in print – until now. With screenshots to help illustrate the process, it is a vital guide.

  • Evaluating Learning Algorithms: A Classification Perspective

    Evaluating Learning Algorithms: A Classification Perspective
    Nathalie Japkowicz

    The field of machine learning has matured to the point where many sophisticated learning approaches can be applied to practical applications. Thus it is of critical importance that researchers have the proper tools to evaluate learning approaches and understand the underlying issues. This book examines various aspects of the evaluation process with an emphasis on classification algorithms. The authors describe several techniques for classifier performance assessment, error estimation and resampling, obtaining statistical significance as well as selecting appropriate domains for evaluation. They also present a unified evaluation framework and highlight how different components of evaluation are both significantly interrelated and interdependent. The techniques presented in the book are illustrated using R and WEKA, facilitating better practical insight as well as implementation. Aimed at researchers in the theory and applications of machine learning, this book offers a solid basis for conducting performance evaluations of algorithms in practical settings.
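
    As a hedged illustration of one evaluation technique discussed above, the sketch below estimates classification error with k-fold cross-validation, written in plain Python/NumPy rather than the R and WEKA used in the book; the data, the nearest-centroid classifier and k = 5 are all invented for the example.

      # k-fold cross-validation sketch with a nearest-centroid classifier on
      # synthetic two-class data (all of it invented for illustration).
      import numpy as np

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
      y = np.array([0] * 50 + [1] * 50)

      def nearest_centroid_error(X_tr, y_tr, X_te, y_te):
          """Train a nearest-centroid classifier and return its test error."""
          centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
          pred = np.argmin(((X_te[:, None, :] - centroids) ** 2).sum(-1), axis=1)
          return float((pred != y_te).mean())

      k = 5
      folds = np.array_split(rng.permutation(len(y)), k)
      errors = []
      for i in range(k):
          test = folds[i]
          train = np.hstack([folds[j] for j in range(k) if j != i])
          errors.append(nearest_centroid_error(X[train], y[train], X[test], y[test]))

      # Report the resampling-based error estimate and its spread over folds.
      print("estimated error: %.3f +/- %.3f" % (np.mean(errors), np.std(errors)))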

  • Domain Adaptation in Computer Vision Applications

    Domain Adaptation in Computer Vision Applications
    Gabriela Csurka

    This comprehensive text/reference presents a broad review of diverse domain adaptation (DA) methods for machine learning, with a focus on solutions for visual applications. The book collects together solutions and perspectives proposed by an international selection of pre-eminent experts in the field, addressing not only classical image categorization, but also other computer vision tasks such as detection, segmentation and visual attributes. Topics and features: surveys the complete field of visual DA, including shallow methods designed for homogeneous and heterogeneous data as well as deep architectures; presents a positioning of the dataset bias in the CNN-based feature arena; proposes detailed analyses of popular shallow methods that addresses landmark data selection, kernel embedding, feature alignment, joint feature transformation and classifier adaptation, or the case of limited access to the source data; discusses more recent deep DA methods, including discrepancy-based adaptation networks and adversarial discriminative DA models; addresses domain adaptation problems beyond image categorization, such as a Fisher encoding adaptation for vehicle re-identification, semantic segmentation and detection trained on synthetic images, and domain generalization for semantic part detection; describes a multi-source domain generalization technique for visual attributes and a unifying framework for multi-domain and multi-task learning. This authoritative volume will be of great interest to a broad audience ranging from researchers and practitioners, to students involved in computer vision, pattern recognition and machine learning.

  • 3D Computer Vision: Efficient Methods and Applications, Edition 2

    3D Computer Vision: Efficient Methods and Applications, Edition 2
    Christian Wöhler

    This indispensable text introduces the foundations of three-dimensional computer vision and describes recent contributions to the field. Fully revised and updated, this much-anticipated new edition reviews a range of triangulation-based methods, including linear and bundle adjustment based approaches to scene reconstruction and camera calibration, stereo vision, point cloud segmentation, and pose estimation of rigid, articulated, and flexible objects. Also covered are intensity-based techniques that evaluate the pixel grey values in the image to infer three-dimensional scene structure, and point spread function based approaches that exploit the effect of the optical system. The text shows how methods which integrate these concepts are able to increase reconstruction accuracy and robustness, describing applications in industrial quality inspection and metrology, human-robot interaction, and remote sensing. Practitioners of computer vision, photogrammetry, optical metrology, robotics and planetary science will find the book an essential reference. Examines three-dimensional surface reconstruction of strongly non-Lambertian surfaces by the combination of photometric stereo and active range scanning, with applications to industrial metrology (NEW). Discusses pose estimation and tracking of human body parts, and subsequent recognition of actions performed in a complex industrial production environment, in the context of safe interaction between humans and industrial robots (NEW). Reviews the construction of high-resolution lunar digital elevation models based on orbital imagery in combination with laser altimetry data, including a discussion of the latest lunar spacecraft data sets (NEW).

  • Computer Vision for Visual Effects

    Computer Vision for Visual Effects
    Richard J. Radke

    Modern blockbuster movies seamlessly introduce impossible characters and action into real-world settings using digital visual effects. These effects are made possible by research from the field of computer vision, the study of how to automatically understand images. Computer Vision for Visual Effects will educate students, engineers and researchers about the fundamental computer vision principles and state-of-the-art algorithms used to create cutting-edge visual effects for movies and television. The author describes classical computer vision algorithms used on a regular basis in Hollywood (such as blue screen matting, structure from motion, optical flow and feature tracking) and exciting recent developments that form the basis for future effects (such as natural image matting, multi-image compositing, image retargeting and view synthesis). He also discusses the technologies behind motion capture and three-dimensional data acquisition. More than 200 original images demonstrating principles, algorithms and results, along with in-depth interviews with Hollywood visual effects artists, tie the mathematical concepts to real-world filmmaking.

  • Intelligent Technologies for Interactive Entertainment: Third International Conference, INTETAIN 2009, Amsterdam, The Netherlands, June 22-24, 2009, Proceedings

    Intelligent Technologies for Interactive Entertainment: Third International Conference, INTETAIN 2009, Amsterdam, The Netherlands, June 22-24, 2009, Proceedings
    Anton Nijholt

    This book constitutes the proceedings of the 3rd International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN 2009). The papers focus on topics such as emergent games, exertion interfaces and embodied interaction. Further topics are affective user interfaces, storytelling, sensors, tele-presence in entertainment, animation, edutainment, and interactive art.

  • Sparse Representations and Compressive Sensing for Imaging and Vision

    Sparse Representations and Compressive Sensing for Imaging and Vision
    Vishal M. Patel

    Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal. The number of measurements is usually much smaller than the number of samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways. In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.
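    As a rough illustration of the measure-then-recover idea described above (not taken from the book), the following Python sketch measures a sparse signal with a random matrix and recovers it by iterative soft-thresholding; the signal length, sparsity level, regularization weight and step size are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 5                     # signal length, measurements, sparsity
      x = np.zeros(n)
      x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

      A = rng.standard_normal((m, n)) / np.sqrt(m)   # non-adaptive random measurements
      y = A @ x                                      # m << n linear measurements

      # Non-linear recovery by iterative soft-thresholding (ISTA) on a lasso objective
      step = 1.0 / np.linalg.norm(A, 2) ** 2
      lam = 0.01
      x_hat = np.zeros(n)
      for _ in range(500):
          r = x_hat - step * (A.T @ (A @ x_hat - y))                    # gradient step
          x_hat = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)  # soft threshold

      print("relative reconstruction error:",
            np.linalg.norm(x_hat - x) / np.linalg.norm(x))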

  • Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares

    Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares
    Stephen Boyd

    This groundbreaking textbook combines straightforward explanations with a wealth of practical examples to offer an innovative approach to teaching linear algebra. Requiring no prior knowledge of the subject, it covers the aspects of linear algebra – vectors, matrices, and least squares – that are needed for engineering applications, discussing examples across data science, machine learning and artificial intelligence, signal and image processing, tomography, navigation, control, and finance. The numerous practical exercises throughout allow students to test their understanding and translate their knowledge into solving real-world problems, with lecture slides, additional computational exercises in Julia and MATLAB, and data sets accompanying the book online at https://web.stanford.edu/~boyd/vmls/. Suitable for both one-semester and one-quarter courses, as well as self-study, this self-contained text provides beginning students with the foundation they need to progress to more advanced study.
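    The book's own computational exercises are in Julia and MATLAB; as a hedged Python analogue of the least-squares fitting named in the title, one might write something like the sketch below (the quadratic model and noise level are assumptions for illustration, not material from the book).

      import numpy as np

      # Least squares: choose theta to minimize ||A @ theta - y||^2
      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 1.0, 50)
      A = np.column_stack([np.ones_like(t), t, t ** 2])   # design matrix: 1, t, t^2
      theta_true = np.array([0.5, -2.0, 3.0])
      y = A @ theta_true + 0.05 * rng.standard_normal(t.size)   # noisy observations

      theta_hat, residual, rank, _ = np.linalg.lstsq(A, y, rcond=None)
      print("estimated coefficients:", theta_hat)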

  • Computer Vision: Algorithms and Applications

    Computer Vision: Algorithms and Applications
    Richard Szeliski

    Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of “recipes,” this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter, with a heavy emphasis on testing algorithms, and contains numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

  • Learning to Rank for Information Retrieval

    Learning to Rank for Information Retrieval
    Tie-Yan Liu

    Due to the fast growth of the Web and the difficulties in finding desired information, efficient and effective information retrieval systems have become more important than ever, and the search engine has become an essential tool for many people. The ranker, a central component in every search engine, is responsible for the matching between processed queries and indexed documents. Because of its central role, great attention has been paid to the research and development of ranking technologies. In addition, ranking is also pivotal for many other information retrieval applications, such as collaborative filtering, definition ranking, question answering, multimedia retrieval, text summarization, and online advertisement. Leveraging machine learning technologies in the ranking process has led to innovative and more effective ranking models, and eventually to a completely new research area called “learning to rank”. Liu first gives a comprehensive review of the major approaches to learning to rank. For each approach he presents the basic framework, with example algorithms, and he discusses its advantages and disadvantages. He continues with some recent advances in learning to rank that cannot be simply categorized into the three major approaches – these include relational ranking, query-dependent ranking, transfer ranking, and semisupervised ranking. His presentation is completed by several examples that apply these technologies to solve real information retrieval problems, and by theoretical discussions on guarantees for ranking performance. This book is written for researchers and graduate students in both information retrieval and machine learning. They will find here the only comprehensive description of the state of the art in a field that has driven the recent advances in search engine development.

  • Introduction to Image Processing Using R: Learning by Examples

    Introduction to Image Processing Using R: Learning by Examples
    Alejandro C. Frery

    This book introduces the statistical software R to the image processing community in an intuitive and practical manner. R brings interesting statistical and graphical tools which are important and necessary for image processing techniques. Furthermore, it has been proved in the literature that R is among the most reliable, accurate and portable statistical software available. Both the theory and practice of R code concepts and techniques are presented and explained, and the reader is encouraged to try their own implementation to develop faster, optimized programs. Those who are new to the field of image processing and to R software will find this work a useful introduction. By reading the book alongside an active R session, the reader will experience an exciting journey of learning and programming.

  • The Image Processing Handbook: Edition 6

    The Image Processing Handbook: Edition 6
    John C. Russ

    Whether images are obtained by microscopes, space probes, or the human eye, the same basic tools can be applied to acquire, process, and analyze the data they contain. Ideal for self-study, The Image Processing Handbook, Sixth Edition, first published in 1992, raises the bar once again as the gold-standard reference on this subject. Using extensive new illustrations and diagrams, it offers a logically organized exploration of the important relationship between 2D images and the 3D structures they reveal, and provides hundreds of visual examples in full color. The author focuses on helping readers visualize and compare processing and measurement operations and how they are typically combined in fields ranging from microscopy and astronomy to real-world scientific, industrial, and forensic applications. Presenting methods in the order in which they would be applied in a typical workflow, from acquisition to interpretation, this book compares a wide range of algorithms used to: improve the appearance, printing, and transmission of an image; prepare images for measurement of the features and structures they reveal; isolate objects and structures, and measure their size, shape, color, and position; correct defects and deal with limitations in images; and enhance visual content and interpretation of details. This handbook avoids dense mathematics, instead using new practical examples that better convey essential principles of image processing. This approach is more useful for developing readers’ grasp of how and why to apply processing techniques and, ultimately, of the mathematical foundations behind them. Much more than just an arbitrary collection of algorithms, this is the rare book that goes beyond mere image improvement, presenting a wide range of powerful example images that illustrate techniques involved in color processing and enhancement. Applying his 50 years of experience as a scientist, educator, and industrial consultant, John Russ offers the benefit of his image processing expertise for fields ranging from astronomy and biomedical research to food science and forensics. His valuable insights and guidance continue to make this handbook a must-have reference.
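    As one hedged illustration of the "isolate objects and measure their size, shape, and position" step listed above (the handbook itself avoids code, and this OpenCV/Python sketch, its synthetic blobs, and its threshold value are assumptions of mine, not the author's method):

      import cv2
      import numpy as np

      # Synthetic image with two bright blobs standing in for acquired data
      img = np.zeros((200, 200), np.uint8)
      cv2.circle(img, (50, 60), 20, 255, -1)
      cv2.circle(img, (140, 120), 30, 255, -1)

      # Isolate objects: global threshold, then label connected components
      _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
      n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

      # Measure size and position of each object (label 0 is the background)
      for i in range(1, n_labels):
          area = stats[i, cv2.CC_STAT_AREA]
          cx, cy = centroids[i]
          print(f"object {i}: area = {area} px, centroid = ({cx:.1f}, {cy:.1f})")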

  • Computer Vision with OpenCV 3 and Qt5: Build visually appealing, multithreaded, cross-platform computer vision applications

    Computer Vision with OpenCV 3 and Qt5: Build visually appealing, multithreaded, cross-platform computer vision applications
    Amin Ahmadi Tazehkandi

    Blend the power of Qt with OpenCV to build cross-platform computer vision applications. Key Features: ● Start creating robust applications with the power of OpenCV and Qt combined ● Learn from scratch how to develop cross-platform computer vision applications ● Accentuate your OpenCV applications by developing them with Qt. Book Description: Developers have been using the OpenCV library to develop computer vision applications for a long time. However, they now need a more effective tool to get the job done in a better, more modern way. Qt is one of the major frameworks available for this task at the moment. This book will teach you how to combine OpenCV 3 and Qt5 to create cross-platform computer vision applications. We’ll begin by introducing Qt, its IDE, and its SDK. Next you’ll learn how to use the OpenCV API to integrate both tools, and see how to configure Qt to use OpenCV. You’ll go on to build a full-fledged computer vision application throughout the book. Later, you’ll create a stunning UI application using Qt Widgets, in which you’ll display images efficiently after they are processed. At the end of the book, you’ll learn how to convert OpenCV Mat to Qt QImage. You’ll also see how to efficiently process images to filter them, transform them, detect or track objects, and analyze video. You’ll become better at developing OpenCV applications. What you will learn: ● Get an introduction to Qt IDE and SDK ● Be introduced to OpenCV and see how to communicate between OpenCV and Qt ● Understand how to create UI using Qt Widgets ● Learn to develop cross-platform applications using OpenCV 3 and Qt 5 ● Explore the multithreaded application development features of Qt5 ● Improve OpenCV 3 application development using Qt5 ● Build, test, and deploy Qt and OpenCV apps, either dynamically or statically ● See Computer Vision technologies such as filtering and transformation of images, detecting and matching objects, template matching, object tracking, video and motion analysis, and much more ● Be introduced to QML and Qt Quick for iOS and Android application development. Who this book is for: This book is for readers interested in building computer vision applications. Intermediate knowledge of C++ programming is expected. No prior knowledge of Qt5 or OpenCV 3 is assumed, but familiarity with either framework will help.
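    The book itself works in C++ with Qt; purely to sketch the kinds of OpenCV operations listed above (filtering, geometric transformation, simple edge detection), here is a rough Python equivalent using the same OpenCV calls. The file names and parameter values are placeholders of mine, not material from the book.

      import cv2

      img = cv2.imread("input.jpg")                    # placeholder input path
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

      blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # filtering
      edges = cv2.Canny(blurred, 50, 150)              # simple edge detection

      h, w = img.shape[:2]
      M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)
      rotated = cv2.warpAffine(img, M, (w, h))         # geometric transformation

      cv2.imwrite("edges.png", edges)
      cv2.imwrite("rotated.png", rotated)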

  • Crime Scene Photography

    Crime Scene Photography
    Edward M. Robinson

    Crime Scene Photography is a book wrought from years of experience, with material carefully selected for ease of use and effectiveness in training, and field tested by the author in his role as a Forensic Services Supervisor for the Baltimore County Police Department. While there are many books on non-forensic photography, none of them adequately adapt standard image-taking to crime scene photography. The forensic photographer, or more specifically the crime scene photographer, must know how to create an acceptable image that is capable of withstanding challenges in court. This book blends the practical functions of crime scene processing with theories of photography to guide the reader in acquiring the skills, knowledge and ability to render reliable evidence. Required reading by the IAI Crime Scene Certification Board for all levels of certification; contains over 500 photographs; covers the concepts and principles of photography as well as the "how to" of creating a final product; and includes end-of-chapter exercises.

  • Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology

    Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology
    Dan Gusfield

    String algorithms are a traditional area of study in computer science. In recent years their importance has grown dramatically with the huge increase of electronically stored text and of molecular sequence data (DNA or protein sequences) produced by various genome projects. This book is a general text on computer algorithms for string processing. In addition to pure computer science, the book contains extensive discussions on biological problems that are cast as string problems, and on methods developed to solve them. It emphasises the fundamental ideas and techniques central to today's applications. New approaches to this complex material simplify methods that up to now have been for the specialist alone. With over 400 exercises to reinforce the material and develop additional topics, the book is suitable as a text for graduate or advanced undergraduate students in computer science, computational biology, or bio-informatics. Its discussion of current algorithms and techniques also makes it a reference for professionals.

  • An Introduction to Kolmogorov Complexity and Its Applications: Edition 3

    An Introduction to Kolmogorov Complexity and Its Applications: Edition 3
    Ming Li

    “The book is outstanding and admirable in many respects. … is necessary reading for all kinds of readers from undergraduate students to top authorities in the field.” (Journal of Symbolic Logic) Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Löf tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Also included are numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, Kolmogorov–Loveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others.

  • On-Chip Photonic Interconnects: A Computer Architect’s Perspective

    On-Chip Photonic Interconnects: A Computer Architect’s Perspective
    Christopher J. Nitta

    As the number of cores on a chip continues to climb, architects will need to address both bandwidth and power consumption issues related to the interconnection network. Electrical interconnects are not likely to scale well to a large number of processors for energy efficiency reasons, and the problem is compounded by the fact that there is a fixed total power budget for a die, dictated by the amount of heat that can be dissipated without special (and expensive) cooling and packaging techniques. Thus, there is a need to seek alternatives to electrical signaling for on-chip interconnection applications. Photonics, which has a fundamentally different mechanism of signal propagation, offers the potential to not only overcome the drawbacks of electrical signaling, but also enable the architect to build energy efficient, scalable systems. The purpose of this book is to introduce computer architects to the possibilities and challenges of working with photons and designing on-chip photonic interconnection networks.

  • Quad Rotorcraft Control: Vision-Based Hovering and Navigation

    Quad Rotorcraft Control: Vision-Based Hovering and Navigation
    Luis Rodolfo García Carrillo

    Quad Rotorcraft Control develops original control methods for the navigation and hovering flight of an autonomous mini-quad-rotor robotic helicopter. These methods use an imaging system and a combination of inertial and altitude sensors to localize and guide the movement of the unmanned aerial vehicle relative to its immediate environment. The history, classification and applications of UAVs are introduced, followed by a description of modelling techniques for quad-rotors and the experimental platform itself. A control strategy for the improvement of attitude stabilization in quad-rotors is then proposed and tested in real-time experiments. The strategy, based on the use of low-cost components and with experimentally established robustness, avoids drift in the UAV’s angular position by the addition of an internal control loop to each electronic speed controller ensuring that, during hovering flight, all four motors turn at almost the same speed. With the quad-rotor’s Euler angles kept very close to the origin, other sensors like GPS or image-sensing equipment can be incorporated to perform autonomous positioning or trajectory-tracking tasks. Two vision-based strategies, each designed to deal with a specific kind of mission, are introduced and separately tested. The first stabilizes the quad-rotor over a landing pad on the ground; it extracts the 3-dimensional position using homography estimation and derives translational velocity by optical flow calculation. The second combines colour-extraction and line-detection algorithms to control the quad-rotor’s 3-dimensional position and achieves forward velocity regulation during a road-following task. In order to estimate the translational-dynamical characteristics of the quad-rotor (relative position and translational velocity) as they evolve within a building or other unstructured, GPS-deprived environment, imaging, inertial and altitude sensors are combined in a state observer. The text gives the reader a current view of the problems encountered in UAV control, specifically those relating to quad-rotor flying machines, and will interest researchers and graduate students working in that field. The vision-based control strategies presented give the reader a better understanding of how an imaging system can be used to obtain the information required for the hovering and navigation tasks ubiquitous in rotored UAV operation.
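    A loose Python/OpenCV sketch of the two vision ingredients mentioned above, homography estimation and optical-flow-based motion, is given below; the point correspondences, synthetic frames, and parameters are assumptions for illustration and not the book's implementation.

      import cv2
      import numpy as np

      # Homography from matched points (e.g. landing-pad template corners vs. camera view);
      # in practice the correspondences would come from a feature matcher, with RANSAC
      # used to reject bad matches.
      pts_template = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
      pts_frame = np.float32([[12, 18], [108, 25], [103, 121], [9, 115]])
      H, _ = cv2.findHomography(pts_template, pts_frame)

      # Dense optical flow between two consecutive grayscale frames gives pixel motion,
      # from which translational velocity can be derived given altitude and frame rate.
      rng = np.random.default_rng(2)
      prev = cv2.GaussianBlur(rng.integers(0, 255, (120, 160), dtype=np.uint8), (9, 9), 0)
      curr = np.roll(prev, 3, axis=1)              # shifted copy mimics camera motion
      flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

      print("homography:\n", H)
      print("mean image motion (px/frame):", flow.reshape(-1, 2).mean(axis=0))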

  • Speech Spectrum Analysis

    Speech Spectrum Analysis
    Sean A. Fulop

    The accurate determination of the speech spectrum, particularly for short frames, is commonly pursued in diverse areas including speech processing, recognition, and acoustic phonetics. With this book the author makes the subject of spectrum analysis understandable to a wide audience, including those with a solid background in general signal processing and those without such background. In keeping with these goals, this is not a book that replaces or attempts to cover the material found in a general signal processing textbook. Some essential signal processing concepts are presented in the first chapter, but even there the concepts are presented in a generally understandable fashion as far as is possible. Throughout the book, the focus is on applications to speech analysis; mathematical theory is provided for completeness, but these developments are set off in boxes for the benefit of those readers with sufficient background. Other readers may proceed through the main text, where the key results and applications will be presented in general heuristic terms, and illustrated with software routines and practical "show-and-tell" discussions of the results. At some points, the book refers to and uses the implementations in the Praat speech analysis software package, which has the advantages that it is used by many scientists around the world, and it is free and open source software. At other points, special software routines have been developed and made available to complement the book, and these are provided in the Matlab programming language. If the reader has the basic Matlab package, he/she will be able to immediately implement the programs in that platform—no extra "toolboxes" are required.
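    The book's own routines use Praat and Matlab; as a bare-bones Python analogue of the short-frame spectrum estimation discussed above, one might window a frame and take its FFT as in the sketch below (the synthetic harmonic signal, frame length, and window choice are assumptions for illustration).

      import numpy as np

      fs = 16000                                   # sampling rate in Hz
      t = np.arange(0, 0.5, 1.0 / fs)
      # Crude vowel-like test signal: harmonics of a 120 Hz fundamental
      signal = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 20))

      frame_len = 512                              # 32 ms analysis frame at 16 kHz
      frame = signal[1000:1000 + frame_len] * np.hamming(frame_len)  # windowed short frame

      spectrum = np.fft.rfft(frame)
      freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
      power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
      print(f"strongest component near {freqs[np.argmax(power_db)]:.0f} Hz")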

  • Inside PixInsight: Edition 2

    Inside PixInsight: Edition 2
    Warren A. Keller

    PixInsight has taken the astro-imaging world by storm. As the first comprehensive postprocessing platform to be created by astro-imagers for astro-imagers, it has for many replaced other generic graphics editors as the software of choice. PixInsight has been embraced by professionals such as the James Webb (and Hubble) Space Telescope's science imager Joseph DePasquale and Calar Alto's Vicent Peris, as well as thousands of amateurs around the world. While PixInsight is extremely powerful, very little has been printed on the subject. The first edition of this book broke that mold, offering a comprehensive look into the software’s capabilities. This second edition expands on the several new processes added to the PixInsight platform since that time, detailing and demonstrating each one with a now-expanded workflow. Addressing topics such as PhotometricColorCalibration, Large-Scale Pixel Rejection, LocalNormalization and a host of other functions, this text remains the authoritative guide to PixInsight.

  • Introduction to Pattern Recognition: A Matlab Approach

    Introduction to Pattern Recognition: A Matlab Approach
    Sergios Theodoridis

    Introduction to Pattern Recognition: A Matlab Approach is an accompanying manual to Theodoridis/Koutroumbas' Pattern Recognition. It includes Matlab code of the most common methods and algorithms in the book, together with a descriptive summary, solved examples, and real-life data sets in imaging and audio recognition. This text is designed for electronic engineering, computer science, computer engineering, biomedical engineering and applied mathematics students taking graduate courses on pattern recognition and machine learning, as well as R&D engineers and university researchers in image and signal processing/analysis and computer vision. Matlab code and descriptive summary of the most common methods and algorithms in Theodoridis/Koutroumbas, Pattern Recognition, Fourth Edition. Solved examples in Matlab, including real-life data sets in imaging and audio recognition. Available separately or at a special package price with the main text (ISBN for package: 978-0-12-374491-3).

  • Embedded Robotics: Mobile Robot Design and Applications with Embedded Systems

    Embedded Robotics: Mobile Robot Design and Applications with Embedded Systems
    Thomas Bräunl

    It all started with a new robot lab course I had developed to accompany my robotics lectures. We already had three large, heavy, and expensive mobile robots for research projects, but nothing simple and safe, which we could give to students to practice on for an introductory course. We selected a mobile robot kit based on an 8-bit controller, and used it for the first couple of years of this course. This gave students not only the enjoyment of working with real robots but, more importantly, hands-on experience with control systems, real-time systems, concurrency, fault tolerance, sensor and motor technology, etc. It was a very successful lab and was greatly enjoyed by the students. Typical tasks were, for example, driving straight, finding a light source, or following a leading vehicle. Since the robots were rather inexpensive, it was possible to furnish a whole lab with them and to conduct multi-robot experiments as well. Simplicity, however, had its drawbacks. The robot mechanics was unreliable, the sensors were quite poor, and extendability and processing power were very limited. What we wanted to use was a similar robot at an advanced level.

  • Cloudera Administration Handbook

    Cloudera Administration Handbook
    Rohit Menon

    An easy-to-follow Apache Hadoop administrator’s guide filled with practical screenshots and explanations for each step and configuration. This book is great for administrators interested in setting up and managing a large Hadoop cluster. If you are an administrator, or want to be an administrator, and you are ready to build and maintain a production-level cluster running CDH5, then this book is for you.