CSE 600 (Ongoing Research Seminar)



Time: Friday, February 7, 2003, 2:20pm

Canceled due to heavy snow.

Time: Wednesday, February 12, 2003, 1pm
Location: Computer Science Bldg 2311
Speaker: David Patterson, University of California, Berkeley
David Patterson joined the faculty at the University of California at Berkeley in 1977, where he now holds the Pardee Chair of Computer Science. He is a member of the National Academy of Engineering and is a fellow of both the ACM (Association for Computing Machinery) and the IEEE (Institute of Electrical and Electronics Engineers).

He led the design and implementation of RISC I, likely the first VLSI Reduced Instruction Set Computer. This research became the foundation of the SPARC architecture, used by Sun Microsystems and others. He was a leader, along with Randy Katz, of the Redundant Arrays of Inexpensive Disks project (or RAID), which led to reliable storage systems from many companies. He is co-author of five books, including two with John Hennessy, who is now President of Stanford University. Patterson has been chair of the CS division at Berkeley, the ACM SIG in computer architecture, and the Computing Research Association.

His teaching has been honored by the ACM, the IEEE, and the University of California. Patterson shared the 1999 IEEE Reynold Johnson Information Storage Award with Randy Katz for the development of RAID and shared the 2000 IEEE von Neumann medal with John Hennessy for "creating a revolution in computer architecture through their exploration, popularization, and commercialization of architectural innovations."

Abstract: Recovery Oriented Computing (ROC)

It is time to broaden our performance-dominated research agenda. A four order of magnitude increase in performance over 20 years means that few outside the CS&E research community believe that speed is the only problem of computer hardware and software. If we don't change our ways, our legacy may be cheap, fast, and flaky. Recovery Oriented Computing (ROC) takes the perspective that hardware faults, software bugs, and operator errors are facts to be coped with, not problems to be solved. By concentrating on Mean Time to Repair rather than Mean Time to Failure, ROC reduces recovery time and thus offers higher availability. Since a large portion of system administration is dealing with failures, ROC may also reduce total cost of ownership. ROC principles include design for fast recovery, extensive error detection and diagnosis, systematic error insertion to test emergency systems, and recovery benchmarks to measure progress. If we embrace availability and maintainability, systems of the future may compete on recovery performance rather than just processor performance, and on total cost of ownership rather than just system price. Such a change may restore our pride in the systems we craft.
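
The MTTR argument rests on the standard availability relationship, availability = MTTF / (MTTF + MTTR). A minimal Python sketch with hypothetical numbers shows why shrinking repair time alone can add "nines" of availability even when failures occur just as often:

# Illustrative (hypothetical) numbers: availability = MTTF / (MTTF + MTTR).
# Cutting repair time improves availability even when failures are no rarer.
def availability(mttf_hours, mttr_hours):
    """Steady-state availability from mean time to failure and mean time to repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

baseline = availability(mttf_hours=1000, mttr_hours=10)   # ~0.9901 ("two nines")
fast_fix = availability(mttf_hours=1000, mttr_hours=0.1)  # ~0.9999 ("four nines")
print(f"baseline: {baseline:.4f}, with 100x faster recovery: {fast_fix:.6f}")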

Time: Friday, February 21, 2003, 2pm
Location: Computer Science Bldg 2311
Speaker: Pat Hanrahan, Stanford

Pat Hanrahan is the CANON USA Professor of Computer Science and Electrical Engineering at Stanford University, where he teaches computer graphics. His current research involves visualization, image synthesis, and graphics systems and architectures. Before joining Stanford he was a faculty member at Princeton. He has also worked at Pixar, where he developed volume rendering software and was the chief architect of the RenderMan(TM) Interface - a protocol that allows modeling programs to describe scenes to high-quality rendering programs. Before Pixar, he directed the 3D computer graphics group in the Computer Graphics Laboratory at New York Institute of Technology. Professor Hanrahan has received three university teaching awards. He has received an Academy Award for Science and Technology, the Spirit of America Creativity Award, the SIGGRAPH Computer Graphics Achievement Award, and was recently elected to the National Academy of Engineering.

Abstract: Why is Graphics Hardware so Fast?

Recently NVIDIA has claimed that their graphics processors (or GPUs) are improving at a rate three times faster than Moore's Law for processors. A $25 GPU is rated from 50-100 gigaflops and approximately 1 teraop (8-bit ops). Alongside this increase in performance is new functionality. The most recent innovation is user-programmable vertex and fragment stages that allow GPUs to compute a wide range of new visual effects enabling movie-quality games. Announced chips have as many as 200 programmable floating point units operating in parallel. The result is that the latest generation of commodity graphics and game chips are powerful data-parallel computers. Why are these graphics processors so fast? Will the future performance of GPUs continue to increase faster than CPUs? And, if so, what are the implications for computing?
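
To make the claim of improving three times faster than Moore's Law concrete, a short sketch (with assumed doubling periods, not numbers from the talk) compares a CPU curve doubling every 18 months with a GPU curve doubling every 6 months:

# Hypothetical doubling periods: CPU performance doubling every 18 months
# (a common reading of Moore's Law) versus a GPU curve doubling every 6 months.
def growth(years, doubling_months):
    return 2 ** (12 * years / doubling_months)

for years in (1, 3, 5):
    cpu = growth(years, 18)
    gpu = growth(years, 6)
    print(f"after {years} yr: CPU x{cpu:.1f}, GPU x{gpu:.1f}, gap x{gpu / cpu:.1f}")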

Time: Wednesday, February 26, 2003, 2pm
Location: Computer Science Bldg 2311
Speaker: Daniel N. Jackson, MIT

Abstract:  Dependability by Design


Time: Friday, March 7, 2003, 2pm
Location: SAC Auditorium
Speaker: Stephen Wolfram, Wolfram Research, Inc.

Abstract:  A New Kind of Science



Time: Friday, March 14, 2003, 2pm
Location: Computer Science Bldg 2311
Speaker: Guizhen Yang

http://www.cs.sunysb.edu/~guizyang/

Abstract:  Semantic Web Information Processing:  from Semistructured Data to Structural Knowledge

The vision of the Semantic Web is to define and share machine
processable data on the Web which will enable a variety of automated
tasks ranging from information search to data integration to content
management to Web services. This talk will present our approach to
realizing the Semantic Web vision, by addressing two fundamental
issues: (1) creation of semantic content by transforming unstructured
Web documents into structured data; (2) infrastructure for reasoning
with semantically enriched data.

In the first part of the talk, I will focus on creation of semantic
content from Web documents. Specifically, I will describe novel
techniques for data extraction from Web documents that exhibit a high
degree of precision and recall. The theory behind these techniques is
based on the concept of unambiguity in automatic learning of
extraction patterns and the notion of resilience to changes in Web
documents. I will present complexity results and efficient algorithms
for learning unambiguous and resilient extraction patterns, as well as
experimental results to demonstrate the effectiveness of these
techniques in practice.
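
As a toy illustration only, and not the learning algorithm from the talk, an extraction pattern can be viewed as a left/right context pair induced from a labeled page and then reapplied to other pages; an unambiguous pattern is one whose context picks out the labeled target and nothing else. The page fragments and helper names below are made up:

import re

# Toy illustration: induce a (prefix, suffix) context pattern from one labeled
# example page, then apply it to other pages. Pages and names are invented.
def induce_pattern(page: str, target: str, ctx: int = 8):
    i = page.index(target)
    return page[max(0, i - ctx):i], page[i + len(target):i + len(target) + ctx]

def extract(page: str, pattern):
    prefix, suffix = pattern
    m = re.search(re.escape(prefix) + r"(.*?)" + re.escape(suffix), page, re.S)
    return m.group(1) if m else None

labeled = "<tr><td>Price:</td><td>$19.99</td></tr>"
pattern = induce_pattern(labeled, "$19.99")
print(extract("<tr><td>Price:</td><td>$4.50</td></tr>", pattern))  # -> $4.50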

In the second part of the talk, I will deal with infrastructure for
reasoning with semantically enriched data. I will present my work on
the design and implementation of Flora-2. Flora-2 unifies the
well-known F-logic, HiLog, and Transaction Logic into one coherent
rule-based, object-oriented knowledge representation system. I will
discuss the engineering issues of language and compiler design,
system architecture, and query optimization, as well as the theoretical
issues related to the new semantics and algorithms for nonmonotonic
multiple value and code inheritance.
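
For flavor only, the following toy Python sketch mimics the frame-style rules of an F-logic system, deriving attribute values for an object through its class hierarchy by naive forward chaining; it does not reflect Flora-2's actual syntax, nonmonotonic inheritance semantics, or implementation:

# Toy illustration: frame facts (object, attribute, value) plus class
# membership, saturated by two simple inheritance rules.
facts = {("fido", "isa", "dog"), ("dog", "sub", "animal"),
         ("animal", "legs", 4), ("dog", "sound", "bark")}

def saturate(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (o, r1, c) in facts:
            if r1 != "isa":
                continue
            # value inheritance: ?O[?A -> ?V] :- ?O isa ?C, ?C[?A -> ?V]
            for (c2, a, v) in facts:
                if c2 == c and a not in ("isa", "sub"):
                    new.add((o, a, v))
            # class hierarchy: ?O isa ?D :- ?O isa ?C, ?C sub ?D
            for (c2, r2, d) in facts:
                if c2 == c and r2 == "sub":
                    new.add((o, "isa", d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

print(sorted(t for t in saturate(facts) if t[0] == "fido"))
# -> ('fido','isa','animal'), ('fido','isa','dog'), ('fido','legs',4), ('fido','sound','bark')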

Flora-2 (and its predecessor Flora-1) has been used in a variety of
application domains, ranging from Web agents to information
integration in bioinformatics to ontology management to building CASE
systems. Since its last alpha-release less than a year ago it has had
hundreds of downloads and a small community of devoted users. A beta
release is planned in the near future. The source code of Flora-2 is
freely available at http://flora.sourceforge.net/.

At the end of the talk I will outline ongoing and future research on
the Flora-2 system, tree pattern query aggregation, mining semantic
structures of Web documents, and security policy management.

Spring Break
Time: Friday, March 28, 2003, 2pm
Location: Computer Science Bldg 1306
Speaker: Amanda Stent
Amanda Stent received her PhD in computer science from the University of Rochester, where she worked with James Allen on the TRIPS dialogue system. She has been on the faculty in the CS department at Stony Brook since January 2002.  Her areas of research interest include:  natural language and multimodal generation for dialogue, spoken dialogue systems, theories of discourse, and multimedia information extraction.

Abstract:  Evaluating user-tailored conversational interfaces

Conversational interfaces (otherwise known as dialogue systems) are human-computer interfaces where language (speech) is used as the
predominant modality.  Today, different types of conversational interface are in commercial use in the telecommunications, travel, entertainment, military, and education fields.  However, most existing conversational interfaces adapt to the user only minimally.  If we can make the computer more adaptive in its output, human-computer interaction will improve and a wider user base will develop for this type of application.  However, how do we know what adaptive behaviors are useful?  And how can we measure the success of adaptive behaviors?

In this talk, I will first discuss some commonly-used experimental methods for evaluating dialogue systems.  I will describe some work I have been engaged in with researchers at AT&T, in the area of designing and evaluating a multimodal conversational interface that can adapt one aspect of its output to particular users.  I will then describe some work my students and I are currently engaged in that will support a range of experiments into different types of system adaptation.


Time: Friday, April 4, 2003, 2pm
Location: Computer Science Bldg 1306
Speaker: Isidro Ramos

Prof. Dr. Isidro Ramos Salavert is a Full Professor in the Computing
Systems Department, Technical University of Valencia (Spain).
He was President of the University of Castilla-La Mancha and later
President of its Board of Trustees. Prior to that he was on the faculty of
Universidad Complutense de Madrid, the University of Valencia, the
University of the Basque Country, and the University of Nancy (France).
The research interests of
Dr. Ramos include Object Oriented Conceptual Modeling and foundations of
object-oriented systems.

Abstract: Model Compilers: The OASIS Approach to Automated Software Development

Industrial and academic research has led to several object-oriented methods
for system development. However, most of these methods do not have the
mechanisms for identifying and specifying user requirements and for testing
and validating these requirements before, during, and after
development. Model compilers are emerging as a cost-effective solution for
producing quality software. In this approach, high-level abstract models
(with a visual, UML-like notation) are directly compiled into complete
applications in standard languages such as C++ or VB. This automated
approach leads both to a productivity gain and to better quality of the
software produced. In this talk we will present our work on an
industrial-strength model compiler, called OASIS, which we have developed
in the past several years.
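
The model-compiler idea can be illustrated with a deliberately tiny sketch (not the OASIS tool itself): a declarative class model, of the kind a visual notation would capture, is translated mechanically into C++ skeleton code. The model contents and field names are invented for the example:

# Toy illustration of the model-compiler idea: a declarative class model
# is translated mechanically into C++ class skeletons.
model = {
    "Account": {"attributes": {"balance": "double", "owner": "std::string"},
                "services": ["deposit", "withdraw"]},
}

def compile_model(model):
    out = []
    for cls, spec in model.items():
        out.append(f"class {cls} {{\npublic:")
        for name, ctype in spec["attributes"].items():
            out.append(f"    {ctype} {name};")
        for svc in spec["services"]:
            out.append(f"    void {svc}();")
        out.append("};\n")
    return "\n".join(out)

print(compile_model(model))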

Time: Friday, April 11, 2003, 2pm
Location: Computer Science Bldg 1306
Speaker:  Klaus Mueller

Klaus Mueller earned a BS degree in Electrical Engineering (University
of Ulm, Germany, '87), an MS degree in Biomedical Engineering (The Ohio
State University, '90), and a PhD degree in Computer Science (The Ohio
State University '98). His current research interests are computer
graphics, visualization, augmented reality, and medical imaging. He is a
recipient of an NSF CAREER award (2000) and has served as a program
co-chair at various conferences, such as the Volume Graphics Workshop
(2001, 2003) and the Symposium on Volume Visualization and Graphics (2002).

Abstract:
Point-Based Volume Rendering

Point-based surface rendering has recently come into vogue to replace,
or at least augment, the currently widespread polygonal rendering,
with the rationale being that small surface detail can be more
faithfully, and presumably also more efficiently, represented by
atomic points instead of many tiny polygons. This new trend, and the
associated hardware that may emerge along these lines, also gives a
great boost to point-based volume rendering, a popular rendering
technique widely known as Splatting. Volume rendering is attractive as
it considers the entire space filled by the object, and not just its
surface, which, however, leads to a higher computational
complexity. Splatting is an attractive rendering method as it provides
a natural compression of the dataset as well as great rendering
simplicity. In this talk, I will present some of my past and current
work on splatting, and I will also address some of the issues related
to point-based representations that are unique to volume rendering.
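
The core splatting step can be sketched in a few lines of Python (an idealized orthographic version with invented parameters, and no sorting, shading, or transfer function): each point sample is projected to the image plane and deposits a precomputed Gaussian footprint, rather than being sampled along rays:

import numpy as np

# Minimal splatting sketch: every non-empty voxel is projected to the image
# plane and deposits a small Gaussian "footprint" weighted by its value.
def splat(points, values, image_size=64, sigma=1.0, radius=3):
    img = np.zeros((image_size, image_size))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    footprint = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))   # precomputed kernel
    for (x, y, _z), v in zip(points, values):                # drop z: orthographic
        cx, cy = int(round(x)), int(round(y))
        if radius <= cx < image_size - radius and radius <= cy < image_size - radius:
            img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1] += v * footprint
        # (boundary splats skipped for brevity)
    return img

rng = np.random.default_rng(0)
pts = rng.uniform(10, 54, size=(500, 3))
print(splat(pts, np.ones(500)).max())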


Time: Tuesday, April 15, 2003, 2:30pm
Location: Computer Science Bldg 2311
Speaker:  Prasun Sinha

Prasun Sinha is currently with the Data Networking Research
Center at Bell Labs, Holmdel, New Jersey. He holds a PhD in Computer
Science from University of Illinois, Urbana-Champaign. His research
interests are in the area of wireless networking and mobile computing.

Abstract: Transport Layer Fairness on Wireless LANs

As wireless local area networks (WLANs) based on the IEEE 802.11
standard see increasing public deployment, it is important to ensure
that access to the network by different users remains fair. While
fairness issues in WLANs have been studied before, fairness between
upstream and downstream flows has not received much attention. In the
current standard, the protocol for accessing the shared medium is the
same for every device on a WLAN. As a result, the user terminals and
the access point get equal shares of the channel, irrespective of the
number of nodes served by the access point. This results in poor and
unfair throughput for downstream flows.
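
A back-of-the-envelope calculation makes the asymmetry concrete under an idealized model (saturated senders, the access point contending like any other station, one upstream flow per client); the numbers below are illustrative, not measurements from the talk:

# Idealized 802.11 DCF shares: the AP and each of the N client stations win
# roughly equal channel shares, so N downstream flows split the AP's one share.
def per_flow_shares(n_clients):
    per_station = 1.0 / (n_clients + 1)       # AP counts as one more contender
    upstream_per_flow = per_station            # one upstream flow per client
    downstream_per_flow = per_station / n_clients
    return upstream_per_flow, downstream_per_flow

for n in (2, 5, 10):
    up, down = per_flow_shares(n)
    print(f"N={n}: upstream flow {up:.3f}, downstream flow {down:.3f}, ratio {up / down:.0f}")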

Various proposed solutions for fair access to shared media are
capable of alleviating these problems. However, they all require
changing the MAC standard, which makes their deployment difficult. I
will present a deployable solution that can be implemented as a
sub-layer above the MAC layer for providing fair channel access to
all UDP flows. The protocol's effectiveness has been studied using
both ns2 simulations and an implementation over a WLAN.

In the case of TCP flows, the problem of unfairness is exacerbated by
TCP's closed-loop control. Four different regions of TCP unfairness
can be identified that depend on the buffer availability at the base
station, with some regions exhibiting significant unfairness of over
10 in terms of the throughput ratio between upstream and downstream
TCP flows. Through results obtained from extensive analysis,
simulation, and experimentation, I will explain the interaction
between the 802.11 MAC protocol and TCP. I will also present a simple
solution that can be implemented at the access point above the MAC
layer for ensuring that different TCP flows share the channel
equitably, irrespective of the available buffer at the base station.

Time: Friday, April 25, 2003, 2:00pm
Location: Computer Science Bldg 2311
Speaker:  Erran L. Li (Li Li)

Dr. Li Li received the B.E. degree in Automatic Control from Beijing
Polytechnic University in 1993, the M.E. in Pattern Recognition from the
Institute of Automation, Chinese Academy of Sciences in 1996, and the
Ph.D. in Computer Science from Cornell University in 2001. During his
graduate study at Cornell University, he worked as an intern at Microsoft
Research and Bell Labs, Lucent, and as a visiting student at the AT&T
Research Center at ICSI, Berkeley. He is presently a member of the
Networking Research Center at Bell Labs. His research interests are in
networking, with a focus on wireless networking and mobile computing.

Abstract: Topology Control and Routing in Multi-hop Wireless Ad Hoc Networks

An ad hoc network is a multi-hop wireless network with no fixed
infrastructure.  Rooftop networks and sensor networks are two existing
networks that can benefit from ad hoc networking technology.  Ad hoc
networks can be widely deployed in applications such as disaster relief,
tetherless classrooms, battlefield situations, and pervasive computing.  In
an ad hoc network, the topology can change rapidly as nodes move in and out
of each other's range, bandwidth is limited, and battery power is often a
significant constraint. In this talk, I will address these challenges. I
will first present a simple distributed algorithm where each node makes
local decisions about its transmission power and these local decisions
guarantee the global connectivity of the network, while reducing energy
consumption.  I will then motivate and describe a simple gossip-based ad hoc
routing protocol that is more efficient and robust than those previously
proposed in the literature.  These techniques make ad hoc networks
deployable in a wide variety of application scenarios.
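
The gossip idea can be sketched in a few lines (a toy model, not the exact protocol from the talk): each node that receives a route request re-broadcasts it with probability p instead of always re-broadcasting, so overhead drops while, for large enough p, the message still reaches nearly every node. The graph below is a random directed graph invented for the example:

import random

# Toy gossip-based flooding: forward with probability p instead of always.
def gossip_reach(adjacency, source, p, rng):
    heard, frontier = {source}, [source]
    while frontier:
        nxt = []
        for node in frontier:
            if node == source or rng.random() < p:   # source always forwards
                for nb in adjacency[node]:
                    if nb not in heard:
                        heard.add(nb)
                        nxt.append(nb)
        frontier = nxt
    return len(heard)

rng = random.Random(1)
n = 200
# random directed graph, average out-degree ~10 (directed links for brevity)
adjacency = {i: [j for j in range(n) if j != i and rng.random() < 0.05] for i in range(n)}
for p in (0.4, 0.7, 1.0):
    print(f"p={p}: reached {gossip_reach(adjacency, 0, p, rng)} of {n} nodes")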

GRC
Time: Friday, May 9, 2003, 2:00pm
Location: Computer Science Bldg 1306
Speaker:  Charles Wright

I am an undergraduate senior and will be graduating in May. I plan to
continue my studies at Stony Brook as a Ph.D. student in the fall. My
research interests are in improving file system and data security. I will
be presenting NCryptfs at the General Track of the USENIX Annual Technical
Conference.

Abstract: NCryptfs: A Secure and Convenient File System

Often, increased security comes at the expense of user convenience,
performance, or compatibility with other systems.  The right level of
security depends on specific site and user needs, which must be carefully
balanced.  We have designed and built a new cryptographic file system
called NCryptfs with the primary goal of allowing users to tailor the
level of security vs. convenience to fit their needs.  Some of the
features NCryptfs supports include multiple concurrent ciphers and
authentication methods, separate per-user name spaces, ad-hoc groups,
challenge-response authentication, and transparent process suspension and
resumption based on key validity.  Our Linux prototype works as a
stackable file system and can be used to secure any file system.
Performance evaluation of NCryptfs shows a minimal user-visible overhead.
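
The stackable-layer idea behind NCryptfs can be sketched with a toy Python wrapper that encrypts on write and decrypts on read, leaving the underlying file system unaware of the plaintext; the XOR "cipher", file path, and class name are stand-ins for illustration only, not what NCryptfs actually uses:

import os

# Toy sketch of a stackable encryption layer: the lower file system only ever
# sees ciphertext; XOR stands in for a real cipher purely for brevity.
class EncryptedFile:
    def __init__(self, path, key: bytes):
        self.path, self.key = path, key

    def _xor(self, data: bytes) -> bytes:
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))

    def write(self, plaintext: bytes):
        with open(self.path, "wb") as f:           # ciphertext hits the disk
            f.write(self._xor(plaintext))

    def read(self) -> bytes:
        with open(self.path, "rb") as f:
            return self._xor(f.read())             # decrypt on the way back up

f = EncryptedFile("/tmp/ncryptfs_demo.bin", key=b"secret")
f.write(b"attack at dawn")
print(open(f.path, "rb").read())    # ciphertext on disk
print(f.read())                     # b'attack at dawn'
os.remove(f.path)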



Last update on 5/20/2003
Please send any comments about this page to zliang@cs.sunysb.edu