In the preface of their book Foundations of Data Science, the authors state:
"Courses in theoretical computer science covered finite automata, regular expressions, context-free languages, and computability. In the 1970’s, the study of algorithms was added as an important component of theory. The emphasis was on making computers useful. Today, a fundamental change is taking place and the focus is more on a wealth of applications. There are many reasons for this change. The merging of computing and communications has played an important role. The enhanced ability to observe, collect, and store data in the natural sciences, in commerce, and in other fields calls for a change in our understanding of data and how to handle it in the modern setting. The emergence of the web and social networks as central aspects of daily life presents both opportunities and challenges for theory.
While traditional areas of computer science remain highly important, increasingly researchers of the future will be involved with using computers to understand and extract usable information from massive data arising in applications, not just how to make computers useful on specific well-defined problems. With this in mind we have written this book to cover the theory we expect to be useful in the next 40 years, just as an understanding of automata theory, algorithms, and related topics gave students an advantage in the last 40 years. One of the major changes is an increase in emphasis on probability, statistics, and numerical methods.
...
Modern data in diverse fields such as information processing, search, and machine learning is often advantageously represented as vectors with a large number of components"
Sounds like I am a student of the past, one who focused on the traditional topics of automata theory and algorithms. I have yet to catch up with the emerging opportunities the authors mention. Time to learn new things; it will be tricky to do on my own, but I am glad to see someone thinking about the next 40 years.
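To make the last remark about data "represented as vectors with a large number of components" concrete, here is a minimal sketch of my own (a toy bag-of-words example, not something taken from the book): each document becomes a vector with one component per vocabulary word, so even a tiny corpus already lives in a space with many dimensions.

# Toy illustration: text documents as high-dimensional count vectors.
# The sample documents and helper names below are my own, for illustration only.
from collections import Counter

docs = [
    "the web and social networks",
    "algorithms and automata theory",
    "machine learning on massive data",
]

# One dimension per distinct word across the whole corpus.
vocab = sorted({word for doc in docs for word in doc.split()})

def to_vector(doc):
    """Map a document to a word-count vector over the shared vocabulary."""
    counts = Counter(doc.split())
    return [counts[word] for word in vocab]

vectors = [to_vector(doc) for doc in docs]
print(len(vocab), "dimensions")  # grows with the vocabulary, not the document count
for doc, vec in zip(docs, vectors):
    print(vec, "<-", doc)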
"Courses in theoretical computer science covered finite automata, regular expressions, context-free languages, and computability. In the 1970’s, the study of algorithms was added as an important component of theory. The emphasis was on making computers useful. Today, a fundamental change is taking place and the focus is more on a wealth of applications. There are many reasons for this change. The merging of computing and communications has played an important role. The enhanced ability to observe, collect, and store data in the natural sciences, in commerce, and in other fields calls for a change in our understanding of data and how to handle it in the modern setting. The emergence of the web and social networks as central aspects of daily life presents both opportunities and challenges for theory.
While traditional areas of computer science remain highly important, increasingly researchers of the future will be involved with using computers to understand and extract usable information from massive data arising in applications, not just how to make computers useful on specific well-defined problems. With this in mind we have written this book to cover the theory we expect to be useful in the next 40 years, just as an under standing of automata theory, algorithms, and related topics gave students an advantage in the last 40 years. One of the major changes is an increase in emphasis on probability, statistics, and numerical methods.
...
Modern data in diverse fields such as information processing, search, and machine learning is often advantageously represented as vectors with a large number of components"
Sounds like I am a student of the past that focused more on Automata Theory and algorithms (traditional) in the past. I am yet to catch up with the mentioned emergence and opportunities topics. Time to learn new things, but it's going to be tricky to do it by myself, but I am glad to see someone thinking of the next 40 years.