The Coming Storm: Digital Natives Will Redefine the Nature of Learning and the Future Course of Institutions

The following is a reproduction of a document I’ve assembled with my team as part of FusionVirtual’s ongoing push to revolutionize online education. You can view the whitepaper in .pdf format HERE. I value your feedback and am eager for a passionate discourse on the topic. Please do not hesitate to share your thoughts and observations in the form of a comment below, or reach out to me on Twitter or by e-mail.

FUSIONVIRTUAL POSITION PAPER

THE COMING STORM: DIGITAL NATIVES WILL REDEFINE THE NATURE OF LEARNING AND THE FUTURE COURSE OF INSTITUTIONS.

ABOUT THIS STUDY

This project is led and coordinated by Alex Berger, the founder and Chief Executive Officer of FusionVirtual. Berger is a digitally literate, self-directed Millennial with trans-generational experience and perspective. In 2003, he began sharing his insights into the ways institutions must change to remain viable, drawing on his extensive knowledge of advancements in information availability and of breakthroughs made by the online gaming community in the use of virtual environments. His honors thesis, Not Just A Game: How On-line Gaming Communities Are Shaping Social Capital, was circulated among and reviewed by many leading educators and business leaders. His blogs and communications on his VirtualWayfarer.com site have kept him at the leading edge of change. He is an accomplished lecturer and delights in sharing his insights into the future.

As one of the most disciplined of the digitally literate, information-age leaders, Berger gathered educators and business leaders and charged them with helping him focus FusionVirtual’s role in defining not only the future of institutions but, perhaps most important, identifying the structural inhibitors to change. He stated that this must be done so leaders know how to make their way in the digital-information age.

INTRODUCTION

“In human history there has never been a time when the difference between generations has been so vast. In the last two decades computer literate, digitally skilled, information-age youth have begun to function in ways never before imagined. They are developing a new operating system for the acquisition and application of knowledge. They are changing our understanding of how things work and how things can be.

The generational differences that divide the past from this new present are so extreme that those with pre-digital, pre-information age mindsets must replace their outmoded literacy with new information. If they do this, they will be able to communicate with their children and function in the world as it is now.”

-E.F. Berger, Ed.D. 2008

This quote from an educational leader may seem extreme to those who have not rethought the foundations of their knowledge and begun their process of reeducation.

Let’s examine an example of significant change: up until a decade ago – the 1990s – some group, a religion, an institution, or a professor controlled information. Whether it was locked up by a religious hierarchy, a government, or an institution of higher education, information was controlled and doled out to those deemed worthy. Within what can be described as an instant in human time, the Internet made information of all types available to anyone with basic computer skills. As technology has advanced, institutions and individuals have quickly begun to forfeit sole access and control. Those institutions or governments that try to limit what their subjects can know are fighting a losing battle for control. While the degree of their success varies widely, they will destroy the lives of countless people before they learn that, regardless of the actions they take, they are only delaying the inevitable. The reality of complete control is dead and has been replaced by the myth of absolute control.

This one major change in what and how we can learn has shaken institutions and governments to their roots. Institutions up to this time were able to control the way people thought by limiting their access to information or setting themselves up as authorities due to their privileged access to “truth.” Although many institutions continue to function as if they have exclusive rights to information, their death-knells are too loud to ignore.

Higher education systems everywhere are institutions unable to continue as designed. All are based upon one-way communication. Even the few that attempt to be interactive still rely on conveying information limited to the knowledge and intent of the tellers and of the curricula. Digitally literate individuals cannot – will not – function in these outmoded schools. Learning has jumped over the barriers of past practice, and a whole new threshold of knowledge has been crossed.

Epistemology is being redefined. Credits, degrees and measures of academic mastery no longer have the same meaning because the information they were based on has been shown to be too limited or skewed.

NOTE: The ways modern generations are able to communicate have expanded as never imagined. The effects of web technology on social relationships may have more impact on society than the changes brought about by the invention of the telephone at the beginning of the last century. In a future position paper we will address the schism this has created between generations and identify ways for leaders to bring their organizations into the Communication Age.

THE MASTERY OF FOUNDATIONAL SKILLS IS ABSOLUTELY REQUIRED

The training aspects of schooling focus on essential skills which must be mastered. Most of these skills are developed in the early or elementary grades. However, training aspects are part of every level of learning. For example, the skills needed to access, analyze and apply law precedents are not taught in elementary school. They are part of the ongoing study of law. Training and mastery of skills is the key factor in the use of knowledge. Information without the learned skill sets necessary to analyze and evaluate the authenticity of data is useless.

Upcoming generations of learners must be encouraged to explore tasks – learning to read, write, and compute, using the scientific method, plumbing a house, building a device, preparing a meal, and so forth. With the mastery of foundation skills, they gain an unlimited ability to access knowledge, allowing them to gather and combine sources of information and apply what they have learned to solve problems or gain greater understanding.

We can access information far beyond what the brightest minds could reach only a decade ago. To make use of that access, educational institutions must ensure student mastery of foundation skills, ensure self-direction through self-discipline, and inculcate ‘universal’ social and cultural values. In the US, for example, this is known as “Americanization.”

Dr. Mark Jacobs, Dean of Barrett, The Honors College at Arizona State University, stresses that true education must be interactive. We agree. We would clarify the concept by adding that reading a book and answering the questions at the end of a chapter may help one pass a test, but true education takes place only through active discussion and practical application, where learners must demonstrate their ability to turn a concept over in their heads and apply it to other situations. Berger’s point of view is that listening to a ‘teller’, reading a book, viewing a PowerPoint, or going through copious information on the Internet and then “passing the course” by answering questions is not the way education works best. What works is interaction between students and teachers, students and students, students and research resources, and students and scholars wherever they can be found, within set parameters that require the application of all that information to real problems and situations.

THE CHALLENGE

One might assume that institutional changes are so obviously necessary that those in leadership positions will redirect their organizations to serve the new types of learners. In fact, many leading institutions have begun to make necessary changes. Whole bodies of knowledge once controlled by academic institutions which, in essence, charged for the selective release or access to this information, are available to anyone through the Internet. Many institutions of higher learning are examining the use of virtual space and avatars to enhance educational opportunities.

Increased access to once carefully protected research and knowledge has resulted in combinations of data that were previously impossible. The economics of this kind of change is of great concern. If a university cannot charge for access to information, or for access to those who have spent their lives researching new insights, then what can it charge students for? What is its role?

THERE IS NO PLACE TO STAND AND VIEW OUR SYSTEMS WITHOUT PRECONCEIVED NOTIONS AND BIAS. WE MUST SEE THE BIG PICTURE

To bring about the changes that will make systems and institutions viable, leaders and those who deliver the services must stand back and rethink and often redefine the structures they work within. In education that is very difficult. The conglomerate institution called education is vast, petrified by administrative facility, and peopled by workers who are reluctant to change. Political influences add to the dysfunctional aspects of schools. A century or more of habit, custom and self-protection supports an argument that, “It was good enough for me, it’s good enough for them.”

Some argue that the seeds of a nation’s destruction are within it. This is especially true of a nation that depends on a trained and educated populace. It is no surprise that in the U.S. those areas that are most regressive are the regions with skewed educational standards and inflexible religious strictures.

To assume that any amount of credible information will bring about change is naive. Many institutions and systems must be bypassed. Unable to deliver what is required, they will wither and die. It is no surprise that the largest competitors to traditional public and private universities are online higher-education programs like the University of Phoenix. And it is not difficult to project that, because these online, two-dimensional schools are not interactive, they will have short life spans due to the poor quality of their product.

This is a time of change. Those who do not identify their product, the nature of the students they work with, the availability of information, and the skills necessary to deal with it will not only kill their institutions but severely damage the Nation. We must adapt and utilize a worldwide scope of knowledge. We must prepare students to interactively process and evaluate knowledge, and to make a contribution by applying it.

INSTITUTIONAL CHANGE AND AMERICAN EDUCATION

There is a phenomenal number of critiques of America’s schools. Few of these articles differentiate between elementary, middle, high school, and college (although colleges are more often treated separately). Most reference schools as if they were all doing the same thing and need the same fix to be effective. Most fail to offer solutions along with their critiques. Unless each level of education and each area of subject matter (discipline) is examined and its purpose understood, change is impossible.

For our purpose we assume the reader understands the needs of the different levels of education. It is enough to say that the changes necessary at the elementary level may be quite different from those needed in high schools.

We concluded that regardless of the nature of the changes, there are overarching structures, many put in place without consideration of training and educational needs, that block effective ways of serving learners. These are structural problems that affect all levels of education and are not specific to any one level or discipline. Rather than list structural problems that are in the way of effective change, we decided to identify those areas that must be adapted to education in the digital, information age.

As a guide to open new thinking about overarching structures that inhibit or deny effective education for many children and erode the Nation’s need for an educated populace, let’s stand back, look at the K-12 structure, and select one significant structure that must be changed.

K-12 education is mandatory, except that children can opt out of the system at age 14 (age 16 in some states), ending their education. With no place to go, these kids are turned loose to run wild in the streets. We understand that at one time kids could drop out because they were needed to work the family farm. Then things changed, and there was no longer demand for unskilled, underage youth. It is interesting to note that from the ‘40s through the ‘60s the military draft collected many of these young men and educated them. Then the draft ended. Dropouts run wild.

Today our core cities are filled with undirected, poorly educated dropout youth who are a drain on society. The solution seems obvious. If a child did not survive in the traditional school setting, or if her family was dysfunctional and could not or would not support her, then what she, children like her, and our society need are training and acculturation programs not unlike basic training and the Civilian Conservation Corps.

Why haven’t programs for these damaged kids been created? Obviously because our society and its institutions are too entrenched and inflexible to change in response to critical issues. Why, then, should anyone assume that the systemic problems blocking what digital, information-age students need will be addressed and institutions modified? It would take forces greater than those allowed in a democracy to make it happen. Most institutions are so bound they cannot change. If they are unable to serve a changing population, these petrified systems erode the competency of large numbers of citizens and gradually make participatory democracy impossible.

Hopefully, new leadership will create options that bypass moribund systems. Perhaps through forced redirection? Perhaps through market forces that meet demand?

We believe the battle for change is half won when we are able to clearly identify the structural changes that must be made.

OVERARCHING CHANGES TO OUR THINKING, AND MODIFICATION OF STRUCTURES THAT STAND IN THE WAY OF EFFECTIVENESS: BRAINSTORMING AND LISTING MAJOR CHANGES NECESSARY IF INSTITUTIONS ARE TO SURVIVE

Please note: The following lists are in no particular order. Each identified change can and will have volumes written about it. Herein we simply tag some necessary changes.

  1. The role of the teacher: What must change to meet the needs of digitally literate, information age students?
  2. One-way communication ends, replaced by interactive communication.
  3. Teachers connect students to world resources.
  4. Teachers set the learning parameters for specific levels (courses and units within courses).
  5. Teachers keep individuals focused and keep the assigned group (the “class”) centered on the individual.
  6. Teachers learn about and provide desired outcomes for each student.
  7. Teachers utilize virtual space as an extension of the classroom and as a way to work with students individually and in groups.
  8. Teachers extend their accessibility through the use of avatars.
  9. Teachers know the essential skills necessary for mastery of taught material and train individuals accordingly.
  10. Teachers confirm student mastery of the material by observing how each student is able to apply the concept to other situations.
  11. Teachers work as part of interdisciplinary teams.
  12. Although training may be done in isolation, the educational programs are always interactive.
  13. Teaching emphasis is on student mastery of basic skills necessary to function in the course, and the application of readily available data, not how to find information.
  14. The nature of evaluation changes – teachers do not use tests to punish or motivate students. Teachers use evaluation (tests of many types) as a diagnostic tool to determine the educational focus for each student.
  15. Teachers (educators) break out of the ‘time block’ system and use time as necessary to meet goals. Time-on-task is determined by the teacher and student, not a set schedule.
  16. Course length, within realistic parameters, is determined by the teacher to address student needs and learning styles.
  17. Teachers build their courses and instruction methods around the Learning Path: Introduction, Association, Involvement, Application, Internalization and Contribution. (Dr. Edward F. Berger)
  18. Teachers are highly skilled professionals. Their time is focused on students and instructional coordination. It is not used for patrolling, policing, or administrative tasks best done by support personnel.
  19. Changing the concept of classroom (place-based) education.
  20. Interactive learning takes place in many learning environments. For example, a dedicated classroom space may be used for face-to-face communication, group, and one-on-one interaction, or it may not be needed.
  21. Lecture halls are replaced/supplemented by presentation areas in virtual space.
  22. Students are grouped by achievement level and need, not chronological age.
  23. Placing every student at a desk, in a room, doing the same thing has no purpose beyond administrative facility.
  24. Virtual space “classrooms” can be utilized for 24/7 instruction and one-on-one instruction.
  25. Virtual space can be utilized for testing and mastery evaluation as well as attendance, tutoring, and socializing.
  26. Students may never enter a “place-based classroom” if their needs are met in monitored studies through the Internet or in virtual environments.
  27. Educational delivery is not determined by proximity to a school classroom – students may be anywhere.

FusionVirtual has presented this position paper to stimulate thought, share ideas, and help leaders identify the directions they must take to serve future generations and our Nation. Our work has just begun.

-Alex Berger and the FusionVirtual team.

Need a copy that’s easier to print/read?  View the original whitepaper here.

Why The Term “Multi-tasking” Is All Wrong

The term multi-tasking has become ubiquitous. If you have read an article about the millennial generation, Web 2.0, or the power and impact of the internet recently, you’ve no doubt come across it regularly. It’s often referenced as the great enabler of the world’s tech-savvy youth, and just as often it’s fiercely debated as the great quality inhibitor. Prominent efficiency blogs like Lifehacker deride the term and lambaste multi-tasking as a quality and efficiency reducer. Surveys have been done, books written, and a ferocious flurry of debate has arisen around the benefits, negatives, and great undecideds associated with multi-tasking. It’s a debate that has spilled onto this blog repeatedly, most prominently in my two-part series on Educating Millennials. Unfortunately, we have it all wrong.

The term multi-tasking has never sat well with me. Sure, it seems to fit some of the behaviors, and it’s close enough in definition and appearance to what’s actually occurring that it’s been the best and easiest way to describe what’s going on – but as a tech-savvy millennial, the shoe never quite seemed to fit.

Multi-tasking is the simultaneous execution of multiple actions. Juggling is multi-tasking; patting your head while rubbing your stomach is multi-tasking. The way I search the web, chat, watch a movie, and write all at once – that is something different. It is parallel processing. The difference is subtle, but significant.

What is Parallel Processing?

First, clear your mind of any preconceived definitions you may harbor for the term parallel processing. What I’m talking about has nothing to do with parallel computing or Amdahl’s law. The fundamental difference between multi-tasking and parallel processing is the way our minds respond to, and deal with, the actions we are handling. Using my previous examples, when juggling or patting your head while rubbing your stomach, you’re performing two actions simultaneously. As I’m sure most of us will agree, that’s incredibly difficult, and our performance drops sharply with each task we add.

Parallel processing, in contrast, deals with a cycling, structured, hierarchical list of tasks which is continuously executed at a comfortable pace. The speed with which that list is executed and repeated depends on an individual’s familiarity with the tasks and the time and focus each task requires. A juggler can’t stop to take more time with one ball without losing the other two. An individual switching between browser tabs, a movie, and several conversations can. The advantage that millennials and tech-savvy individuals the world over have developed is not the ability to do more at once, but rather the ability to handle more tasks almost simultaneously, in a more time-efficient and effective fashion.
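
To make the distinction concrete, here is a minimal sketch in Python – purely an illustrative analogy, not a cognitive model, with all task names and numbers invented for the example. It shows the cycle described above: tasks are visited one at a time in order of immediacy, each getting a quick burst of attention, rather than being executed at the same instant.

    # A toy sketch of the "parallel processing" cycle described above -- an analogy,
    # not a cognitive model. Tasks are visited one at a time, ordered by how urgently
    # they need attention, rather than being executed simultaneously.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        urgency: int         # how immediately this task needs attention (higher = sooner)
        attention_cost: int  # rough "focus" required for one quick visit

    def cycle_once(tasks, attention_budget=6):
        """One pass of the cycle: give each task a short burst of attention,
        most urgent first, until this pass's focus is spent."""
        for task in sorted(tasks, key=lambda t: t.urgency, reverse=True):
            if task.attention_cost > attention_budget:
                continue                 # heavier tasks wait for a later pass
            attention_budget -= task.attention_cost
            print(f"attending to: {task.name}")

    tasks = [
        Task("reply to chat", urgency=3, attention_cost=2),
        Task("glance at the movie", urgency=1, attention_cost=1),
        Task("edit a draft paragraph", urgency=2, attention_cost=4),
    ]
    cycle_once(tasks)  # in practice the pass repeats continuously, at a comfortable pace

The point of the sketch is the control flow: nothing runs at the same instant, yet everything gets serviced, and the pace of the loop (the attention budget here) is set by familiarity with the tasks.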

The Skill Set

One of the fundamental components of parallel processing is task familiarity. If I sat you down in front of a massively multiplayer online game you had never played before, your entire focus would be consumed by trying to move forward while interacting with the spatial environment. The degree of your familiarity with the action would be so small that executing it would consume almost all of your attention. Fast-forward a bit, however, and you’ll have gained familiarity with the process and be able to automate most of it subconsciously. Before long you’ll be carrying on five conversations through the in-game chat channels, interacting with other players, traversing the virtual world, and engaging in complex actions all seemingly simultaneously. In these instances, there are simply too many actions to manage and participate in them all at literally the same time. You can, however, cycle through actions based on the immediacy of their need and respond to each fully in lightning-quick bursts.

One of our most incredible abilities is to take certain tasks, develop a familiarity with them, and then transition them into a familiar ‘second nature’ skill set. When you write, you typically don’t have to think about how you hold a pencil or which muscles make the letters you want. Further, as you write, the familiar words come to you naturally, almost without a second thought. It’s only the ones you’re unfamiliar with that you have to pause and spell out letter by letter, sound by sound. There are thousands of everyday tasks we take for granted as developed skills and hardly notice. If you wear glasses and have ever reached up to push them up on your nose after taking them off, you’ve experienced a perfect illustration of how our brain is capable of executing and automating ‘second nature’ behaviors almost subconsciously.

Why It Matters

The modern business environment is not the only thing changing. The world we know, perceive, and interact with is being driven forward by powerful, expansive new technologies. Understanding the way we interact with these technologies, and how they change our behaviors, is fundamental to understanding what’s really going on around us. The process followed while writing a handwritten letter in the 1800s is almost unrecognizable compared to the steps a modern individual follows when writing an e-mail or research paper. Significantly more has changed than the technology. The very way we relate to, formulate, and execute actions has evolved. Unfortunately, while our behaviors have changed, our perception of how these processes should work, and the advice we offer on how to execute them, has not changed with them.

This also becomes very significant in our understanding of what looks like a social disconnect. If you’ve ever walked up to someone engaged in heavy parallel processing and had trouble engaging them in conversation or getting a response, it’s because you’re disrupting the process they’re comfortable with and the rate at which they’re executing the sequence. Chances are, whatever activities they’re carrying out are balanced near the uppermost end of what they can comfortably process. They’re in a rhythm, executing a sequence of actions, and able to perform at that rate. Enter the parent or roommate who wants to talk about their day in real time, without consideration for the other 5-15 processes the individual has going on, and you end up disrupting the flow of parallel processing. The end result is almost always a general breakdown across the board. I find it interesting that social norms tell us it’s rude to walk up to a conversation two people are having privately about African swallows and begin talking to them about astrological geometry, but not similarly rude to do effectively the same thing when an individual is using a digital device.

I invite you all to join me in changing the dialogue surrounding technology and multi-tasking. Before honest dialogue can move forward, it’s necessary that we adopt descriptive language like ‘parallel processing’ that accurately identifies and describes the phenomenon.

Agree?  Disagree? Thoughts or comments?  Please share them in comment form below.  As always I love your feedback and discussion. Additionally, I’d like to thank Dr. John Crosby for his feedback and collaborative ideas on this subject.

The upside to the 250GB Comcast cap – If we fight for it!


Comcast’s 250 GB Cap

Depending on how up to date you are on your tech news, you may or may not be aware that starting October 1st, Comcast is implementing a set 250 GB/month bandwidth cap. Ars Technica has a great breakdown with background history, which you can read here to get caught up. While this announcement comes from Comcast, it is representative of troubling behavior and policies which have secretly been in place for years across a number of major ISPs. If you’re a regular reader, you’re probably aware of my issues with my own provider: Cox Communications. Regardless of which ISP you’re on, if you’re in the US you will be affected eventually.

I’m a firm believer that unlimited, or at the very least extremely high, bandwidth caps are a hugely important factor in the future of the US as a competitive world player. For a more in-depth look at how broadband is handled abroad, and my thoughts on the economic impact, please see Parts 1 and 2 of my past posts on the Technological (Digital) Revolution, available here and here. I would also like to take a moment to be very clear: while I am pointing out a potential upside to the current bandwidth caps, I am in no way a supporter of them, nor am I endorsing them.

Right now ISPs: 1) throttle what they see as P2P-like activity indiscriminately; and 2) oversell their product offering. In large part due to a powerful lobbying arm and a lack of competition, ISPs have been able to get away with murder. They make up their own rules, regularly lie to their customers, and generally do as they please. Recently, when the FCC got involved in a case involving Comcast, we saw the start of what may be some accountability in the industry. All of that said, there is currently no pressure whatsoever on ISPs to deliver the service they’ve sold you. Most providers bill their services as unlimited but have near-secret soft caps. Because of the nebulous nature of the service description and the lack of understanding most users have of how their high-speed cable bill works, this failure to deliver is seldom challenged.

The Upside

With set caps, the ISPs are defining a fair-use limit, which I assume will be legally binding. Previously, users were given only the speed tier they were paying for – 7 Mb/s, for example – and the price. Now price, speed, and bandwidth have been laid out in concrete terms. This is significant because it removes any valid claim on the ISPs’ behalf that super users (typically P2P users) are clogging the pipes, slowing down other users, and destroying ISPs’ profitability by using excessive bandwidth. The ISPs have now defined for us – the consumers – what constitutes fair use and the service we are paying for. So long as Comcast users use less than 249.99999 GB of bandwidth a month, they are within the allotted bandwidth they have paid for.
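
For a sense of what those concrete terms imply, here is a quick back-of-the-envelope calculation. It is only a sketch using figures already mentioned in this post – a 250 GB monthly cap and a 7 Mb/s tier – and it assumes decimal gigabytes for the unit conversion.

    # How long could a connection run at full speed before hitting the cap?
    def hours_to_cap(cap_gb: float, speed_mbps: float) -> float:
        total_megabits = cap_gb * 8_000        # GB -> megabits (1 GB ~= 8,000 Mb, decimal units)
        seconds = total_megabits / speed_mbps  # seconds of sustained full-speed transfer
        return seconds / 3600

    hours = hours_to_cap(cap_gb=250, speed_mbps=7)
    print(f"{hours:.0f} hours, or about {hours / 24:.1f} days of nonstop 7 Mb/s use")
    # roughly 79 hours -- a little over 3 days of continuous full-speed transfer

In other words, even someone running the example tier flat out, around the clock, would need several days of continuous transfer to exhaust the cap; typical use sits far below it.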

There are now NO grounds for ISPs to throttle P2P and similar traffic, as it cannot be claimed that P2P, gaming, or multimedia users who stay within their allotted bandwidth are abusing the network – especially since ISPs can NOT differentiate between legal and illegal P2P usage. There are hundreds of legal uses for P2P software, and throttling it because of the possibility of illegal use is like banning e-mail because it is used by Nigerian bank scammers. Further, the throttling currently practiced by some ISPs may hinder streaming media and online video games, which rely on a constant, timely exchange of data.

This is a point that is being largely ignored or unrealized, and one which the ISPs will work to “overlook”. They made their bed; now it’s time they lay in it. Get your hands off my data and out of my pockets.

Cox Quotes

The following are quotes from a recent exchange I had with Chris, a high-level Cox technician responsible for monitoring customer complaints across newsgroups, blogs, and other similar web resources. It took a lot of back and forth, but he confirmed several extremely revealing things about Cox’s customer policies and network maintenance.

If my previous fails to answer you question on this issue, I simply do not know how to answer your question.  What I can tell you is that streaming media an online gaming work with our service and you will observe no impact on the performance of these functions. If you are asking if out network management practices affect these services I cannot tell you that because I simply do not know. If you are observing performance related problems with online gaming and streaming media there is a problem that we need to fix.

This comes from the end of our 14-e-mail exchange, after extensive back and forth. It is important to note that the discourse started because I posted audio of a Tier 1 tech telling me Cox did throttle P2P, and of a Tier 2 tech telling me, immediately after the call was escalated, that they did not hinder ANY traffic whatsoever. What I find revealing is that, even when pressed repeatedly, he gives me a concrete answer saying that media and gaming won’t be affected, but in the very next sentence undermines any credibility he has on the subject by stating that he has no idea how/what they do to manage “P2P” traffic.

I know of no technology that can differentiate between legal and illegal traffic. The primary goal of our network management policy is to ensure that all of our customers have adequate bandwidth and to prevent heavy users from denying service to others.

I’d like to reiterate that P2P technology is an amazing resource, used for countless legal tools and applications. One such use is my P2P-based library modernization project. There is NO valid reason for its restriction, regardless of the hogwash the RIAA and other lobbying groups have tried to convince us of.

As far as a heavy user is concerned, we are basically trying to prevent a situation where a single customer or a group of customers is over utilizing their connection to the point that it prevents the remaining customers from using their connection and achieving speeds at or close to the advertised rates.  There is a fundamental flaw with the suggestion that you should be able to utilize your connection without any restrictions until reaching the stated monthly caps.  The problem is with the speeds we are offering our customers now you could easily reach those stated caps within a couple of days. Such a high utilization in such a short span of time would likely cause a denial of service to other customers such as I described above.  As far as enforcement of the advertised caps are concerned, due to the lack of an accurate customer facing tool to monitor usage we have been lenient with enforcement with regards to this abuse issue.

I understand this argument, but I don’t buy it. When you pay your taxes to maintain city and state roads, it’s an all-you-can-eat setup. This is no different. By that same token, when traffic in one area or another becomes particularly congested, the government doesn’t turn your car off or flatten your tires. It improves the roads and increases their capacity to meet demand. The limitation is how much gas you buy – or, in this case, monthly bandwidth.

If you’re a Cox subscriber, you’re probably curious what your monthly caps are. If you’re on their standard plan, you might be surprised to learn you only get 40 GB of downstream and 10 GB of upstream a month. To put that into perspective, that’s less bandwidth in a month, up and down combined, than the Japanese are allotted in two days. The real kicker? They’re also paying less, and their available speeds are significantly higher. View Cox’s full cap breakdown here.

Thank you for reading. If you’ve enjoyed this post please recommend it via one of the social media bookmarks below.