Map: Internet Prices Around the World

What do Moldova, Tunisia, Russia, Iran, and Kazakhstan have in common? Apparently, these disparate (and not particularly prosperous) countries have some of the cheapest broadband Internet in the world, with an average package cost of less than $20 a month.

By contrast, citizens of the West African nation of Burkina Faso top the list with the most expensive Internet, paying an average of $954 for a monthly broadband package. Folks living in Namibia, Papua New Guinea, and Haiti fare slightly better, but still need to shell out a few hundred dollars for the typical broadband package.

Americans are in the middle range, paying around $66 for the average broadband service; our neighbors to the north and south pay about $54 and $26, respectively.

These results are from a joint study by two British consultancies, which analyzed over 3,500 broadband packages worldwide from August 18 to October 12 of 2017. You can read the results here, which have been helpfully visualized by HowMuch.Net.

[Map: broadband prices around the world, visualized by HowMuch.Net]

See here for a more detailed visual breakdown by region and price.

The results show an interesting and often unexpected mix of cheapest and most expensive. Who would have thought that the likes of, say, Iran and the former Soviet Union would offer world-beating Internet access? Or that some African countries outperform far wealthier and more digitally connected nations?

Iran offers the world’s cheapest broadband, with an average cost of USD 5.37 per month. Burkina Faso is the most expensive, with an average package price of USD 954.54.

Six of the top ten cheapest countries in the world are found in the former USSR (Commonwealth of Independent States or CIS), including the Russian Federation itself.

Within Western Europe, Italy is the cheapest with an average package price of USD 28.89 per month, followed by Germany (USD 34.07), Denmark (USD 35.90) and France (USD 36.34). The UK came in 8th cheapest out of 28, with an average package price of USD 40.52 per month.

In the Near East region, war-ravaged Syria came in cheapest with an average price of USD 12.15 per month (ranked fifth cheapest overall), with Saudi Arabia (USD 84.03), Bahrain (USD 104.93), Oman (USD 147.87), Qatar (USD 149.41) and the United Arab Emirates (USD 155.17) providing the most expensive connectivity in the region.

Iran is the cheapest in Asia (as well as cheapest globally) with an average package price of USD 5.37 per month, followed by Nepal (USD 18.85) and Sri Lanka (USD 20.17); all three countries also ranked in the top 20 cheapest in the world. The Maldives (USD 86.08), Laos (USD 231.76) and Brunei (USD 267.33) have the most expensive package prices per month.

Mexico is the cheapest country in Central America, with an average broadband package cost of USD 26.64 per month; Panama is the most expensive, with an average package price of USD 112.77 per month.

In North America, Canada offers the cheapest broadband on average (USD 54.92), coming in 21 positions ahead of the United States globally (USD 66.17). Bermuda provides the most expensive packages in the region with an average price of USD 126.80 per month.

Saint-Martin offers the cheapest broadband in the Caribbean, with an average package price of USD 20.72 per month, while the British Virgin Islands (USD 146.05), Antigua and Barbuda (USD 153.78), the Cayman Islands (USD 175.27) and Haiti (USD 224.19) sit at the most expensive end both regionally and globally.

Sub-Saharan Africa fared worst overall, with almost all countries in the bottom half of the table. Burkina Faso charges residential users a staggering USD 954.54 per month for ADSL. Meanwhile, Namibia (USD 432.86), Zimbabwe (USD 170.00) and Mali (USD 163.96) were among the 10 most expensive countries.

All 13 countries in Oceania were found in the most expensive half of the global table. Generally, larger landmasses such as Australia and New Zealand were cheaper than smaller islands in the region. Fiji, however, was actually the cheapest in Oceania, with an average cost of USD 57.44. Vanuatu (USD 154.07), the Cook Islands (USD 173.57) and Papua New Guinea (USD 597.20) are the most expensive in the region, with the latter being the second-most expensive in the world.

I would be very curious to know what accounts for these results. Is it government policy? Geographic location or size? An abundance of competing ISPs? Perhaps a combination of all three? Or maybe it depends on the specific country?

What are your thoughts?

 

Happy 25th Birthday World Wide Web!

March 12 was the 25th anniversary of the World Wide Web, otherwise known simply as the Web, a system of interlinked hypertext documents accessed via the Internet. On that day in 1989, Tim Berners-Lee, a British computer scientist and engineer at CERN, wrote a proposal to his administrators for developing an effective communication system to be used by the organization’s members.

He eventually realized the wider applications of this concept, and teamed up with Belgian computer scientist Robert Cailliau in 1990 to further refine the concept of a hypertext system that would “link and access information of various kinds as a web of nodes in which the user can browse at will”. Hypertext is simply text displayed on a computer with references to other text via “hyperlinks”. Berners-Lee finished the first website in December of that year, which you can still see here (for information on the first image ever uploaded, which was a GIF, click here). 

It’s amazing how far the web has come since that humble page, and it’s anyone’s guess where it will be another 25 years from now. Berners-Lee actually shares his thoughts on the future of the Internet in general here, and I recommend you give it a read.

Note that despite being used interchangeably, the Internet and the Web are two distinct things: the former is a massive networking infrastructure that connects millions of computers together globally — a network of networks, so to speak. Information that travels over the Internet does so via a variety of languages known as protocols.

The Web, on the other hand, is a way of accessing that information using the HTTP protocol, which is just one of many languages used on the Internet to transmit data. Email, for example, relies on the SMTP protocol, and therefore isn’t technically part of the Web.
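To make the distinction concrete, here is a minimal sketch in Python (purely illustrative; the host example.com is just a placeholder) of the Web in action: an HTTP request riding over the same Internet infrastructure that other protocols, such as SMTP for email, also use.

```python
# Illustrative only: the Web is just HTTP traffic traveling over the
# same Internet (TCP/IP) infrastructure that other protocols also use.
import http.client

# Open a connection across the Internet to a web server
# (example.com is a placeholder host used purely for illustration).
conn = http.client.HTTPSConnection("example.com")

# Speak the Web's protocol: request a hypertext document via HTTP.
conn.request("GET", "/")
response = conn.getresponse()

print(response.status, response.reason)                      # e.g. 200 OK
print(response.read(200).decode("utf-8", errors="replace"))  # first bytes of HTML

conn.close()

# Email, by contrast, would use Python's smtplib (SMTP) over the very
# same network -- a different protocol, and thus not part of the Web.
```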

Convenience vs. Freedom

The right to privacy has been one of the defining issues of the modern world, especially since the advent of the internet, which has facilitated the vast and rapid exchange of information — often beyond our own apparent individual control. The Babbage technology column in The Economist highlights one of the complex dynamics of this issue: the trade-off between the convenience that the web brings — much of it in the form of free services, from social media to search — and the need to give up a certain measure of privacy as payment of a sort.

It has been said many times, but the fact remains that anything users share over the internet will inevitably be bought and sold and, sooner or later, used against them in some way. That is the price people tacitly accept for the convenience of using popular web services free of charge.

The corollary, of course, is that if individuals are not paying for some online product, they are the product. And collecting information about the product (users) enhances its value for the service’s actual customers (advertisers, corporate clients and government agencies) who pay the bills. That is how the business model works. Those who do not like it can choose not to use such free services and find paid alternatives instead that promise greater privacy. Though limited, they do exist.

Granted, the internet is an inherently social force that’s driven by networks and trends, so most people tend to gravitate to services that most other people are already using. So seeking out alternatives is easier said than done, since those who do so will likely find themselves alone or out of the loop.

Perhaps the fact that most people have (thus far) chosen to use the services that mine their data says something about what we value more, or about how unconcerning the whole process ultimately seems. Indeed, it’s pretty much become a given that that is how the internet works:

Along with other internet companies, Google mines the data it collects from users for two purposes. One is to improve the user experience, making its various online services more personal, useful and rewarding for the individual—and thereby increasing their popularity. The other purpose is to provide better targeted information for advertisers.

Like other firms offering free services, Google makes its living out of matching the right kind of advertising to the specific interests of its individual users. To do so, it needs to know their likes and purchases as well as their identifiers and demographics, including name, sex, age, address, current location and income bracket.

If truth be told, no-one needs to eavesdrop to discover such things. People willingly volunteer all manner of facts about themselves when registering or subscribing to various online services. Scraping such information off social networks and combining it with data drawn from sites for searching, shopping, downloading, streaming or whatever lets social marketers infer all they need to know about most individuals.

That is fine for the vast majority of internet users, who are happy to trade a measure of privacy for the convenience of using popular sites like Google, Facebook, Twitter, Flickr and YouTube. That such convenience comes free of charge makes the trade an even better deal. But where to draw the line?

There’s already growing concern about whether we’ve gone too far in willingly giving up our information to internet companies (and whether those firms themselves have been crossing the line). But as the article notes, most of what we share is already public knowledge: the stuff we want to buy, the movies we like, the personal views we hold, etc. Isn’t it unreasonable to expect companies that are offering a free service not to, at the very least, utilize this innocuous data to sustain their operations? (After all, there are overhead costs to cover.) Well, that’s where it gets a little complicated:

It is one thing to reveal personal preferences such as favourite films, TV shows, dishes, books or music tracks. However, most people (though not all) stop short of blurting out more intimate details about their private lives. Even so, all those innocuous bits of self-revelation can be pieced together, jig-saw fashion, by intelligent algorithms. Throw in the digital paper-trails stashed in Google searches and Amazon purchases, and things can begin to get a little scary.

Babbage’s teenage daughter, for instance, uses his Amazon account and credit card to buy everything from romantic novels to cosmetics and underwear. As a result, he gets bombarded by e-mails recommending other female items he might like to purchase. Anyone leaning over his shoulder could easily label him a pervert or worse.

So is the onus on companies to refrain — or be legally forced to refrain — from prying into our more personal tastes and habits? Or is this once again a small price to pay for the convenience that such “personalized” services offer? Maybe we’re the ones that need to take action, as the Babbage columnist feels.

But with the convenience of using free online services, even those offered by major brands, comes the responsibility to be personally vigilant, to watch out for oneself—and to be willing to pay for services that offer higher levels of security, freedom from advertising, or simply a better quality of service all round. One of Babbage’s colleagues says he would happily pay for Twitter if it provided proper analytics. He would pay for Facebook, too, if it did not compress his photographs so much.

Ultimately, though, Babbage is more concerned about identity theft than with Google selling his likes and dislikes to advertisers. This is one of the fastest growing white-collar crimes around, with an identity being stolen somewhere at least once every four seconds (see “Your life in their hands”, March 23rd 2007). The average cost of restoring a stolen identity is reckoned to be $8,000, and victims spend typically 600 hours dealing with the nightmare—plus many years more restoring their good name and credit record.

Personally, I side with Babbage in that I see little conceivable harm in the mining of already-public or innocuous data, but do feel that the potential for identity theft is a growing and real threat, one made easier by all the data flowing around on the web. And what about private messages or email? What would happen if someone were to hack into our Facebook, Google, or smartphone accounts and expose our personal conversations? Are we to just avoid such exchanges on the web?

And will our tacit acceptance of this arrangement lead to a blurring of what’s public and private? As people become more willing and able to share more about themselves online, do we risk undermining our own sense of personal space? Maybe it doesn’t even matter and we’re happy to share these things; after all, humans have always been keen on self-expression, and the rate of doing so has always increased alongside advances in media (consider the then-unprecedented outflow of opinions and data once the printing press was invented).

Anyway, what are your thoughts?

Is Facebook (And Other Social Media) Good For Us?

Read this article in the Atlantic and judge for yourselves. Commentary is welcomed, as always.

Personally, I think that Facebook, like almost everything else in life, is what you make of it, and that it may merely be amplifying pre-existing social and psychological problems. It’ll have different effects on different people, and I recall that many people rely on social media, especially Tumblr, as somewhat of a remedy for their real-life problems – a place to escape, vent, seek advice, or just pass the time (not that social media should be a substitute for things like therapy or counseling, with respect to more serious issues).

Of course, this doesn’t mean we shouldn’t explore the potential for individual and societal harm. All technological and social developments have some sort of ambivalent effect on us. The key is to adapt to them, and most of the time that’s something we learn to deal with as we go along. Luckily, we have both the means and awareness to think ahead about these fast-paced developments. But look at the history of almost any innovation – even such wholesome things as, say, novels – and you’ll find the usual anxieties about the potential for harm (especially as far as the youth are concerned).

While we’re on the subject, there has also been a lot of discussion about the effect of social media on human behavior and psychology. Again, this is to be expected, given that any new technology, especially in the area of communication, tends to have a profound effect on how we think, interact, and identify with others.

NPR’s Morning Edition had a brief but fascinating piece on the subject some time ago. It’s obviously not the first to explore the influence of social media, and it most certainly won’t be the last. Though it deals specifically with Facebook, a lot of the observations can apply to Tumblr, Twitter, and other social media as well. Below is the transcript:

Posting on Facebook is an easy way to connect with people, but it also can be a means to alienate them. That can be particularly troublesome for those with low self-esteem.

People with poor self-image tend to view the glass as half empty. They complain a bit more than everyone else, and they often share their negative views and feelings when face to face with friends and acquaintances.

Researchers at the University of Waterloo in Ontario, Canada, wondered whether those behavior patterns would hold true online. They published their findings in the journal Psychological Science.

“People with low self-esteem tend to be very cautious and self-protective,” says one of the researchers, psychologist Amanda L. Forest. “It’s very important to them to gain others’ acceptance and approval. … So given that, we thought people with low self-esteem might censor what they’re saying to present a kind of positive and likable self-image on Facebook.”

She and fellow psychologist Joanne V. Wood collected the 10 most recent status updates from 177 undergraduate volunteers who had completed the Rosenberg Self-Esteem Scale. A team of objective “readers” then rated the updates based on how positive or negative they were.

People with low self-esteem posted far more negative updates than those with high self-esteem. Forest says they described a host of unhappy sentiments, from seemingly minor things like having a terrible day or being frustrated with class schedules to more extreme feelings of rage and sorrow.

Intuitively, this makes sense: being bombarded with the apparent success and happiness of everyone else will only intensify your already low self-worth. Speaking from personal experience, people with low self-esteem generally have a tendency to overstate the joy of others, feeling as if the rest of the world is normal and well-adjusted while they struggle to get by. Facebook only amplifies this perception further, given that being in large groups – be they physical or digital – exacerbates the sense of loneliness that is intrinsic to many emotional and mental problems.

But what if you don’t suffer from low self-confidence and the like?

On the other hand, those with a healthy dose of self-esteem often wrote about being happy, excited or thankful for something.

When researchers asked people rating the updates if they wanted to get to know those who wrote the negative posts, the answer was a resounding no.

Researchers even looked at actual Facebook friends because, Forest says, “you might think that a real friend would care if you’re expressing negativity.” It turned out actual friends didn’t like the negative posts, either. The posts actually backfired, neither winning the author new friends nor generating good feelings.

As the saying goes, “laugh and the world laughs with you, cry and you cry alone.” People have a natural aversion to negativity, which is what makes depression even more frustrating: not only do you have to struggle with your own internal emotional problems, but you’re unable to express yourself or seek out help without feeling burdensome or annoying.

This kind of rejection might also have to do with the nature of Facebook, which many people view as the wrong place to share personal or emotional matters. Even though social media has become so ingrained in the everyday lives of millions of people, there is still a widespread perception that because it’s the internet, it’s not a serious and appropriate venue for communicating such things. Plus, there is the usual inclination to assume that such comments are merely “attention baiting.”

But this issue gets trickier.

Even for people with high self-esteem, aspects of Facebook can be difficult, according to mental health professionals — for example, if other people get lots of “likes” or thumbs-up on their posts and yours don’t, or if friends post photos that you’re not in.

The bottom line for everyone — no matter how much self-esteem you have — is to be selective about what you put on Facebook, says Dr. Mike Brody, a psychiatrist at the University of Maryland and in private practice. Especially since posts live in cyberspace forever.

Exactly. I’d also add that people need to build up their confidence and work on their insecurities. If not, avoid Facebook (or any other site, for that matter) if it makes you feel more unhappy than you already are.

Keep in mind that social media, like most things, is a double-edged sword: for all its faults, Facebook and its ilk are great at connecting you to like-minded people. There are groups and forums where people with certain interests, conditions, or ideas can come together and relate. You just have to learn how to adapt to the web and make the most of its benefits while avoiding the pitfalls as best you can.

Reflections on Obtaining a Smart Phone

So I’ve finally obtained a smart phone of my own, complete with unlimited 4G access (I was due for an upgrade, so it was thankfully affordable). This gadget is a news junkie’s dream: I now have instantaneous access to all the events of the world at all times. I can look up anything and everything whenever a random thought or question comes into my mind. I have a constant stream of knowledge available wherever I go.

Of course, like most innovations, this one is a double-edged sword. It’s nice to have all this information literally in the palm of my hand. But will my often distracting obsession with data and news be made worse by this newfound capacity to expand on it? Sure, I don’t plan on playing any of the games that often distract many of my peers: all my apps are strictly functional (so far). But a distraction is a distraction…how intrusive will this remarkable device be?

I suppose this will offer a wonderful opportunity to test my willpower – or to learn by experience just how difficult it is for the human mind to adjust in this era of constant stimulus. I already know the feeling of data overload firsthand, as I’m sure most of us well-connected youth do. Have I just upped the ante here? I’ll see with time, but for now I’m thoroughly enjoying having so much to read and learn whenever I’m stuck waiting somewhere. For better or for worse, boredom is a thing of the past (though I’ve always carried reading material with me wherever I go, so keeping myself entertained has never been an issue; now I get to save on space).

Another profound thought struck me as I started reaping the benefits of my new toy: that in the palm of my hand, in this lightweight and sleek machine, lies access to almost the entire sum of human knowledge. Anything and everything I could ever want to know – from the mundane to the profound, from the practical to the philosophical – is available to me almost instantaneously with a few strokes of my fingers. Not a single reportable event in the world can go unnoticed. No conceivable question could go unaddressed. All of that lies within something smaller than my hand, which I can take with me anywhere I want.

For most of our history, the majority of our species couldn’t even read or write, let alone have access to the world’s knowledge. We barely knew what went on beyond our little villages. Suddenly, a growing number of us are connected to this immaterial repository of human knowledge known as the internet, and now, if we so choose, we can delve into the near-totality of collected human knowledge.

As I mentioned before, there is certainly a catch as far as the social and psychological effects of all that data – the human mind was never meant to absorb so much information so regularly. We’ll probably come to adapt to it as we have to so many other developments, but it may be a difficult process nonetheless. Who knows? Whatever the caveats, we shouldn’t underestimate how marvelous it is to live in a time when knowledge is no longer (entirely) the domain of the rich and powerful. The accessibility and affordability of these things are getting better with time. Whatever the impact, it’s sure to be weighty.

 

On SOPA and Policing the Web

At this point, I doubt this infamous acronym needs much introduction, given the ubiquity of news related to the so-called Stop Online Piracy Act. You’d have to go out of your way to avoid hearing any mention or debate about it, especially within social media.

For the record, I try to avoid discussing current events on this blog – the deluge of instant information would render my articles obsolete very quickly. Besides, you can find plenty of up-to-date news about the subject elsewhere (except Wikipedia, which, as of this posting, has blacked itself out in protest). Instead, I want to briefly discuss the more enduring issues of “net neutrality” and freedom of speech on the web, a subject I’ve visited once or twice before.
 
Like most people of my generation, I’m firmly committed to the idea that the internet should be an open marketplace where ideas, opinions, and data are freely exchanged between participants (I’m a strong proponent for making the web more accessible to more people). The greatest strength of the internet, and indeed the source of its allure, is its role as a venue where almost anything goes: you can seek out almost any activity, interest, person, or information you want. There’s quite literally something for everyone, and this groundbreaking capacity to connect to one another and pool together the virtual sum of human knowledge has been a great boon to our increasingly interconnected societies. Few of us could imagine existing without it.
 
Of course, like any new innovation or technology, there is a propensity for unsavory exploitation. Humans are just as inventive about the ways to abuse and misuse our creations as we are about making them in the first place. A lot of the information that gets dispersed can be unethical, corrupting, or illegal – child pornography, libel, or simple misinformation, to name a few examples. And the web is a great tool for connecting predatory individuals to their targets, whether it’s a sexual predator seeking out minors, or cyber-thieves looking to access someone’s bank account (if not the bank itself). Plenty of terrorist groups and crime syndicates make use of the web too, whether it’s to coordinate their illicit activities or recruit more members.
 
In other words, the web is a double-edged sword, just as liable for use in misdeeds as it is for being applied to benign pursuits, like education or social networking. This is hardly a shocking revelation. Even the least internet savvy person knows the potential for harm or mistreatment – nearly everyone has run up against viruses or the itinerant internet troll, to name the relatively more innocuous examples.
 
Online piracy is perhaps the most controversial and well-known manifestation of this fact, especially since SOPA and its ilk came along. The web may be famous for facilitating a wellspring of creative and conceptual output, but it’s also the place where such ideas and intellectual products are stolen, duplicated, corrupted, or wrongfully disseminated. Instant and easy access applies to nefarious types as well. So it’s understandable that individuals and companies would want to protect their rightful creations, be they films, songs, artwork, patents, and the like.
 
But trying to enforce some sort of centralized legal and security system upon the web, as SOPA would aim to do, is not the way to do it. At best, it simply wouldn’t work, and at worst, it’d be overkill. The web just isn’t suited for a catch-all form of policing.
 
For starters, demarcating such a protocol would be difficult enough as it is, let alone even beginning to implement it. Part of the problem with SOPA and similar legislation dealing with the web is that its language is both vague and broad – as some have read it, the act could potentially allow the government, at the behest of complainants that include big media corporations, to shut down entire websites accused of being complicit in piracy. As with all forms of law, interpretation matters a lot, and there’s a risk that SOPA could be needlessly restrictive. To drive the point further, most tech experts have noted the archaic or otherwise incorrect terminology used throughout the bill, demonstrating how little regulators actually know about what they intend to regulate. That doesn’t bode well for efficacy.
 
This leads to my next point: trying to impose any sort of restrictions, well-meaning or otherwise, wouldn’t be very efficient. Pirates, hackers, and other skilled web users could very easily set up shop elsewhere; each new web domain that is shut down will be replaced by another somewhere else, sometimes within the same day. Every criminal activity innovates and adapts in response to new and improved efforts, and web-based transgressions are certainly no exception (especially since the same kind of technology is used by both sides). Trying to establish some sort of command and control over the constantly expanding internet just isn’t tenable.
 
This isn’t to say that piracy, cyber theft, and other crimes shouldn’t be addressed. Even the Wild West, to which the internet is often compared, had some modicum of order (its rate of lawlessness and crime is pretty exaggerated anyway). Laws dealing with crimes in the physical world should apply to their cyber counterparts as well (take theft or stalking, for example). But such methods need to be left to the private sector to sort out. Companies, individuals, and public institutions should either fend for themselves or work together of their own accord to protect their content from attack or exploitation. We’ve seen dozens of security organizations offer their services to protect everything from databases to personal computers, while many enterprising groups have gone so far as to provide such programs for free. It’s no different from the way communities set up a neighborhood watch, or how individuals hire private security firms. People find their own way to get around obstacles, which the internet helps facilitate, thanks to its boundless supply of resources and networks.
 
Ultimately, there’s no stopping criminal activity, on or off the web. We must do the best we can with the resources available, but trying to curtail internet freedom is missing the forest for the trees. As we’ve seen in real life, responding to wrongdoings with an authoritarian approach simply doesn’t work, and if anything it makes the problem worse – or creates new ones, such as when innocuous activity becomes restricted as collateral damage. The web’s freedom is its greatest strength. The same open-endedness that has given unsavory types free rein to steal or terrorize will also allow others to take matters into their own hands. Let the internet be.
 
Appropriately, the web’s capacity to mobilize and inform the masses has greatly contributed to the widespread condemnation that has led to the bill’s demise, at least for the meantime.

Wikipedia

Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That’s what we’re doing.

— Jimmy Wales, Founder of Wikipedia 

Most of my friends know me to be a very strong “wiki aficionado”, to put it lightly. I’m quite obsessed with the site, and I lose countless hours traversing its seemingly endless sum of information, ranging from the mundane and obscure to the profound and fundamental. I’ll freely admit that “wiki-ing” is basically a hobby of mine, and I credit this unique and wonderful site with teaching me a lot, or at least pointing me to the right place to do so. Contrary to popular belief, most articles have references and external links to relevant sources, and those that are insufficiently cited are flagged as such for readers; also, pages with a bronze star in the upper-right corner have been marked as high-quality by the site’s editors.
 
In any case, my love for Wikipedia is almost certainly unsurprising. I’m a knowledge junkie after all, so any site that can give me full and easy access to almost any topic imaginable is a godsend (needless to say, I have a Wiki App on my iPod touch to facilitate constant learning).  
 
In fact, I was a true believer in the site from the very beginning, back when it was an upstart that was largely disregarded by most of the public, especially those in academia, who derided it as inaccurate and unreliable. At best, it was a fun little curiosity or something to pass the time with. Throughout my college studies, I recall having an increasing number of professors explicitly forbid the use of Wikipedia as a source, and from what I hear, that position has only become more prevalent, to the extent that such a warning is now codified in most class syllabi. Of course, most people still utilized it anyway, opting to simply fact check its claims from conventional sources, or clicking on its linked references.
 
Granted, Wikipedia was certainly not the most dependable source out there, especially in the early years when supervision was lax and vandalism – or mere human error – was subsequently common (if not exaggerated in its extent). But there wasn’t anything else like it that was online or as accessible – and there still isn’t. For all its flaws, Wikipedia was the only source of its kind – an aggregate of as much human knowledge as possible, covering realms of data ranging from pop culture to metaphysics.
 
I knew Wikipedia wasn’t perfect (and still isn’t, despite vast improvements). But I also knew that such an ambitious project would, like any other, take a lot of time and work to see fruition. If not yet taken seriously as a source, it at least deserved some respect and support as an idea. As its founder’s quote attests, a world where human beings, with their increasing access to the web, can have near-universal access to everything we know is a beautiful one that must be promoted. Long-term users like me will notice that Wikipedia has come a long way, and deserves all the help it can get to continue improving.
 
The site has managed to become the 5th most visited on the web, with close to 500 million visitors and billions of page views monthly – all that popular dismissal notwithstanding (I suspect even critics give it a glance once in a while). Few people have never seen a Wiki entry, and doing any sort of Google search almost always lists one among the top recommendations. The site has grown to encompass 283 languages and close to 20 million articles – all this with only 679 servers and 95 staff (for comparison, Google has a million servers and Yahoo has around 13,000 employees).
 
This is all the more impressive given that the entire project is a non-profit, dependent upon donations to sustain it: there is no fee or subscription, nor is there any advertising. The absence of these things makes the site more conducive to learning, yet leaves it without a revenue stream – hence the periodic fund drives that request contributions for the bare minimum of keeping operations going.
 
I honestly used to ignore the banners at the top of each page pleading for donations. I can’t say I had a good reason to, given my enjoyment of the site and my presumed inclination for charitable causes. But I made amends and decided to finally give what I could to a project that is dear to me. I know all this sounds like a propaganda piece, but I sincerely ask that readers do their part and give what they can. As this blog attests, I’m passionate about the dispersal of knowledge and the advancing of human progress through education. Wikipedia has its work cut out for it, but at least it’s taking a step in the right direction.
 

Your Phone is Watching You

By now, many of you have no doubt heard the major news regarding Carrier IQ, a third-party “embedded analytics company” whose eponymous software is installed in millions of smartphones and handsets, where it can be – and likely has been – used to collect data on its users. For a brief and detailed enough explanation, check out this article on Gizmodo.

While there is still much contention regarding whether people were really being spied on – of course the company and service providers disagree – it’s very likely this was the case, given that it’s become an endemic practice across telecommunications. There’s a lot of money to be made by knowing our habits, spending preferences, favorite products, and interests.

Interestingly, while most people fear this sort of thing being undertaken by governments, it’s clear that private companies and even fellow citizens are no less likely to violate our privacy. In fact, the American legal system has yet to really develop any clear safeguards against this sort of thing; although there is a Senate investigation in the works (see the hyperlink), and this practice may violate the Wiretap Act, the means and methods of tracking us seem to evolve faster than law enforcement or the public can keep up with them. Indeed, the latest in spying tech – including miniature spy drones – is available on the civilian commercial market.

As we become increasingly dependent on all this tech – if we aren’t already – it’ll be very difficult to avoid this sort of thing. Sure, tech-savvy grassroots folks could get the better of these sketchy practices. But they’re not formal watchdogs, and they’re certainly not the bulk of all internet or smartphone users. As technology becomes ubiquitous, so too will the exploitation that comes with it.

Fringe Views in the Information Age

Even a casual browse through the web will reveal how much of a hotbed it is for fringe views and conspiracy theories. There are numerous websites, blogs, and forums devoted to espousing just about every peripheral view imaginable, from old-fashioned diatribes about Zionist world domination to the relatively recent notion of a New World Order, which itself is just a re-imagining of previous variations of an elite plot to take over the globe. There is so much overlap and convergence between all the various conspiracy theories out there that it’s becoming difficult to demarcate between them.

Interestingly, they all draw from a central theme involving some secretive and powerful class – be they Jews, the Illuminati, aliens, or even all of the above – trying to control the world, if not doing so already. A visceral distrust of any sort of authority, combined with paranoia about the machinations or intentions of powerful institutions or people, underlies every conspiracy theory. The rise of globalization, along with increasing public concern about the role of government and big corporations, has added more fuel – and some would say proof – to the claims about some impending new world order.

But nothing has contributed more than the internet, which has allowed such views to spread, develop, and come as close to the mainstream as possible. Indeed, while some manner of conspiracy-oriented thinking has always existed, its current scale is a largely modern phenomenon, and it’s becoming more so with time. The same internet that has exposed billions of people to the sum of all the knowledge of human endeavor is also subjecting them to ideas that are dangerous, unfounded, false, or just plain nonsensical.

The Foreign Policy article that inspired this post, titled Dangerous Minds, not only details the rise of fringe views, but rightly identifies them as a symptom of a larger problem: our inability to come to terms with the data overload that now characterizes the contemporary world.

A report released last week by the British think tank Demos takes a different line. It argues that the Internet’s greatest strength — free access to unprecedented amounts of unregulated information — can also be asphyxiating. Too many young people do not know what to believe online, and as a result they are influenced by information they probably ought to discard. The era of mass, unmediated information needs to be attended by a new educational paradigm based on a renewal of critical, skeptical thought fit for the online age.

The sheer amount of material at our fingertips today is unfathomable, like trying to imagine the number of stars in the universe. When we fire up a browser, we can choose from more than 250 million websites and 150 million blogs, and the numbers are growing. A whole day’s worth of YouTube footage is uploaded every minute. The online content created last year alone was several million times more than is contained in every single book ever written. Much of this content consists of trustworthy journalism, niche expertise, and accurate information. But there is an equal measure of mistakes, half-truths, propaganda, misinformation, and general nonsense.

Trying to separate the information wheat from the disinformation chaff has always been difficult — the Greeks were at least as exercised about it as we are. But the Internet makes it more difficult because it throws up novel challenges that require at least some ration of technical savvy.

What is true is largely becoming a subjective matter, and the problem of discerning fact from fiction is resolved in one of two ways: people respond either by being uncommitted and indecisive about most ideas or positions (post-modernism or nihilism comes to mind), or by taking an absolutist view which simplifies matters through adherence to a single truth (again, conspiracy theorists, as well as religious fundamentalists and ideological dogmatists).

The prevalence of either faction is troubling: either we become detached from decision making or the world as a whole – a dangerous prospect given all the global problems we’re facing – or we split up into intolerant and parochial factions, which will not only ignore evidence contrary to their views but engage in hostilities with one another. In many ways, our world is already drifting in both directions: some people are growing more cynical and apathetic towards social and political issues, while others are reverting to nationalism, extremism, or militant faith.

As in most matters, I find myself in the middle. I think some truths clearly exist, and can be validated. Sometimes these truths change with higher reasoning or new evidence, but that doesn’t mean there isn’t any fact – merely that the facts change with time, new knowledge, or the progression of society. On that basis, I don’t believe in an absolute following of any political or dogmatic belief, as such closed-mindedness can leave one blind to new evidence and perspectives that may provide crucial insights into a particular issue. That sort of thinking also breeds the sort of uncompromising intolerance that will do us no favors in a diverse world that requires unity in tackling all sorts of worldwide problems.

I worry about the effects all this will have on younger people, who are the first to have grown up exclusively in a digital age. I still remember having to dig through books for sources and citations, and going to libraries to collect research material. Now the internet has become part and parcel of any search for knowledge, be it personal or academic. Again, this is a wonderful thing, but only if you know what you’re doing.

Teenagers facing this avalanche of data are struggling to deal with it. They are often unable to find the information they are looking for or trust the first thing they do. They do not fact-check what they read and are unable to recognize online bias and propaganda — and teachers are worried about the effect this is having. Around half of the teachers polled for the study had encountered arguments about conspiracy theories with their students (such as those surrounding the 9/11 attacks), and the same proportion report pupils bringing dubious Internet-based content into the classroom.

Young people are overwhelmed, and in their formative years they’re starting to drift into either of the two sides I noted earlier. What effect this will have on the future of politics and society is anyone’s guess. For all I know, concerns about this are overblown and it’s a non-issue. After all, we’ve always been perennially worried about the advent of new technology, from the creation of written texts to the development of mass printing.  Dopes, liars, and madmen have always existed, and always will. Like any new technology or development, we’ll just come to adapt to it eventually, right?

Who knows. I for one think we shouldn’t overreact to this problem, but shouldn’t be complacent either. Reasoned caution never hurts. What’s interesting is that a lot of fringe thinkers do, in theory, largely agree with me. To them, there is a problem with a lack of critical thinking; we do need changes to the way we study and understand sources, claims, and data; and we do have a problem with separating fact or fiction. The difference is that they think they’re the ones exercising critical thinking and comprehending the system, and everyone else is being blind or manipulated.

In a world where truth is a matter of opinion, and everyone has their own valid evidence for what they believe, how do we even begin to come together on this problem? Admittedly for myself, the jury is still out on that. I think teaching people to be more scientific, philosophical, and dialectical is a start. But that sort of engagement is easier said than done, especially with education taking such a hit in this country. Utilizing the web to proliferate such ideas will be necessary to reach as many people in as many different ways as possible. In the 21st century, the internet will be the battleground between ideas and entire modes of thinking. If there is one thing I know for certain, it’s that the next few years are going to be very interesting and informative.

Do You Know Your Scientists?

The New York Times published an interesting online quiz that challenges your knowledge of modern-day scientific figures. Unfortunately, as its introduction notes, most people have little knowledge of contemporary intellectuals and academics.

In a recent survey asking Americans to name a scientist, 47% responded with “Einstein,” who has been dead since 1955. Next, at 23%, was “I don’t know.” In another survey, only 4% of respondents could name a living scientist.

Needless to say, those aren’t surprising results, but it’s still discouraging. I’ve long lamented the fact that our society seems, at best, indifferent to science and, at worst, hostile to it. Anti-intellectualism runs deep in our culture and history, but the attitude seems particularly profound as of late, given the increasing polarization of our society. This couldn’t be happening at a worse time, as humanity is beset by all sorts of existential problems – resource scarcity, environmental degradation, energy needs, overpopulation – that must be addressed through rationalism, critical thinking, and scientific inquiry.

It’s sad that after all that science has done for us – and all the wondrous benefits we’ve reaped from the works of numerous thinkers, researchers, and inventors – most of us remain either unaware, apathetic, or even distrustful. It’s easy for most people to name a list of celebrities, athletes, or even internet stars, but naming even a few scientists tends to be out of reach. I know I’m not the first to note this bizarre and counter-intuitive sense of priority in society, but it doesn’t make me feel any less justified in my displeasure.

Anyway, I managed to pull off an eight out of ten, thankfully – the two I didn’t get were at least narrowed down to 50/50, so I feel a bit better about that. The quiz includes some interesting links to the works and writings of each figure, and I definitely learned quite a bit. Though I focus a lot on politics and the humanities (definitely my stronger suits), I consider myself a lover of science, particularly its empirical methodology, its philosophical commitment to reason and open-mindedness, and its constant awe-inspiring discoveries. For all that they’ve given the world, scientists deserve far more respect and recognition than they receive.