Logan Hochstetler

Saturday, 22 April 2017 16:40

The Future Of Search Engines Is Context

Columnist Aaron Friedman discusses a mobile search future based on context and the Internet of Things (IoT).

As mobile continues to grow, as more smartphones are bought, and as more apps are developed, we have also seen the growth of businesses like “app store optimization” and other services around optimizing for mobile.

But what exactly are users doing on their phones? Is mobile SEO just going to be about “chasing the algorithm” and using the signals we learn through testing like traditional SEO? Or is there something bigger out there that we are missing?

As I write this, I have to remind myself that I am in the minority and that most users are not as “technically minded” as myself or my internet-obsessed peers. I tend to search through the app stores to see what’s new. I scour tech sites, my RSS feeds, and other tech reporters on social media to see what’s changing in the market.

The average user in society does not behave this way, despite what some reports might say. The average user is looking for simplicity and wants information spoon-fed to them. They don’t want to search at all; in fact, the data suggest that they aren’t searching all that much on their phones to begin with:

“The average person with an Android smartphone is using it to search the web, from a browser, only 1.25 times per day.” (Roi Carthy, head of special projects at Everything.Me)

Kind of makes you wonder what people are actually doing on their phones, right?

What People Do (And Don’t Do) On Smartphones

Almost all users, if not all, use their phones through apps. They are highly engaged in social media, playing games, and communicating. But what most users are not doing regularly is conducting searches in a browser, let alone within the app store. The data suggest that users are more interested in the usefulness of the apps they already have; they don’t necessarily search to find new ones that solve a need.

Context Will Solve The App Discovery Problem

Contextual understanding is about giving people the information they want when they need it the most. Remove the hard work for them and deliver what they will likely want at that exact moment.

As an example: I am a frequent user of Uber and GetTaxi, but what if I land in a city where neither of those apps work? I can either call a taxi or find a comparable app, though it’s unlikely that I am going to take out my phone right then and start searching the app store.

Bridging that gap with a contextual app discovery engine would solve that problem by algorithmically recommending relevant apps to me based on my behavior, location, and/or other factors.
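
To make the idea concrete, here is a minimal Python sketch of what such a recommendation step might look like. Everything in it (the context fields, the app catalog, and the scoring weights) is invented for illustration; a real discovery engine would draw on far richer signals.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A snapshot of the user's current situation (all fields hypothetical)."""
    city: str
    hour: int  # 0-23, local time
    recent_categories: list = field(default_factory=list)

@dataclass
class App:
    name: str
    category: str
    available_cities: set

def score(app: App, ctx: Context) -> float:
    """Very naive relevance score: availability, category affinity, time of day."""
    s = 0.0
    if ctx.city in app.available_cities:
        s += 1.0   # the app actually works where the user is
    if app.category in ctx.recent_categories:
        s += 0.5   # the user has shown interest in this category lately
    if app.category == "transport" and (ctx.hour < 9 or ctx.hour > 17):
        s += 0.25  # commuting / travel hours
    return s

def recommend(apps, ctx, top_n=3):
    return sorted(apps, key=lambda a: score(a, ctx), reverse=True)[:top_n]

if __name__ == "__main__":
    apps = [
        App("LocalCab", "transport", {"Lisbon", "Porto"}),
        App("RideNow", "transport", {"New York"}),
        App("FoodFast", "food", {"Lisbon"}),
    ]
    ctx = Context(city="Lisbon", hour=22, recent_categories=["transport"])
    for app in recommend(apps, ctx):
        print(app.name)
```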

But this contextual understanding can span far beyond apps.

Adding Context To Search

Google has been personalizing search results for years now using our search history and social activity. This is, however, limited to activity within Google products.

Imagine for a moment you are on a strict workout regimen, tracking the calories you eat in a phone app. Google may find this information useful when you perform a search for recipes. The results could be influenced by the calorie limit you set. Or better yet, Google could strive to understand your general eating habits and show different recipes at different times of day to help you achieve your weight loss goals.

Thinking Beyond Search

When the industry talks about context, limiting this to search and apps alone is a mistake. It needs to incorporate other parts of our daily activities for it to truly work. Just look at your Amazon recommendations after you search for “bachelor party favors,” or your Netflix history when there is trouble in paradise. The results can be terrifying, inaccurate, and not a true reflection of your interests. There needs to be more context behind them.

Google Now Is Contextual, But Not Necessarily Integrated

Google has been headed into the world of context more and more. Sometimes, the integration can be so incredibly seamless it’s frightening. Take, for example, how Google scans your email for various information like flights.

When my mother was visiting us from abroad, I was supposed to drive her to the airport around 6:30 p.m. for her 11:00 p.m. flight — that is, until I got a notification on my phone that her flight was delayed.

So, I casually said to her, “Mom, your flight is delayed.”

“How do you know?”

“My phone just told me,” I responded. It was so natural.

A part of me thought, “Google, stay out of my life!” But the more technically savvy part thought that it was truly a magical experience.

Contextual Understanding Moves To Everything

Take that example a step further. What if my phone had found a way to provide discounts to the airport bar — or, in the event of a flight cancellation, a possible hotel recommendation using HotelTonight or a similar app?

The possibilities are endless. It’s just about mixing and matching permissions across apps and having one program smart enough to do that.

Permit me some examples:

Navigation & Travel

When I turn my phone on, Waze understands (or predicts) where I will be traveling based on numerous factors.

Waze does use predictive technology to some extent with the ads it shows when you stop, but there is still room for a significant amount of personalization. The opportunities to guess what I will do next and connect me with my friends across the network are limitless.

Wearable Technology Serving Ads In Context

The Fitbit I own tracks how far I am walking, my speed, calories burned, sleep, and various other metrics. That is an incredible amount of data that could significantly enhance my mobile experience.

Using this data, my phone can start making predictions based on how fast I am moving or where I am geographically located. If the data tell Google I had a bad night’s sleep and missed my bus, a nearby coffee shop may decide to serve me a coupon for a free cup of coffee to get me in the door. Or, based on my eating habits (which come from the food app I am using), and depending on the time of day and the direction I am driving, it could suggest a nearby restaurant that fits those habits but is new to me.

Helping Relationships

Facebook understands who my wife is based on my relationship status. My phone also has the capability to “always listen” (cue “OK Google”).

Imagine for a second I am out, and a certain song comes on: the song that played at our wedding. SoundHound, which is a song recognition app, could pick that signal up and remind me to send my wife flowers, just because.

A Rainy Day Solution

Let’s say I have a family outing planned at the county fair. As fate would have it, storms start rolling in. Understanding this, I might get a recommendation for alternative indoor plans, keeping me one step ahead of the crowd. Perhaps laser tag.

The Future Is Around the Corner

We see this happening in silos right now, so I don’t think it is far off. Some apps are getting smarter, and Google Now is providing additional context around our lives, but this fusion hasn’t entirely happened yet. It is only a matter of time before someone decides to use this data to build “a decision engine” for us, the end users. This contextual understanding is the future.

The dynamic search engine we know today will be significantly altered in the future. The future is about understanding user behavior online and offline. Perhaps the future of optimization has nothing to do with the internet at all, but everything to do with optimizing the user’s experience, helping shape their behavior, which will ultimately affect everything.

Source : Searchengineland.com

Matt Stoller is a fellow at the Open Markets Program at New America.

Silicon Valley is the story of overthrowing entrenched interests through innovation.

Children dream of becoming inventors, and scientists come to Silicon Valley from all over the world.

But something is wrong when Juicero and Theranos are in the headlines, and bad behavior from Uber executives overshadows actual innovation.

$120 million in venture funding from Google Ventures and Kleiner Perkins, for a juicer? And the founder, Doug Evans, comparing himself to Steve Jobs "in his pursuit of juicing perfection?" And how is Theranos's Elizabeth Holmes walking around freely?

Eventually, the rhetoric of innovation turns into ... a Google-backed punchline.

These stories are embarrassing, yes. But there's something deeper going on here. Silicon Valley, an international treasure that birthed the technology of our age, is being destroyed.

Monopolies are now so powerful that they dictate the roll-out of new technology, and the only things left to invest in are the scraps that fall off the table.

Sometimes those scraps are Snapchat, which has managed to stay alive despite what Ben Thompson calls 'theft' by Facebook.

Sometimes it's Diapers.com, which was destroyed and bought out by Amazon through predatory pricing. And sometimes it's Juicero and Theranos.

It's not that Juicero and Theranos are the problem. Mistakes — even really big, stupid ones — happen.

Juicero. Business Insider/Alyson Shontell

It's that there is increasingly less good stuff to offset the bad. Pets.com was embarrassing in 2000, but that was also when Google was getting going. Today it's all scraps.

When platform monopolies dictate the roll-out of technology, there is less and less innovation, fewer places to invest, less to invent. Eventually, the rhetoric of innovation turns into DISRUPT, a quickly canceled show on MSNBC, and Juicero, a Google-backed punchline.

This moment of stagnating innovation and productivity is happening because Silicon Valley has turned its back on its most important political friend: antitrust. Instead, it's embraced what it should understand as the enemy of innovation: monopoly.

As Barry Lynn has shown, Silicon Valley was born of anti-monopoly.

Elizabeth Holmes, CEO of Theranos. Larry Busacca/Getty

In 1956, a Republican administration and AT&T signed a consent decree forbidding AT&T from competing in any but common carrier communications services. The decree also forced AT&T to license its patents in a non-discriminatory manner to all comers.

One of those patents was for something called the transistor, which two small companies — Texas Instruments and Motorola — would commercialize. 

In the 1960s and 1970s, an antitrust suit against IBM caused the company to unbundle its hardware and software, leading to the creation of the American software industry. It treated suppliers for its new personal computing business with kid gloves, including a small company called Micro-Soft. In the 1990s, a suit against Microsoft allowed another startup named Google to offer an innovative search engine and ad business without fear that Microsoft would use its control of the browser to strangle it.

The great business historian Alfred Chandler, in his book on the electronic century, called antitrust regulators the "Gods" of creation. Antitrust was originally understood as a uniquely American "charter of economic liberty".

But there hasn't been a Sherman Act Section 2 anti-monopolization case for 15 years. And the anti-merger Clayton Act is not being enforced. Neither Bush, nor Obama, nor Trump (so far), has seen fit to stop the monopolists from buying their way into dominance and blocking innovation. 

Take Google.

Sergey Brin is the President of Alphabet, Google's parent company. Robert Galbraith/Reuters

Yes, the company created an amazing search engine over fifteen years ago. Since then, the company bought YouTube, DoubleClick, Maps, and AdMob; it buys a company a week at this point. And it often shuts down products that don't reach 100M+ users, while investing in luxury juicing machines. Surely Google is creating cool technology. But is that technology really being deployed? Or is it locked away, as patents were in AT&T's 1956 vault before the government stepped in?

What once were upstarts and innovators are now enthroned. For instance, the iPhone is ten years old. Innovation means waiting to see if Apple will offer a bigger screen.

It's almost as thrilling as seeing yet another press release about how self-driving cars are almost working. I'm on the edge of my seat.

This is a ridiculous situation. Silicon Valley helped create the personal computer! It commercialized the internet! Popularized email!

Its scientists and engineers change the world. We have such amazing technology, and such big problems. But our liberty to address those problems in the commercial world must be protected by a democracy in the form of antitrust rules and suits, or Silicon Valley will die. 

Is that what Silicon Valley scientists and business leaders really want? To invest in and produce subpar juicers while everything cool waits on Jeff Bezos's whim? Is that what they dreamed when they were young? Is that why they admired astronauts and entrepreneurs? Was their goal really to create "anti-competitive juice packet lock-in"?

That is where a lack of democracy has brought us, and Silicon Valley.

It is time for leaders in Silicon Valley to start demanding from our government the birthright of every American, which is an open market for commerce, innovation, and personal liberty.

It is time to demand antitrust, so that what once were innovative upstarts, and are now Kings, do not stop the next wave of innovation. Then there will be so much more to invest in, so much more to invent, and so much more to actually create.

Matt Stoller is a fellow at the Open Markets Program at New America. He first shared a version of this story on Twitter.

Source : uk.businessinsider.com

First they partnered, and now comes the acquisition: The computing giant Intel has confirmed that it is acquiring Mobileye, a leader in computer vision for autonomous driving technology, for $15.3 billion — the biggest-ever acquisition of an Israeli tech company.

Specifically, “Under the terms of the agreement, a subsidiary of Intel will commence a tender offer to acquire all of the issued and outstanding ordinary shares of Mobileye for $63.54 per share in cash, representing a fully-diluted equity value of approximately $15.3 billion and an enterprise value of $14.7 billion,” the company noted in a statement. The deal is expected to close in about nine months, Intel said.

Mobileye today covers a range of technology and services, including sensor fusion, mapping, front- and rear-facing camera tech and, beginning in 2018, crowdsourcing data for high-definition maps, as well as driving policy intelligence underlying driving decisions. This deal will bring under Intel’s umbrella not only a much bigger range of the different pieces that go into autonomous driving systems, but also a number of relationships with automakers. In the call today, Mobileye’s CTO and co-founder Amnon Shashua said the company is working with 27 car manufacturers, including 10 production programs with Audi, BMW and others going into 2017.

“This acquisition is a great step forward for our shareholders, the automotive industry and consumers,” said Brian Krzanich, Intel CEO, in a statement. “Intel provides critical foundational technologies for autonomous driving including plotting the car’s path and making real-time driving decisions. Mobileye brings the industry’s best automotive-grade computer vision and strong momentum with automakers and suppliers. Together, we can accelerate the future of autonomous driving with improved performance in a cloud-to-car solution at a lower cost for automakers.”

“We expect the growth towards autonomous driving to be transformative. It will provide consumers with safer, more flexible, and less costly transportation options, and provide incremental business model opportunities for our automaker customers,” Ziv Aviram, Mobileye co-founder, president and CEO, added. “By pooling together our infrastructure and resources, we can enhance and accelerate our combined know-how in the areas of mapping, virtual driving, simulators, development tool chains, hardware, data centers and high-performance computing platforms. Together, we will provide an attractive value proposition for the automotive industry.”

Confirming our earlier report, Intel said that Mobileye’s CTO and co-founder, Prof. Amnon Shashua, will lead Intel’s autonomous driving division, which will be based in Israel. Doug Davis, Intel’s SVP, will oversee how Mobileye and Intel work together across the whole company and will report to Shashua.

Other notable exits that have tapped into Israel’s expertise in computer vision and machine learning have included Google buying mapping startup Waze for $1.1 billion and Apple buying 3D sensor specialist PrimeSense for reportedly around $300 million.

The negotiations about what stays where for Mobileye and Intel are reminiscent of one of the other big M&A deals in Israel’s tech history, concerning Waze. Originally, Waze was being courted by Facebook, although there were disagreements over where Waze’s staff would be centered: engineering wanted to stay in Israel while Facebook was keen to get them to Facebook’s HQ in Menlo Park. Ultimately, that delay led to Google swooping in, agreeing to Waze’s terms and closing the deal.

Intel has been working officially with Mobileye since last year. Earlier this year, with BMW, the two started to test 40 self-driving cars equipped with the two companies’ technology. Mobileye was also an early partner of Tesla’s for its autonomous technology, although that relationship is ending amid some controversial undertones about safety measures at the carmaker. Other investments that Intel has made in the space of cars include taking a stake in Here (which will feed into the mapping initiatives at Mobileye); acquiring Itseez and Yogitech for safety and navigation functionalities in autonomous cars; making a commitment of at least $250 million to the space (which sounds so tiny considering today’s price tag); keeping a strong presence at auto shows; and, in November, launching a dedicated autonomous driving group, which is headed up by Doug Davis, who will now report to Mobileye’s CTO.

Mobileye went public on the Nasdaq in 2014 and currently has a market cap of about $10.5 billion. It’s trading up now more than 33 percent ahead of the market opening on the news. As a point of context, the company had moved only 0.83 percent on Friday’s trades.

Intel had been a leader in processors at the peak of the PC era, although it has competed hard (and often lost) as smartphones overtook the larger devices as consumers’ computers of choice.

Moving deeper into self-driving technology is part of Intel’s bigger strategy to build up its position in emerging areas of computing. Other verticals that Intel has focused on include connected “objects” (IoT) and virtual and augmented reality. It has been following through on this strategy with acquisitions as well as organic growth.

“The combination is expected to accelerate innovation for the automotive industry and position Intel as a leading technology provider in the fast-growing market for highly and fully autonomous vehicles,” the company continued. “Intel estimates the vehicle systems, data and services market opportunity to be up to $70 billion by 2030. The transaction extends Intel’s strategy to invest in data-intensive market opportunities that build on the company’s strengths in computing and connectivity from the cloud, through the network, to the device.”

Intel has disclosed several other acquisitions in Israel to fill out that strategy, including buying a personal assistant platform from Ginger Software; Omek Interactive for gesture-based technologies; and Replay Technologies for 3D video.

Intel is not the only company that is investing in and acquiring startups in the area of computer vision to raise its game in the area of autonomous cars.

Just earlier today, Valeo, the automotive parts giant, announced that it had acquired gestigon, a start‑up out of Germany that develops in-car 3D image processing software — used both to communicate to the driver as well as pick up signals from within the car and from the driver to communicate to a self-driving (or partially self-driving) car what to do next.

Terms of the deal, which includes staff as well as IP, were not disclosed. Valeo has been a regular investor in autonomous driving tech, taking a stake, for example, in French autonomous shuttle company Navya and getting a license in California to test self-driving cars. This latest acquisition shows that it remains serious about doing more in this area.

We’ll be dialing into the companies’ call with investors in about 30 minutes and will update this story with more then.

Source : techcrunch.com

The technology is currently in development, and freely available on GitHub.

In a time when we carry gigabytes’ worth of photos and videos on our smartphones, it’s no wonder many of us have ponderously massive collections of videos stored at home. Which is all well and good; you have everything you want to save tucked away somewhere, but how do you find any of it?

Even major companies like Google and Apple have recognised the issue at hand, implementing new sorting and search functions in their local and cloud storage tools. But researchers at the University of Basel in Switzerland are looking to develop an even better system, called ‘vitrivr’.

Vitrivr is the team’s in-development video retrieval system, and it has a unique method of finding content. The user searches for a video in a database but, instead of typing out a name, draws a basic sketch of the picture or video on a linked tablet to query it. Colouring in the basic shades of the background, for example, might throw up a few results, which can in turn be used to refine the search for better accuracy.

Right now, vitrivr is being tested with a custom search engine and database, but it is scalable, meaning that in the future it can even be used with very large collections. Consider the possibility of the tool being embedded into, say, Netflix. If you’re watching on your phone, you wouldn’t even need to scroll or type out the show’s name; just draw the show’s logo and you’d find what you need.
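
For a rough sense of how query-by-sketch can work in principle, here is a toy Python example that ranks stored keyframes by how closely their coarse colour signature matches a drawing. It is not vitrivr’s actual algorithm (the real system is on GitHub); the “keyframes” and pixel data below are made up purely for illustration.

```python
# Toy query-by-sketch: reduce both the drawing and each stored keyframe to a
# coarse colour histogram and rank keyframes by histogram intersection.
from collections import Counter

def colour_signature(pixels, levels=4):
    """Quantise (R, G, B) pixels into a coarse, normalised histogram."""
    step = 256 // levels
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {bin_: n / total for bin_, n in counts.items()}

def similarity(sig_a, sig_b):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return sum(min(sig_a.get(k, 0.0), sig_b.get(k, 0.0)) for k in sig_a)

def search(sketch_pixels, database, top_n=5):
    query = colour_signature(sketch_pixels)
    ranked = sorted(database.items(),
                    key=lambda kv: similarity(query, colour_signature(kv[1])),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

if __name__ == "__main__":
    # Tiny fake "keyframes": lists of (R, G, B) pixels.
    db = {
        "sunset_clip": [(250, 120, 30)] * 50 + [(20, 20, 60)] * 50,
        "forest_clip": [(30, 120, 40)] * 100,
    }
    sketch = [(255, 130, 20)] * 60 + [(10, 10, 70)] * 40  # orange over dark blue
    print(search(sketch, db))  # -> ['sunset_clip', 'forest_clip']
```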

The vitrivr system is open source and freely available on GitHub. And if you’re good at coding, and looking to try it out, there’s even a handy startup guide here.

Source : dnaindia.com

The growth prospects and revenue generation of a business are directly related to market dynamics and scenarios. Here, extensive research is the strongest weapon in the armor of entrepreneurs. Startups can benefit greatly from this market research.

It helps them understand the current business environment and assess best practices through business intelligence. However, gathering this massive pool of information can be tedious and cost-intensive. Hence, you may prefer to opt for virtual assistant services for internet-based research.

What Should You Expect From Virtual Internet Research Specialist?

There are many important research tasks that you can entrust to virtual assistant companies. You can use these services to gain better insights into your business and make strategic decisions for enhanced performance. Some services that you can expect from these virtual assistants are:

Identifying the Data Sources:

An efficient virtual research specialist will identify the resources where you can find the information that you seek. They work extensively to find the best websites and online resources with updated information about your business.

Compiling the Data:

After finding the data resources, these specialists compile the desired information in an easily understandable format. They generally use graphs, statistics, and charts for the presentation.

Analyzing the Data:

The virtual internet research specialists help in analyzing the data collected from various trusted resources. They can create proper reports highlighting the strengths and weaknesses of your business. Also, these reports help in assessing the opportunities available and competition growing in your industry and niche. Some facts and statistics are interesting to notice but hard to comprehend. These specialists compile this data in a simple format. This data analysis can be shared within the organization with the investors and top-level managers for effective decision-making.

Assessing the Choices Available:

All the choices in terms of vendors and service providers can be comprehensively assessed on the basis of the data collected by virtual internet research specialists. This saves a lot of time otherwise spent analyzing these vendors on various parameters.

On every count, this service helps your teams understand market trends and find the best resources for the growth of your business. Apart from collecting and compiling the significant data, these specialists can also help you make sense of this information.

Why Is Hiring Virtual Internet Research Specialists A Wise Decision?

Many business owners are concerned about the efficiency of these virtual specialists and are therefore apprehensive about using them. People argue that they can get this task done by appointing an employee. Here, they need to consider how much research their business actually requires. If it is a short-term assignment, hiring an employee rarely makes sense; virtual research specialists are a viable option since these services are easily available on a per-hour basis.

Secondly, the costs entailed in appointing a full-time employee are truly intimidating. Apart from the salary, you need to invest in office overheads as well. On the contrary, you needn’t take on any extra cost for virtual specialists. They possess all the required resources and tools. Moreover, they have the right set of skills, expertise, and experience needed for specialized research work.

At AIRS we have a dedicated education program that provides training material and a certification examination to address this new and dynamic job description of Internet Research Specialist. Professional researchers owe it to themselves to seek out structured certification programs and to stay current, through associations such as AIRS, as new materials and new tools transform research problems from very difficult or impossible into quick and simple tasks.

Source : infognana.com

Sunday, 16 April 2017 15:02

How the internet was invented

In the kingdom of apps and unicorns, Rossotti’s is a rarity. This beer garden in the heart of Silicon Valley has been standing on the same spot since 1852. It isn’t disruptive; it doesn’t scale. But for more than 150 years, it has done one thing and done it well: it has given Californians a good place to get drunk.

During the course of its long existence, Rossotti’s has been a frontier saloon, a gold rush gambling den, and a Hells Angels hangout. These days it is called the Alpine Inn Beer Garden, and the clientele remains as motley as ever. On the patio out back, there are cyclists in spandex and bikers in leather. There is a wild-haired man who might be a professor or a lunatic or a CEO, scribbling into a notebook. In the parking lot is a Harley, a Maserati, and a horse.

It doesn’t seem a likely spot for a major act of innovation. But 40 years ago this August, a small team of scientists set up a computer terminal at one of its picnic tables and conducted an extraordinary experiment. Over plastic cups of beer, they proved that a strange idea called the internet could work.

The internet is so vast and formless that it’s hard to imagine it being invented. It’s easy to picture Thomas Edison inventing the lightbulb, because a lightbulb is easy to visualize. You can hold it in your hand and examine it from every angle.

The internet is the opposite. It’s everywhere, but we only see it in glimpses. The internet is like the holy ghost: it makes itself knowable to us by taking possession of the pixels on our screens to manifest sites and apps and email, but its essence is always elsewhere.

This feature of the internet makes it seem extremely complex. Surely something so ubiquitous yet invisible must require deep technical sophistication to understand. But it doesn’t. The internet is fundamentally simple. And that simplicity is the key to its success.

The people who invented the internet came from all over the world. They worked at places as varied as the French government-sponsored computer network Cyclades, England’s National Physical Laboratory, the University of Hawaii and Xerox. But the mothership was the US defense department’s lavishly funded research arm, the Advanced Research Projects Agency (Arpa) – which later changed its name to the Defense Advanced Research Projects Agency (Darpa) – and its many contractors. Without Arpa, the internet wouldn’t exist.

An old image of Rossotti’s, one of the birthplaces of the internet.

As a military venture, Arpa had a specifically military motivation for creating the internet: it offered a way to bring computing to the front lines. In 1969, Arpa had built a computer network called Arpanet, which linked mainframes at universities, government agencies, and defense contractors around the country. Arpanet grew fast, and included nearly 60 nodes by the mid-1970s.

But Arpanet had a problem: it wasn’t mobile. The computers on Arpanet were gigantic by today’s standards, and they communicated over fixed links. That might work for researchers, who could sit at a terminal in Cambridge or Menlo Park – but it did little for soldiers deployed deep in enemy territory. For Arpanet to be useful to forces in the field, it had to be accessible anywhere in the world.

Picture a jeep in the jungles of Zaire, or a B-52 miles above North Vietnam. Then imagine these as nodes in a wireless network linked to another network of powerful computers thousands of miles away. This is the dream of a networked military using computing power to defeat the Soviet Union and its allies. This is the dream that produced the internet.

Making this dream a reality required doing two things. The first was building a wireless network that could relay packets of data among the widely dispersed cogs of the US military machine by radio or satellite. The second was connecting those wireless networks to the wired network of Arpanet, so that multimillion-dollar mainframes could serve soldiers in combat. “Internetworking,” the scientists called it.

Internetworking is the problem the internet was invented to solve. It presented enormous challenges. Getting computers to talk to one another – networking – had been hard enough. But getting networks to talk to one another – internetworking – posed a whole new set of difficulties, because the networks spoke alien and incompatible dialects. Trying to move data from one to another was like writing a letter in Mandarin to someone who only knows Hungarian and hoping to be understood. It didn’t work.

In response, the architects of the internet developed a kind of digital Esperanto: a common language that enabled data to travel across any network. In 1974, two Arpa researchers named Robert Kahn and Vint Cerf published an early blueprint. Drawing on conversations happening throughout the international networking community, they sketched a design for “a simple but very flexible protocol”: a universal set of rules for how computers should communicate.

These rules had to strike a very delicate balance. On the one hand, they needed to be strict enough to ensure the reliable transmission of data. On the other, they needed to be loose enough to accommodate all of the different ways that data might be transmitted.

Vinton Cerf (left) and Robert Kahn, who devised the first internet protocol.

“It had to be future-proof,” Cerf tells me. You couldn’t write the protocol for one point in time, because it would soon become obsolete. The military would keep innovating. They would keep building new networks and new technologies. The protocol had to keep pace: it had to work across “an arbitrarily large number of distinct and potentially non-interoperable packet switched networks,” Cerf says – including ones that hadn’t been invented yet. This feature would make the system not only future-proof, but potentially infinite. If the rules were robust enough, the “ensemble of networks” could grow indefinitely, assimilating any and all digital forms into its sprawling multithreaded mesh.

Eventually, these rules became the lingua franca of the internet. But first, they needed to be implemented and tweaked and tested – over and over and over again. There was nothing inevitable about the internet getting built. It seemed like a ludicrous idea to many, even among those who were building it. The scale, the ambition – the internet was a skyscraper and nobody had ever seen anything more than a few stories tall. Even with a firehose of cold war military cash behind it, the internet looked like a long shot.

Then, in the summer of 1976, it started working.

If you had walked into Rossotti’s beer garden on 27 August 1976, you would have seen the following: seven men and one woman at a table, hovering around a computer terminal, the woman typing. A pair of cables ran from the terminal to the parking lot, disappearing into a big grey van.

Inside the van were machines that transformed the words being typed on the terminal into packets of data. An antenna on the van’s roof then transmitted these packets as radio signals. These signals radiated through the air to a repeater on a nearby mountain top, where they were amplified and rebroadcast. With this extra boost, they could make it all the way to Menlo Park, where an antenna at an office building received them.

It was here that the real magic began. Inside the office building, the incoming packets passed seamlessly from one network to another: from the packet radio network to Arpanet. To make this jump, the packets had to undergo a subtle metamorphosis. They had to change their form without changing their content. Think about water: it can be vapor, liquid or ice, but its chemical composition remains the same. This miraculous flexibility is a feature of the natural universe – which is lucky, because life depends on it.

A plaque at Rossotti’s commemorating the August 1976 experiment.

The flexibility that the internet depends on, by contrast, had to be engineered. And on that day in August, it enabled packets that had only existed as radio signals in a wireless network to become electrical signals in the wired network of Arpanet. Remarkably, this transformation preserved the data perfectly. The packets remained completely intact.
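
As a loose illustration of that “change form, not content” idea, the Python sketch below wraps the same payload in an invented “radio” framing, strips it at a gateway, and re-wraps it in an invented “wired” framing. The header formats are made up and bear no relation to the 1976 protocol or to TCP/IP; the point is only that the payload survives the crossing untouched.

```python
# Minimal internetworking sketch: the same payload crosses two fake networks,
# changing its framing (form) at the gateway while its content stays intact.

def frame_radio(payload: bytes, channel: int) -> bytes:
    return b"RADIO" + bytes([channel]) + len(payload).to_bytes(2, "big") + payload

def unframe_radio(frame: bytes) -> bytes:
    length = int.from_bytes(frame[6:8], "big")
    return frame[8:8 + length]

def frame_wired(payload: bytes, host: int) -> bytes:
    return b"WIRED" + bytes([host]) + len(payload).to_bytes(2, "big") + payload

def unframe_wired(frame: bytes) -> bytes:
    length = int.from_bytes(frame[6:8], "big")
    return frame[8:8 + length]

message = b"hello from the beer garden"

radio_frame = frame_radio(message, channel=7)   # leaves the van as a radio frame
payload = unframe_radio(radio_frame)            # gateway strips the radio framing
wired_frame = frame_wired(payload, host=42)     # re-framed for the wired network
assert unframe_wired(wired_frame) == message    # the content survives intact
print("delivered:", unframe_wired(wired_frame).decode())
```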

So intact, in fact, that they could travel another 3,000 miles to a computer in Boston and be reassembled into exactly the same message that was typed into the terminal at Rossotti’s. Powering this internetwork odyssey was the new protocol cooked up by Kahn and Cerf. Two networks had become one. The internet worked.

“There weren’t balloons or anything like that,” Don Nielson tells me. Now in his 80s, Nielson led the experiment at Rossotti’s on behalf of the Stanford Research Institute (SRI), a major Arpa contractor. Tall and soft-spoken, he is relentlessly modest; seldom has someone had a better excuse for bragging and less of a desire to indulge in it. We are sitting in the living room of his Palo Alto home, four miles from Google, nine from Facebook, and at no point does he even partly take credit for creating the technology that made these extravagantly profitable corporations possible.

The internet was a group effort, Nielson insists. SRI was only one of many organizations working on it. Perhaps that’s why they didn’t feel comfortable popping bottles of champagne at Rossotti’s – by claiming too much glory for one team, it would’ve violated the collaborative spirit of the international networking community. Or maybe they just didn’t have the time. Dave Retz, one of the researchers at Rossotti’s, says they were too worried about getting the experiment to work – and then when it did, too worried about whatever came next. There was always more to accomplish: as soon as they’d stitched two networks together, they started working on three – which they achieved a little over a year later, in November 1977.

Over time, the memory of Rossotti’s receded. Nielson himself had forgotten about it until a reporter reminded him 20 years later. “I was sitting in my office one day,” he recalls, when the phone rang. The reporter on the other end had heard about the experiment at Rossotti’s, and wanted to know what it had to do with the birth of the internet. By 1996, Americans were having cybersex in AOL chatrooms and building hideous, seizure-inducing homepages on GeoCities. The internet had outgrown its military roots and gone mainstream, and people were becoming curious about its origins. So Nielson dug out a few old reports from his files, and started reflecting on how the internet began. “This thing is turning out to be a big deal,” he remembers thinking.

What made the internet a big deal is the feature Nielson’s team demonstrated that summer day at Rossotti’s: its flexibility. Forty years ago, the internet teleported thousands of words from the Bay Area to Boston over channels as dissimilar as radio waves and copper telephone lines. Today it bridges far greater distances, over an even wider variety of media. It ferries data among billions of devices, conveying our tweets and Tinder swipes across multiple networks in milliseconds.

The Alpine Inn Beer Garden today – still a place where Silicon Valley crowds gather.

This isn’t just a technical accomplishment – it’s a design decision. The most important thing to understand about the origins of the internet, Nielson says, is that it came out of the military. While Arpa had wide latitude, it still had to choose its projects with an eye toward developing technologies that might someday be useful for winning wars. The engineers who built the internet understood that, and tailored it accordingly.

That’s why they designed the internet to run anywhere: because the US military is everywhere. It maintains nearly 800 bases in more than 70 countries around the world. It has hundreds of ships, thousands of warplanes, and tens of thousands of armored vehicles. The reason the internet can work across any device, network, and medium – the reason a smartphone in Sao Paulo can stream a song from a server in Singapore – is because it needed to be as ubiquitous as the American security apparatus that financed its construction.

The internet would end up being useful to the US military, if not quite in the ways its architects intended. But it didn’t really take off until it became civilianized and commercialized – a phenomenon that the Arpa researchers of the 1970s could never have anticipated. “Quite honestly, if anyone would have said they could have imagined the internet of today in those days, they’re lying,” says Nielson. What surprised him most was how “willing people were to spend money to put themselves on the internet”. “Everybody wanted to be there,” he says. “That was absolutely startling to me: the clamor of wanting to be present in this new world.”

The fact that we think of the internet as a world of its own, as a place we can be “in” or “on” – this too is the legacy of Don Nielson and his fellow scientists. By binding different networks together so seamlessly, they made the internet feel like a single space. Strictly speaking, this is an illusion. The internet is composed of many, many networks: when I go to Google’s website, my data must traverse 11 different routers before it arrives. But the internet is a master weaver: it conceals its stitches extremely well. We’re left with the sensation of a boundless, borderless digital universe – cyberspace, as we used to call it. Forty years ago, this universe first flickered into existence in the foothills outside of Palo Alto, and has been expanding ever since.

Author: Ben Tarnoff
Source: theguardian.com

With the press of a digital “panic” button, immigrants detained by ICE may soon be able to send customized, encrypted messages to friends and family from their mobile phones in a last-minute attempt to share final parting words or critical information.

Notifica, which has yet to launch, is just one mobile app being developed for immigrants at a time when both legal and undocumented immigrants are increasingly worried about their status in the U.S. Others are working on tools that provide real-time alerts about ICE raids and information on legal resources.

While apps aimed at helping immigrant communities have been around for years, developing digital safety nets and resources for this growing market has taken on new urgency in the wake of President Donald Trump’s immigration policies.

“A lot of people in the tech community have been upset with what happened after the election and want to do something about it,” said Natalia Margolis, a software engineer at the digital agency Huge who partnered with Notifica founder Adrian Reyna to design the app. “Trump winning the election was certainly a wake up call to a lot of people.”

Potential users of these apps have grown exponentially in recent years — an estimated 95 percent of Americans own a cellphone of some kind and the share of those who own smartphones is now 77 percent, up from 35 percent in 2011, when Pew Research Center conducted its first survey of smartphone ownership.

An estimated 98 percent of U.S. Latinos own cellphones, according to Pew. Within that group, about 75 percent own smartphones.

So far, there are about 8,000 people on a waiting list to download Notifica, according to Reyna, director of membership and technology strategies for the national immigrant rights organization, United We Dream.

As an undocumented resident himself, Reyna understands what’s needed: “In a moment when you don’t know what to do because everything is crashing down on you and you’re trying to get a hold of many people at the same time … we want to narrow that down to one thing.”

The app gained national attention after its debut at the SXSW conference last month.

Notifica, or “notify,” works with the press of a button by sending out a message blast to up to 15 of the person’s trusted contacts in a matter of seconds. The messages are pre-loaded and protected by a PIN, which users share with loved ones. All communication is deleted from the app once the messages are sent out, according to Reyna. If they don’t have immediate access to their cellphones, users can also call a hotline to have the messages sent out at a later time.
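
A rough Python sketch of that flow might look like the following. It is not Notifica’s implementation: send_sms() is a stand-in for whatever messaging backend a real app would use, the PIN handling is simplified, and nothing here is actually encrypted.

```python
# Sketch of the described flow: pre-loaded messages, a PIN check, a blast to a
# short list of trusted contacts, then local deletion of the messages.
import hashlib

MAX_CONTACTS = 15

def pin_hash(pin: str) -> str:
    return hashlib.sha256(pin.encode()).hexdigest()

def send_sms(number: str, text: str) -> None:
    # Placeholder: a real app would hand this to an SMS or push gateway.
    print(f"-> {number}: {text}")

def panic(entered_pin: str, stored_pin_hash: str, contacts: list, messages: dict) -> bool:
    """Send each contact their pre-loaded message, then wipe the local copies."""
    if pin_hash(entered_pin) != stored_pin_hash:
        return False  # wrong PIN, do nothing
    for number in contacts[:MAX_CONTACTS]:
        send_sms(number, messages.get(number, "I may have been detained. Please check on me."))
    messages.clear()  # delete messages after sending
    return True

if __name__ == "__main__":
    stored = pin_hash("4321")
    contacts = ["+15550001111", "+15550002222"]
    msgs = {"+15550001111": "Call the hotline and my lawyer.",
            "+15550002222": "Please pick up the kids from school."}
    panic("4321", stored, contacts, msgs)
```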

Gabriel Belmonte, of San Jose, described the app as “a good resource to send out any status reports.” The 35-year-old, who has temporary deportation relief under the Deferred Action for Childhood Arrivals program, said younger people would be much more likely to use the app.

“For any apps, it’s really the younger people that take precedence in using it,” he said. “There’s a disconnection between the older generation and the younger generation.”

Technology has made a natural progression into the realm of immigration, but “the Trump administration has accelerated this, especially in Silicon Valley,” said Eduardo Gaitan, co-founder of the Arrived app, which provides a one-stop shop of resources for immigrants, from housing and education to employment and deportation. The app launched in 2016.

“Most companies are feeling the effects of the new administration directly. I think it’s at the top of everyone’s mind,” said Gaitan, a Google employee.

The app’s founders are considering adding an emergency feature in which users can send custom messages in a variety of languages, much like Notifica.

“Mobile technology is moving into more and more spheres, and immigration makes a lot of sense,” said co-founder William McLaughlin. “It makes sense to have something that addresses one of the biggest challenges one can face.”

Ira Mehlman, spokesman for the Federation for American Immigration Reform, said, “Technology is going to do what it’s going to do.”

“It can inform people about what their options are, but in the end, if ICE and the administration are intent on enforcing the law, then it’s going to happen,” he added.

Software developer Celso Mireles is building a tool that delivers real-time alerts about nearby ICE raids or checkpoints. RedadAlertas, or “Raid Alerts,” will start off as a web app accessible through a website and text message-based alerts, according to Mireles, who plans to eventually build a mobile app.

“These alerts will inform vulnerable immigrants of risks they may face in their neighborhood or workplace,” Mireles wrote on a website for the app. “They will also enable legal aid groups, community organizations and activists to respond rapidly to protect immigrant communities.”


Apps for undocumented immigrants

Notifica: With the press of a button, immigrants who have been detained by ICE can send customized, encrypted messages to friends and family from their mobile phone.

Arrived: Created by a team largely composed of Google employees, the app aims to empower immigrants with knowledge on a variety of topics, from housing and education to jobs and deportation.

RedadAlertas: Delivers real-time, verified alerts about nearby ICE raids or checkpoints.

Source : mercurynews.com

Tuesday, 04 April 2017 18:17

How To Tread Lightly Into The Dark Web

Though anyone can gain access to the dark net using Tor software, this illicit and unregulated part of the internet is not for the faint of heart. Because it’s so unstructured, the dark web is not a place where one can go without knowing exactly what they’re looking for and exactly where to find it.

But what about businesses that want to explore the dark web, specifically to see if their data or information has been compromised, without the risk that comes with poking around in such an unfamiliar place?

Owl Cybersecurity created a way to safely query the dark net.

Just weeks ago at PYMNTS’ Innovation Project, Alison Connolly, director of strategic partnerships at Owl Cybersecurity, shared how the company is creating an opportunity for dark net big data to be harnessed through a commercially available database.

Let The Search Begin

Owl Cybersecurity utilizes bots to anonymously and continuously collect information from the dark net, which is then indexed and stored in a database. That database provides a reflection of what exists on the dark net, without having to actually dig into it.

Though there are no search engines available in the dark web, Connolly said the database of actionable dark net data that Owl Cybersecurity provides can serve a similar purpose.

Using the database, an organization can query information, such as an email address, Social Security Number, a phrase or even an internal project name. The database will then return results of where mentions of that data exist, meaning all of the pages associated with that queried information across the dark web.

Connolly demonstrated a query for pages on the dark net offering between 5 and 50,000 credit cards. The results showed that within the last week, more than 49,000 pages contained the kind of credit card dump Connolly was looking for. On one page, a person’s full credit card number, CVV, ATM PIN, Social Security number, birth date and mother’s maiden name were displayed and available for sale.

Of course the person whose compromised data was on display would want to know that their information was widely available on the dark web, but this exposure was also beneficial to the issuing bank of the credit card, Connolly noted.

Organizations can leverage the database to monitor for specific queries or data by setting up a saved search.
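
In spirit, this is querying an already-collected index rather than the dark net itself. The toy Python sketch below shows the general shape of that idea (a tiny inverted index plus a saved-search loop), with all pages, queries, and labels invented for illustration; it is not Owl Cybersecurity’s product or API.

```python
# Toy "query the index, not the dark net" sketch: index crawled page text,
# answer phrase queries, and re-run saved searches as a monitoring step.
from collections import defaultdict

class DarkIndex:
    def __init__(self):
        self._postings = defaultdict(set)  # term -> set of page ids
        self._pages = {}                   # page id -> raw text

    def add_page(self, page_id: str, text: str) -> None:
        self._pages[page_id] = text
        for term in text.lower().split():
            self._postings[term].add(page_id)

    def query(self, phrase: str) -> list:
        """Return ids of pages containing every term in the phrase."""
        terms = phrase.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self._postings.get(t, set()) for t in terms))
        return sorted(hits)

def run_saved_searches(index: DarkIndex, saved: dict) -> None:
    """Re-run monitored queries and report anything that currently matches."""
    for label, phrase in saved.items():
        matches = index.query(phrase)
        if matches:
            print(f"[alert] {label}: {len(matches)} page(s) -> {matches}")

if __name__ == "__main__":
    idx = DarkIndex()
    idx.add_page("page-001", "fresh dump of cards with pin and cvv for sale")
    idx.add_page("page-002", "forum chatter about acme corp internal project falcon")
    run_saved_searches(idx, {
        "card dumps": "cards cvv",
        "company mention": "acme corp project falcon",
    })
```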

Putting Dark Data To Use

Before creating its database of DARKINT (actionable data from the dark net), Owl Cybersecurity’s core business was performing assessments and penetration testing for companies.

The firm would be hired to break into an organization’s system.

After the penetration tests, Owl Cybersecurity would share its results with the organization in order to help them point to vulnerabilities and places where their infrastructure needed to be fixed.

Connolly said that work would always include first going to the dark net to see how much exposure an organization actually had.

In an effort to be more efficient in that initial process, Owl Cybersecurity started cataloging and indexing data because doing it manually each time would take hours.

“Fast forward to today: We’ve actually put the user interface on it and now offer clients direct access to the dark net database,” Connolly explained.

This safe way to access the dark net can be especially helpful, considering Owl Cybersecurity’s research has shown that about half of dark net websites go up and down at least once a day.

“That’s one of the big hurdles on the dark net, whereas our tool will capture that and then we have a 60-day look back,” she added.

Not only does the database help to solve problems for Owl Cybersecurity’s clients, but it is also more complete than the dark net and easier to navigate.

“You have a way to query the dark net without exposing your organization,” Connolly said.

Source : pymnts.com

There are several search engine options available to people on the web. Google and Bing are not the only ones, and you’d be surprised to find out that some folks use some fairly obscure search engines. But what happens when your favorite search engine doesn’t cut it? It means you’ll have to type the same query into another search engine. However, what if it were possible to type your query once and get results from some of the most relevant search engines available on the web right now?

Noog for Windows PC

For that to happen, you’ll need to download a piece of software called NooG. Hey, it’s a weird name for web search software, but that’s fine; the name doesn’t matter if the software is capable of working wonders.

The NooG program can make queries by country, language, document type, time of creation, and website, and can exclude certain phrases, among other options.

To get NooG up and running on your computer system, download and install it on your Windows PC. It is less than 1MB, so it should be a quick download. The installation is pretty quick as well, and when it comes down to viruses, we came across none, so that’s a plus right there.

After installing, the user should see a blue search box on their computer screen. Only three search engines are supported here: Google, Bing, and Wikipedia. The default is Google, but this can be changed quite quickly.

Just click on the hamburger button, hover the mouse pointer over “Search Engines,” then click your preferred engine. Once you’ve added your search query, hit Enter, and the results should pop up in your search engine of choice.

Bear in mind, though, that results do not appear in NooG itself, nor do they necessarily show up in your default web browser from the get-go. By default, NooG shows queries in Google Chrome, but this can be changed by simply clicking inside the search box to bring up the Advanced menu. For now, only five web browsers are supported: Internet Explorer, Google Chrome, Mozilla Firefox, Opera, and Opera Stable.

Users can do a host of other things via the Advanced menu as well. It is possible to change the interface language, the country, and the time period, among other things. Everything here affects the way you search, but be careful and be sure you know what you’re doing.
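
Under the hood, a tool like this most likely just composes an advanced-search URL from the chosen options and hands it to a browser. The Python sketch below shows that general technique; it is not NooG’s source code, and while the parameters used (q, lr, cr, tbs) are commonly documented Google advanced-search parameters, they may change over time.

```python
# Compose a Google advanced-search URL from user-selected options and open it
# in the system's default browser (a rough analogue of what NooG appears to do).
import webbrowser
from urllib.parse import urlencode

def build_google_query(terms, site=None, exclude_phrase=None,
                       language=None, country=None, past=None):
    q = terms
    if site:
        q += f" site:{site}"              # restrict to one website
    if exclude_phrase:
        q += f' -"{exclude_phrase}"'      # exclude an exact phrase
    params = {"q": q}
    if language:
        params["lr"] = f"lang_{language}"   # e.g. lang_en
    if country:
        params["cr"] = f"country{country}"  # e.g. countryUS
    if past:
        params["tbs"] = f"qdr:{past}"       # d / w / m / y = past day/week/month/year
    return "https://www.google.com/search?" + urlencode(params)

if __name__ == "__main__":
    url = build_google_query("internet research certification",
                             site="example.com",
                             exclude_phrase="webinar",
                             language="en", country="US", past="m")
    print(url)
    webbrowser.open(url)  # opens the query in the default browser
```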

Overall, we find NooG to be quite easy to use, and it does get the job done. However, the number of search engines the user will be able to search from is limited to only three, and that is not enough.

It is available for download from its official website.

Source : thewindowsclub.com

Monday, 03 April 2017 18:10

Apple iPhone 8 rumors and news

Apple’s iPhone 8 isn’t supposed to arrive until much later in 2017, but that hasn’t stopped legions of fans from fervently speculating about it. It may or may not launch alongside the 2017 iPhone models, the iPhone 7S and 7S Plus, or it could end up being an incredible technical showcase phone released as an anniversary celebration model at a later date.

For now, it’s all up in the air, but what we’re hearing about it keeps us interested. Very interested. Here’s what we’ve learned so far about the iPhone 8, which has also become known as the iPhone X and iPhone Edition.

Design: Bezel-less screen, no home button

For some time now, we’ve been hearing rumors that the iPhone 8 will have an edge-to-edge, or bezel-less, screen, potentially with OLED technology. Many reports say Apple is still finalizing the design for the new iPhone, such as this one from Mac Otakara, which states that while there was a prototype built without a home button, that prototype may not end up being the final design.

However, if one was built without a home button and a bezel-less screen, what would happen to the fingerprint sensor for Touch ID? A suspicious rumor, published in early April, says Apple will move the sensor to the back of the phone. Talk of Touch ID being integrated into the screen was premature, says the report, due to the technology not being ready for widespread use yet.

Why suspicious? Apparently, all this came from a third-party speaking to an anonymous Foxconn employee, meaning the renders accompanying the report are based on secondhand information, and have been created using eye-witness accounts. It also looks a lot like a manipulated image of the Samsung Galaxy S8. The source relaying the information to the site says the resulting image is close to a nearly-final version of the iPhone 8.

It continues to say the iPhone Edition will be made from metal and not all glass, while the screen will resemble 2.5D glass panels over screens we see today, rather than being completely curved. The concept also shows a vertical dual-lens camera stack on the back of the phone, rather than the horizontal layout seen on the iPhone 7 Plus. We’d suggest taking this image, and the information, as fan speculation rather than hard evidence for now.

Before this, a report from market research firm Cowen and Company suggested that the iPhone 8’s earpiece, FaceTime camera, and Touch ID fingerprint sensor will be embedded into the screen, allowing for a seamless edge-to-edge front panel. It said Apple may switch to Synaptic’s optical-based fingerprint reader for the new Touch ID, citing it as “currently the only workable solution” for detecting a fingerprint through a smartphone screen.

BGR is intimating that the top bezel will also be removed. That would be a tricky move, as the top bezel houses the ambient light sensor, a proximity sensor, the front speaker, and the front-facing camera. However, the iPhone 8 may feature a touchscreen with embedded sensors.

A patent discovered by Apple Insider suggests that Apple has considered moving the front-facing sensors to underneath the display. For a closer look, you can check out U.S. patent No. 9,466,653, titled “Electronic devices with display-integrated light sensors.”

 

These reports corroborate rumors brought to light by Apple insider John Gruber, who was among the first to say that the iPhone 8 also may not have a single bezel — that plays well with the idea of a single sheet of glass. The entire front of the device could be one giant display, and the Touch ID sensor would be embedded in the screen itself. This has been reiterated in The New York Times.

As for the size of the devices, Gruber says he doesn’t know whether Apple is “going to shrink the actual thing in your hand to fit the screen sizes we already have, or whether they’re going to grow the screens to fit the devices we’re already used to holding.”

The news follows another bombshell. The Wall Street Journal reported in February that Apple will ditch the iPhone’s Lightning port in favor of USB Type-C, the industry standard connector for smartphones, laptops, and chargers. It isn’t clear from the report whether Apple means to replace the iPhone’s Lightning port with a Type-C port; one possible interpretation is that the Cupertino, California-based company will adopt Type-C for the phone’s wall charger and retain the Lightning port on the iPhone 8 itself.

Either way, a wholesale switch seems highly unlikely. Apple has long eschewed standard USB connectors on iOS devices, preferring its own proprietary 30-pin connector and, later, the Lightning port. It has relaxed its stance recently, most visibly on its MacBook line, which features USB-C connectors. But Apple’s new Ultra Accessory Connector would seem to signal that the company has no plans to drop Lightning anytime soon.

The Wall Street Journal corroborated other rumblings about the iPhone 8’s display, home button, and more. The iPhone 8 will feature a curved OLED screen similar to those on Samsung’s Galaxy S7 devices. It will also do away with the physical home button. And it will launch alongside two other smartphones. Analyst Ming-Chi Kuo agreed, saying the iPhone 8 could see the elimination of the iconic home button and Touch ID sensor in favor of “virtual buttons” at the bottom of the screen.

The screen size of the new iPhone has been the subject of some debate, but it now seems as though everyone agrees it will sit at 5.8 inches. Previously, Nikkei Asian Review suggested that the display would instead be 5 inches; however, the outlet, which is known for iPhone leaks, has since changed its tune in a revised report.

Kuo reports that the virtual buttons will take up part of the iPhone 8’s screen, a rumored 5.8-inch OLED panel with a resolution of 2,800 x 1,242 pixels, a figure Nikkei now agrees with. And he believes the phone “will come with other biometric technologies that replace the current fingerprint recognition technology.” Kuo notes the overall footprint would be comparable to the 4.7-inch TFT-LCD iPhone, though with a measurably larger display and longer battery life.
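For a sense of what those numbers would mean in practice, here is a quick back-of-the-envelope pixel-density calculation, assuming the rumored 2,800 x 1,242 resolution and 5.8-inch diagonal are accurate:

$$
\text{ppi} = \frac{\sqrt{2800^{2} + 1242^{2}}}{5.8} \approx \frac{3063}{5.8} \approx 528
$$

That works out to roughly 528 pixels per inch, comfortably above the 401 ppi of the iPhone 7 Plus, though only if the rumored figures hold.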

The Wall Street Journal said in mid-June that the iPhone 8 will be radically different. New information from sources speaking to Bloomberg also reiterates that the iPhone 8 may “appear like a single sheet of glass.” That would eliminate much of the bezel around the display as well as the home button. There are mockups of the possible design all over the internet.

Although going back to glass may seem like an odd retro move for Apple, it would also open up possibilities like wireless charging, which is nearly impossible to achieve with an all-metal device. Moreover, Kuo suggests that higher-end models of future iPhones will likely use stainless steel in their cases — so look out, world. We’re about to get real fancy.

Here’s a concept video made by Ran Avni at ConceptsiPhone, which shows what a borderless iPhone could look like. The design in the video is based on Marek Weidlich’s design on Behance.

The name: iPhone 8, iPhone X, or even iPhone Edition

Despite being known as the iPhone 8 in many rumors, the phone may come alongside the iPhone 7S and 7S Plus in late 2017. This may confuse things for 2018, when an iPhone 8 would be more logical, if we follow Apple’s usual naming traditions. That’s why Apple may decide to use a different name for it, and there are several possible options being rumored.

The latest is the possibility it will be named the iPhone Edition. Although unusual, this fits with Apple’s strategy for the Apple Watch, where it referred to the expensive versions made from precious metals as the Watch Edition. We’re expecting the iPhone 8 to be a special edition of some kind, due to the rumors connecting it with cutting-edge technology not found in the regular iPhone, so the name does make some sense. However, it’s not a catchy name, and although it comes from a credible source, it is still entirely unofficial. Additionally, the source says the iPhone Edition will have a 5-inch screen, not a 5.8-inch screen as previously expected, which may point to it being a different phone entirely.

Alternatively, the other name being rumored is the iPhone X. Cool, right? And one suitably fitting for all the next-generation technology that’s supposed to be packed inside. This comes from anonymous sources speaking to Fast Company, where several other rumors about the phone are repeated, including that it will have a 5.8-inch OLED screen, without bezels, mounted in a body made of stainless steel. The glass back from older iPhone models may make a return, too. The familiar Home button may disappear, to be replaced by new touch technology under the screen, and even the volume and sleep/wake keys may be removed, ready for touch-sensitive panels instead.

It doesn’t stop there. Another piece of exciting new tech rumored for the iPhone X is a 3D-sensing camera, which may be used for facial recognition, or for augmented reality. Add in a bigger battery and a cool-sounding “monolithic” design, and the iPhone X sounds like the most technically exciting iPhone we’ve ever seen. However, it may all come at a steep price, with a tag in excess of $1,000 likely.

Before we get too far ahead of ourselves, none of the iPhone X’s rumored specifications have been confirmed, and there’s a chance some of the features mentioned are destined for future iPhone models rather than arriving all at once on a single, amazing phone. We can still hope, though.

Screen: OLED for the iPhone


If the iPhone 8, or a special edition iPhone under a different name, does arrive in 2017, rumors say it may have an OLED screen. This would be a first for Apple on the iPhone, but it’s not a done deal yet. While the majority of the rumors link the two together, a report at the beginning of March 2017 said Apple is still testing technology for the device, including OLED and LCD screen panels, both curved and flat.

The latest rumors indicate that not only will the display be OLED, but it will also be True Tone, at least according to a report from MacRumors, which cites Barclays. Currently, the only Apple device with a True Tone display is the 9.7-inch iPad Pro. True Tone displays adjust the screen’s color temperature to match the ambient light. So, if you’re in a room with an orange light bulb, the colors will shift slightly so they appear more natural. It’s very similar to Night Shift, which adjusts the color to cut out blue light at night.

Previous rumors indicated that not only will the display use OLED technology, but it will also be slightly curved and built by Samsung. The report comes from Nikkei Asian Review, which notes that Apple has chosen not to use drastically curved edges as found in some of Samsung’s phones and, as a result, there aren’t expected to be many features tied to the curve other than a fresh design.

Still, other reports indicate that rumors of a curved-display iPhone are exaggerated. While Apple may at one time have been testing an iPhone with a curved display, IHS analyst Wayne Lam, who analyzes Apple’s supply chain, said the iPhone will only come with a flat display.

“Much like the recently announced LG G6, we anticipate a touchscreen with a new, longer aspect ratio design to take advantage of higher coverage area of the iPhone in its entirety. This new design language is expected to become the trend for 2017, as we all anticipate Samsung’s reveal later this month,” said Lam.

The rumors first began with Ming-Chi Kuo, an analyst from KGI Securities known for his often accurate insight and predictions about Apple’s upcoming products, who said the 2017 iPhone will feature an OLED display that is flexible. This flexible display will also have a structural metal component to “avoid deforming the form factor of the flexible OLED display.” According to 9to5Mac, Kuo said the 2017 OLED iPhone may also use a film sensor for a better 3D Touch user experience, as film sensors reportedly offer “higher sensitivity.”

Apple’s Touch ID technology could also be revamped to complement the bezel-less device, in that it may be replaced by a facial recognition system. But as there are a lot of technical challenges to the technology, the company may use a combination of fingerprint and facial recognition technology.

“Before Apple can fully replace the fingerprint system with facial recognition, a combination of the two steps of bio-recognition could be a valid solution for enhancing transactions security,” Kuo said.

Apple may have to fall back on the two-step method, as one of the issues reportedly plaguing the iPhone 8 is OLED production. A report from Bloomberg revealed that Japan-based Canon Tokki, which makes many of the machines used to fabricate OLED panels, may have trouble meeting demand.

It notes that Canon Tokki has “a growing backlog” of orders even after doubling its output in 2016. Case in point: The wait for an OLED-making machine from Canon Tokki Corp. is about two years.

Initially, Samsung was expected to be the sole supplier of displays for the new iPhone models. According to DigiTimes, Samsung expects to ship between 60 million and 70 million OLED units in 2017, at a maximum rate of 20 million units per month.

However, LG and Japan Display are reportedly looking to enter production later on in the process. Another report suggests Apple is speaking with Sharp Corp. to supply additional OLED screens. On September 30, Sharp announced a $566 million investment in developing OLED production facilities, citing June 2018 as the target date for product output.

“Apple has unofficially or as a nod encouraged Sharp to go into it,” Amir Anvarzadeh, Singapore-based head of Japanese equity sales at BGC Partners Inc., told Bloomberg in a phone interview. “Apple’s general strategy is to increase the competition on the supply side, and dilute the risk exposure to one company.”

If the iPhone 8 does come with an OLED screen, it’ll likely be expensive, possibly beyond $1,000. The Wall Street Journal reports that the displays are costly to produce. However, the same report notes that Apple could decide against the OLED model altogether. There’s precedent for such a 180: Last year, a global shortage of sapphire glass forced Apple to abandon the material for the iPhone 7.

A Motley Fool report hints that OLED panels may be used across the iPhone range in 2017 or 2018, not just on the so-called iPhone 8. According to the DigiTimes note it cites, “supply chain sources believe that 50 million of these AMOLED-equipped iPhones will make it out to customers in the first year of availability.” While 50 million sounds like a lot, it’s a far cry from the 200 million iPhones Apple sold last year, perhaps indicating that the company will shift to OLED gradually rather than equipping all phones with the new tech right away.

The new iPhone’s display may also be flexible. Samsung Display, the subdivision of electronics behemoth Samsung that oversees the company’s display technologies, will supply Apple with “millions” of curved screens for the iPhone 8, according to The Korea Herald. The screen is reportedly plastic, as opposed to the Gorilla Glass of iPhones past, and “curved all over.”

Earlier, a report from Nikkei Asian Review suggested that at least one new iPhone will feature a premium OLED display that’s curved on both sides, somewhat like Samsung’s Edge series. And a document that surfaced on Chinese social media corroborated those details. The sketches show a high-end iPhone, code-named “Ferrari,” that boasts a “glass sandwich” design, an edge-to-edge OLED display, wireless charging, and a touch-sensitive home button. This device may have evolved into the iPhone X, about which rumors began in early 2017.

Specs: A11 chip

Every year, Apple upgrades the processor in its new iPhone. The iPhone 8 will likely get the new A11 chip, which DigiTimes reports will use a 10nm manufacturing process. The chip should be even faster than the A10 Fusion processor, which has been heralded by critics as the best mobile phone processor.

New reports indicate that the A11 chip is set to enter production soon. The report comes from Economic Daily News and was picked up by Apple Insider, which notes that TSMC will begin production of the new A11 chip in April, aiming to produce 50 million chips before July. Not only that, but the firm will reportedly produce 100 million chips before the end of 2017.

If the iPhone 8 is announced alongside the iPhone 7S and 7S Plus, it may share the same A11 chip as those new phones. And according to Chinese research firm Trendforce, it will have plenty of internal storage: The iPhone 8 is said to come in two configurations, 64GB and 256GB. RAM remains a mystery.

Release date and price

Apple traditionally launches a new iPhone every September, and so far that seems likely to happen again in 2017. The iPhone 8, as it has become known, may be renamed the iPhone X or iPhone Edition, and act as a special anniversary edition of the phone. However, while the phone may be announced alongside the iPhone 7S and 7S Plus, it may not be released until later, according to at least one source.

We have no information on a specific launch or release date yet. The price? The iPhone 8/iPhone X/iPhone Edition may cost more than $1,000, due to the amount of new technology inside and its high-end specifications. A 256GB iPhone 7 Plus is already $970, so this isn’t a huge stretch. Specifically, reports indicate that the new iPhone’s OLED 3D Touch module costs a hefty 60 percent more than the LCD 3D Touch module, leading to a higher price for the device. The report comes from 9to5Mac, which cites information from DigiTimes.

Apple has reportedly asked manufacturers to start trial production earlier than usual — in the first quarter of 2017, according to DigiTimes. It’s unlikely the release date will be moved up earlier, but perhaps Apple is anticipating higher demand than usual.

Battery: Wireless charging

Apple has long been rumored to be working on wireless charging for a future version of the iPhone, and rumors suggest it will use its own self-built tech, but that doesn’t mean third parties won’t develop their own wireless charging accessories for the phone.

In fact, popular wireless charger manufacturer Powermat has said that it will work to support whatever wireless charging standard the new iPhone uses. Some reports have indicated that the device will support Qi charging; however, others indicate that it may use a modified version of Qi that won’t work with standard Qi chargers, much like the Apple Watch does.

“In the wake of recent news that Apple has joined the Wireless Power Consortium, Powermat announced that the company would be ready to support iPhones with whatever wireless charging protocol Apple employs,” Powermat said in a statement.

According to Reuters, Apple has at least five different groups working on wireless charging technology. And lending credence to the wireless charging rumors, Apple recently joined the Wireless Power Consortium, which promotes the Qi charging standard. It doesn’t necessarily mean the next iPhone will have wireless charging or that Apple will use the Qi standard, but it adds more weight to the rumors.

A new report by Mac Otakara claims the 5-inch OLED iPhone model will be the only one of the three to feature a glass casing and wireless-charging capabilities. This goes against the grain of previous reports suggesting all the devices will have wireless charging, so take the information with a dose of skepticism.

Mac Otakara also suggests, according to MacRumors, that wireless charging will require a separate accessory and will be contact-based, similar to the Apple Watch. The report further says Apple will not include a Lightning to 3.5mm headphone jack adapter in the box, and that the Lightning to USB Type-C cable will still be an optional purchase.

A separate report by KGI Securities analyst Ming-Chi Kuo claimed the iPhone 8 will have a more expensive logic board design, which would allow for longer battery life, according to MacRumors. The new logic-board design would allow for the OLED iPhone to have dimensions similar to a 4.7-inch iPhone, but it could offer comparable battery life to a 5.5-inch iPhone.

Kuo expects the device to have a 2,700mAh L-shaped two-cell battery pack. The OLED display could also allow the device to be more energy-efficient, meaning the device could have better battery life than previous 5.5-inch iPhones.

An earlier report from KGI Securities analyst Ming-Chi Kuo, relayed by MacRumors, said Apple will use wireless charging in all three of its upcoming iPhones this year. Because wireless charging increases the internal temperature of the device, the iPhone 8 will have a 3D Touch module with “additional graphite sheet lamination,” which protects the phone from malfunctioning if it overheats.

“While we don’t expect general users to notice any difference, lamination of an additional graphite sheet is needed for better thermal control and, thus, steady operation; this is because FPCB is replaced with film, which is more sensitive to temperature change of the 3D touch sensor in OLED iPhone,” Kuo said.

The 3D Touch module could cost Apple $5 to produce per device — which adds to reports claiming the iPhone 8 will likely cost upwards of $1,000 due to more premium components and a massive redesign.

Energous CEO Steve Rizzone has hinted in the past that his company has inked a deal with “one of the largest consumer electronic companies in the world,” but an investor’s note from Copperfield Research suggested that Apple has no plans to use Energous’ WattUp true wireless charging solution.

Camera: A better dual camera and “revolutionary” 3D front camera

A new rumor suggests the iPhone 8 has a thing or two in common with Microsoft’s depth-sensing Kinect sensor. According to a report published by Apple analyst Ming-Chi Kuo, the upcoming iPhone’s front camera boasts a “revolutionary” infrared sensor that can sense the three-dimensional space in front of it.

It’s said to be aimed at taking selfies. According to Kuo, the front sensor merges depth information with 2D images for features like facial recognition in tandem with Touch ID. It could be used to replace a video game character’s head with that of the user, or to generate a 3D selfie that would integrate seamlessly with virtual reality applications.

Apple is likely to eventually open the 3D scanning capabilities to third-party developers.

The technology was developed by PrimeSense, the company behind Microsoft’s Kinect. The infrared transmitter reportedly uses vertical-cavity surface-emitting laser (VCSEL) technology from Lumentum, which works by sending invisible IR light signals outward from the phone and then detecting the signals that bounce back off of objects.


It’s said to be expensive. According to an analysis by JPMorgan market researcher Rod Hall, the infrared sensor could add as much as $10 to $15 per module, lending credence to rumors that the iPhone 8 will be as much as $100 pricier than its predecessors.

The iPhone’s front camera isn’t the handset’s only highlight. It’ll boast dual vertical cameras, likely with functionality similar to the cameras on the existing iPhone 7 Plus. And the new iPhones will reportedly be able to shoot in portrait orientation and take advantage of the dual lens system.

Japanese blog Mac Otakara, citing an unnamed Taiwanese supplier, suggests dual cameras will be a part of the upcoming iPhone lineup. They’re rumored to be arranged in a vertical configuration as opposed to the current horizontal layout on the 7 Plus.

And according to a report from The Korea Economic Daily, Apple is collaborating with LG to create a dual-camera module that would allow for 3D photographing. This would certainly make sense, as LG is already the company behind the iPhone 7 Plus camera. While Apple previously patented 3D-object and gesture recognition, it’s unclear whether the upcoming iPhone will bring these patents to life.

Special powers: Iris scanner and facial recognition

Pictured: a Fujitsu phone with an iris scanner.

Don’t look now, but Apple may be jumping on the iris sensor bandwagon. A new report from DigiTimes claims that Apple is planning on adding iris sensor technology in the iPhone 8.

Previously, analyst Timothy Arcuri of Cowen and Company said the iPhone 8 may feature facial and gesture recognition powered by a laser sensor and an infrared sensor near the front-facing camera. And an older DigiTimes report citing “unnamed” industry sources reported that Apple is prepping some form of pupil-scanning tech for a debut on the iPhone 8 as early as 2018.

The report was otherwise light on detail, save that electronics manufacturers like Qualcomm, Truly Opto-Electronics, O-film Tech, and Beijing IrisKing are expected to ramp up production of the necessary silicon. But it’s not an outrageous report. Apple, no doubt pressured by rivals like Samsung, has been pursuing alternative forms of biometric identification for some time. It’s just a question of whether the tech will arrive in 2017 or 2018.

Well-respected industry analyst Ming-Chi Kuo reported as recently as March that the iPhone 8 would introduce “recognition technology” like face and iris scanning alongside a curved, all-glass chassis and AMOLED screen. In the past several months, Apple has made acquisitions that lend credence to those rumors. It purchased facial recognition firm Emotient earlier this year, and in September acquired real-time motion-capture firm Faceshift.

Apple’s patent filings, meanwhile, suggest a long-running effort to develop a reliable method of identifying facial features. Four years ago, the company was granted one patent, “Electronic Device Operation Adjustment Based on Face Detection,” that details a front-facing camera system that can recognize a user’s face. A second, “Low Threshold Face Recognition,” describes a facial recognition solution capable of identifying the individual features of a face even in poor lighting conditions. Both, intriguingly, mention accompanying software that automatically tailors the device’s settings and screens to individual, recognized users.

Dovetailing with those developments is Apple’s long-rumored desire to eliminate the iPhone’s iconic Touch ID home button. Apple Insider reported that the company is developing — and has several patents describing — a display that can recognize multiple fingerprints. But it reportedly won’t come to market until next year, as Apple engineers work to overcome the many “technical challenges” with the technology.

In early 2017, anonymous sources talking to Fast Company said Apple has been working with a company called Lumentum and may include a 3D-sensing camera or sensor on the iPhone 8/iPhone X. It may be used for facial recognition, rather than iris scanning, or for augmented reality applications.

Size and materials: Glass, ceramic, steel?

If you’re one of the few who think that the iPhone 6S Plus is too small for your probably huge hands, we’ve got good news for you. According to several reports, the iPhone 8 may feature a massive 5.8-inch screen. Of course, if the bezel is removed, the larger screen size may not mean an increase in body size, and the phone itself could have the same physical dimensions as the current 5.5-inch iPhone 7 Plus.

The first report came from the Motley Fool, which picked up on a note obtained from DigiTimes. The iPhone 8 would complement the already quite large 5.5-inch iPhone 7S Plus, which could debut at the same time. As for design, noted KGI Securities analyst Ming-Chi Kuo suggests that Apple may experiment with either glass, ceramic, or plastic backs on the iPhone 8.

There have been a ton of rumors about the materials to be used to build the next iPhone, and while some suggest that the device will have an all-glass back, new rumors say that Apple may instead swap aluminum for stainless steel. The report, from DigiTimes, says that the iPhone 8 will come with a stainless steel frame. If true, it would mark a return to stainless steel for Apple, which last used the material in the iPhone 4S. This rumor also showed up in early 2017, alongside talk that the iPhone 8 may be renamed the iPhone X.

Durability: More water-resistant


The iPhone 7 was the first device in the iPhone family that could be submerged up to a meter underwater for 30 minutes. But rumor has it that Apple’s going to take it a step further with the next iPhones, ramping up the IP rating from IP67 to IP68 and putting it on par with the Samsung Galaxy S7 Edge.

The rumor comes from the Korea Herald, which cites “multiple sources.” Most people will hardly notice a difference, to be fair. The IP68 rating allows submersion to a depth of about 5 feet for 30 minutes, a minor improvement over the roughly 3.3-foot (1-meter) depth against which IP67-certified devices are protected.

Source: yahoo.com
