
It’s Now Easy to Shift Facebook Pics to Google

The EU’s General Data Protection Regulation is delivering results. Big time. Facebook on Monday announced a tool that allows anyone to copy their endless selfies and holiday pictures from the Zuckerberg empire to Google Photos.

A beta of the photo-transfer tool is rolling out today in Ireland with a wider release expected during the early months of 2020. The tool will move photos and their related metadata—including the folders they are in, file names, and any other information attached to the image. Transferring to Google comes first, with other services to follow at a later date.

But Facebook isn’t doing this out of the goodness of its own heart. Data portability, as it’s known, is a key part of GDPR. And that means being able to easily shift your Facebook photos to another service. They’re your photos, after all, so why not? “We’re increasingly hearing calls from policymakers and regulators, particularly those focused on competition, that large platforms should be doing more to enable innovation,” says Steve Satterfield, Facebook’s director of privacy and public policy, “including by allowing people to move their data to different providers.”

To transfer data, a Facebook account holder has to enter their password, then authenticate their Google account for the change to happen. But not everything will be shifted across. “You can move the photo as the user,” Satterfield says. “The tag which identifies the people in the photo we’re not making portable right now.”

Photos are just the beginning. The data-moving tool has been created as a result of the Data Transfer Project (DTP), which was set up in 2018 and is a collaboration between the world’s biggest tech companies: Apple, Facebook, Google, Microsoft, and Twitter are the group’s key members.

Developers from all the firms are using open source code and their APIs to create ways for data to pass seamlessly from one service to another. All the code is listed on GitHub.

Options are being developed that allow calendars, emails, tasks, playlists, and videos to be moved. This hints at the likelihood that one day you’ll be able to transfer all your Outlook contacts and calendars to Gmail, or Apple Mail, with just a few clicks. This would automatically pull in all of the data that’s used regularly and reduce the administrative burden of trying out a new service.

A data-portability white paper (PDF) published by Facebook in September explains that the company is looking at technical ways information contained in a user’s “social graph”—defined as the connections between users on social networks—can be shared securely. “Enabling portability of the social graph can be important for innovation and competition, but doing so also comes with important privacy questions,” the white paper says.

The Data Transfer Project isn’t limited to big companies. The open source nature of the project means that smaller companies can be involved in moving information around. All they have to do is devote some developer resources to creating the structures needed for data to be moved.

Offering ways to access your data isn’t new. For the best part of a decade, both Facebook and Google have had ways for people to download their data. For Facebook, this has taken the shape of a “download your information” page. It allows you to extract posts, photos, comments, friends, page information, places you’ve visited, and ad information. It can be grabbed in HTML format or JSON, and there’s the option to pull information from a specific date range.
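For a sense of what you can actually do with that export, here’s a short TypeScript sketch that reads one album out of a downloaded Facebook archive. The file path and field names are assumptions for illustration only; the real export’s layout varies by data category and export version.

```typescript
// Minimal sketch: inspecting one album from a Facebook JSON export.
// The path and field names below are assumptions for illustration.
import { readFileSync } from "fs";

interface ExportedPhoto {
  uri: string;                 // path to the image file inside the archive
  creation_timestamp?: number; // Unix time the photo was added
  title?: string;
}

const raw = readFileSync("photos_and_videos/album/0.json", "utf8");
const album: { name?: string; photos?: ExportedPhoto[] } = JSON.parse(raw);

console.log(`Album: ${album.name ?? "untitled"}`);
console.log(`Photos: ${album.photos?.length ?? 0}`);
for (const photo of album.photos ?? []) {
  const when = photo.creation_timestamp
    ? new Date(photo.creation_timestamp * 1000).toISOString()
    : "unknown date";
  console.log(`- ${photo.uri} (${when})`);
}
```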

Around the same time the search giant introduced Google+ (RIP) in 2011, Takeout debuted. It let people pull contact information, photos, and profile data. This has evolved into a separate tool that lets you download Google data from all of its separate services.

One big issue with the download tools is that the formats are pretty limited. Once you’ve downloaded your information, other than keeping a permanent record of it on a hard drive, it’s hard to know what to do with it.

So why has this data-transfer tool been created now? That’s all down to the GDPR—in particular, Article 20 of the law. The DTP was set up months after GDPR came into force.

Under the GDPR, people have the right to obtain and reuse their personal data. (California’s Consumer Privacy Act, which takes effect in 2020, has similar provisions.) The GDPR’s Article 20 says people have the right to have their “personal data transmitted directly from one controller to another, where technically feasible.” The last three words here are key.

William Morland, a Facebook engineer based in the company’s London office, explains that there are three major components the DTP has created that allow information to pass between services. First are data models; these are the shared categories of information (calendars, contacts, and so on). Second are adapters, which handle the translation between services.

“These convert data from whatever a specific service’s representation of what that data may look like,” Morland explains. “Facebook has one way of looking at photos, Google has another, Flickr has another.” And the third component is a task management system, which works behind the scenes to technically conduct the data transfers.
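Here’s a rough TypeScript sketch of how those three pieces fit together. Every name in it is hypothetical, and the real Data Transfer Project code on GitHub (written in Java) is far more involved, but the division of labor is the same: a service-neutral data model, per-service adapters, and a task manager that never needs to know either side’s internal format.

```typescript
// Illustrative sketch of the DTP's three-part design. All names hypothetical.

// 1. Data model: a service-neutral representation of a photo.
interface PhotoModel {
  fileName: string;
  albumName: string;
  bytes: Uint8Array;
  metadata: Record<string, string>;
}

// 2. Adapters: each service implements export and/or import
//    against the shared model, hiding its internal representation.
interface Exporter {
  exportPhotos(): Promise<PhotoModel[]>;
}
interface Importer {
  importPhotos(photos: PhotoModel[]): Promise<void>;
}

// 3. Task management: runs the transfer from source to destination
//    without knowing anything service-specific.
async function runTransfer(source: Exporter, destination: Importer): Promise<void> {
  const photos = await source.exportPhotos();
  await destination.importPhotos(photos);
  console.log(`Transferred ${photos.length} photos`);
}
```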

Morland says the DTP being open source is crucial to its success. “It’s specifically an open source project, rather than a standards body or something else like that, because the main aim is to make the technical barriers to providing data portability lower,” he says. “So it’s open to big companies and small companies, and everyone can participate.”

While Facebook has been the first to move with its data-sharing tool, it’s expected others will soon follow. At a joint presentation between Google and Facebook, staff at Google showed prototype versions of both Google and Twitter transfer tools.

Google is finally killing off Chrome apps, which nobody really used anyhow

Today, Google shared an updated timeline for when Chrome apps will stop working on all platforms. June 2022 is when they’ll be gone for good, though they’ll stop working earlier depending on which platform you’re on (via 9to5Google). Previously, we knew that Chrome apps would someday stop working on Windows, macOS, and Linux, but today, Google revealed that Chrome apps will eventually stop working on Chrome OS, too.

A Chrome app is a web-based app that you can install in Chrome that looks and functions kind of like an app you’d launch from your desktop. Take this one for the read-it-later app Pocket, for example — when you install it, it opens in a separate window that makes it seem as if Pocket is functioning as its own app.

You probably don’t need to worry about the death of Chrome apps messing up your browsing experience too much. At this point, most apps on the web are just regular web apps, which is why you’ll be able to keep using Pocket without issue in much the same way by navigating to https://getpocket.com/. In rarer cases, you might also be using Progressive Web Apps, which are basically websites that are cached to your device so they can have some offline functionality and be launched like an app. Some Chrome apps you have installed may already redirect to websites, like many of Google’s apps. And Chrome extensions are also different from Chrome apps, and those will keep working just fine.
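The “cached to your device” part of a Progressive Web App is handled by a service worker. Here’s a minimal TypeScript sketch of the idea, with placeholder file names; real PWAs layer cache versioning and update logic on top of this.

```typescript
// Minimal service worker sketch: pre-cache an app shell so the
// site can load offline. File names are placeholders; event types
// are loosened to `any` for brevity.
const CACHE_NAME = "app-shell-v1";
const APP_SHELL = ["/", "/index.html", "/styles.css", "/app.js"];

self.addEventListener("install", (event: any) => {
  // Store the core files when the service worker is installed.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_SHELL))
  );
});

self.addEventListener("fetch", (event: any) => {
  // Serve from the cache first, falling back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```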

There’s a pretty decent chance you’re not using any real Chrome apps at all, even if you use web apps all the time. When Google first announced all the way back in 2016 that it would end support for Chrome apps on Windows, macOS, and Linux, it said approximately one percent of users on those platforms were actively using packaged Chrome apps. That was nearly four years ago, and web developers have moved on.

If you do use Chrome apps, they will stop working much sooner on Windows, macOS, or Linux than they will on Chrome OS. Here’s Google’s timeline:

March 2020: Chrome Web Store will stop accepting new Chrome Apps. Developers will be able to update existing Chrome Apps through June 2022.

June 2020: End support for Chrome Apps on Windows, Mac, and Linux. Customers who have Chrome Enterprise and Chrome Education Upgrade will have access to a policy to extend support through December 2020.

December 2020: End support for Chrome Apps on Windows, Mac, and Linux for all customers.

June 2021: End support for NaCl, PNaCl, and PPAPI APIs.

June 2021: End support for Chrome Apps on Chrome OS. Customers who have Chrome Enterprise and Chrome Education Upgrade will have access to a policy to extend support through June 2022.

June 2022: End support for Chrome Apps on Chrome OS for all customers.

To break that down a bit:

  • At some point in June 2020, Chrome apps will stop working on Windows, macOS, and Linux, unless you have Chrome Enterprise or Chrome Education Upgrade, which lets you use Chrome apps for six more months.
  • If you’re on Chrome OS, Chrome apps will work until June 2021. Again, if you have Chrome Enterprise or Chrome Education Upgrade, Google says you can use Chrome apps for an additional year.

Originally, Chrome apps were supposed to stop working on Windows, macOS, and Linux in early 2018, but in December 2017, when Google removed the Chrome apps section from the Chrome Web Store, it pushed that early 2018 deadline to an unspecified date in the future. Now, more than three years after the original announcement, we finally know when Chrome apps won’t work on those platforms — and when they won’t work on any platform at all.

Microsoft’s new Edge Chromium browser launches on Windows and macOS

Microsoft is officially launching its new Edge Chromium browser today across both Windows and macOS. A stable version of the browser is now available for everyone to download, just over a year after the software maker revealed its plans to switch to Chromium. Microsoft is initially targeting Edge at enterprise users of Windows and macOS, but consumers will be able to manually download and install it, too.

In the coming months, Microsoft plans to automatically update Windows 10 users to this new version of Edge, which will fully replace the existing built-in browser. The company is taking a slow and careful approach, bringing the new Edge gradually to groups of Windows 10 users through Windows Update before it’s fully rolled out to everyone in the summertime. Microsoft is also releasing this version of Edge to OEMs today, so expect to see machines start arriving in the back-to-school period with the new version of Edge preinstalled. Microsoft will eventually bake this directly into a future Windows 10 update, and it will be part of Windows 10X for foldable and dual-screen devices. An ARM64 version of Edge isn’t available today, but it’s expected to come to the stable channel shortly.

While Edge Chromium is available today, it’s also launching without some features you might be familiar with if you’re used to using Chrome. Both history sync and extension sync are missing at launch, but things like favorites, settings, addresses / contact info, and passwords will all sync. Microsoft is planning to have these missing sync features available later this year. The good news is that the rest of Edge is very similar to Chrome and even includes support for Chrome extensions. Where Edge differs is in new features like Collections, which lets you collate images and content from the web, and tracking prevention.

You can choose from three different levels of tracking prevention in Edge, and the default setting will block trackers from sites you haven’t visited before. This ensures content and ads are less personalized and that harmful trackers are blocked. There’s also a strict setting that blocks the majority of trackers on the web, but that could mean some parts of sites fail to load or might not work correctly. If you’re familiar with Ghostery, then Microsoft’s built-in protection in Edge is similar.

So why even switch to Microsoft’s Edge Chromium browser? Microsoft is banking on enterprise users switching to get access to features like Internet Explorer mode, which lets businesses load legacy IE sites within Edge automatically. The added tracking features, Collections, and support for 4K Netflix with Dolby Atmos and Dolby Vision will also be important differentiators over Chrome.

There’s also the aspect of trust and which browser company you want to trust with your browsing history and privacy. Google is phasing out third-party cookies and trackers in Chrome but not for two years. That gives Edge, Safari, Firefox, and others an opportunity to capitalize on web users who are a little more privacy-conscious. This alone won’t be enough to get everyone to switch away from Chrome, but Microsoft has a better opportunity than most with its Windows dominance in the enterprise and the fact Edge is now a lot more compatible with the web.

Compatibility is key, and it’s one of the big reasons why Microsoft chose Chromium in the first place. Chromium offers instant web compatibility, and it also allows Microsoft to bring its web browser elsewhere. Unusually, Microsoft is releasing Edge for Windows 7 today, even though that OS just went out of support. The company won’t say how long it will support Edge on Windows 7, but Google has committed to supporting Chrome on Windows 7 until at least mid-2021. Edge is also arriving on Windows 8.1 and macOS, and it’s being updated on both Android and iOS.

Ultimately, the success of Edge Chromium could come down to whether it’s fully embraced by web developers and competitors like Google. During the beta period, we’ve seen both Google Meet and Google Stadia be inaccessible in Edge Chromium, despite working fine in Chrome, which is built on the same Chromium engine. Hopefully, this new version of Edge will prevent Chrome from turning into the new Internet Explorer 6 and restore some healthy browser competition to a market that is dominated by Chrome. It’s a good thing for consumers to have two tech giants competing to improve the web, as everyone gets a better web browser as a result.

If you’re interested in trying out the new Edge, you can download it for Windows or macOS over at Microsoft’s Edge site.


Mobile Device Management

Mobile device management is taking on an increasingly important role in today’s business climate. Here’s why.

Mobile device management is more important than ever in today’s business climate. Most of your users have corporate email set up on their mobile phones so they can access it quickly and easily. The problem is that the password protecting that email is typically also the user’s Active Directory network credential. If a mobile device falls into the wrong hands and gets hacked, someone can recover that password fairly easily, and with it comes access to your network and your intellectual property. That’s why many organizations are using mobile device management to prevent this from happening.

In the video above, I’ve outlined how this tool allows you to control mobile devices, whether they’re provided by the company or an employee chooses to leverage their own phone.

Feel free to watch the video in its entirety or use these timestamps to browse specific points at your convenience:

1:41—Solutions we’ve deployed in the past that address mobile device management

1:59—Other devices that can be controlled by commercial, enterprise-level platforms

2:48—The most successful products we’ve seen in the last few years

3:18—What these products cost and what we believe is the most efficient platform price-wise

4:33—Why companies with a BYOD (bring your own device) approach need an HR policy in place that everyone signs off on

6:05—Cheaper alternatives for Android or iOS phones

6:58—Wrapping things up

If you have any questions about this topic, don’t hesitate to reach out to me. I’d be happy to help you.

DELL XPS 13

The camera is in the right place on the new Dell XPS 13. That may seem like a small thing, but to people who have used this laptop in the past it’s the main thing. It’s the main thing because that seemingly minor complaint was the only real knock on an otherwise excellent laptop. Now that the webcam is above the screen instead of below it, I don’t have to talk about the XPS 13 as an “otherwise excellent” laptop.

The 2019 version of Dell’s nearly iconic XPS 13 has one more update compared to the previous version: an 8th Gen Intel Whiskey Lake processor. Though that’s a nice thing to have, it’s not as important as the fact that Dell has nailed most of the fundamentals that make a good laptop.

Even though the XPS 13 has a strong pedigree, it’s worth talking about again. It was one of the first mainstream laptops with a nearly edge-to-edge screen. It doesn’t go in for 360-degree hinge tricks — there’s the XPS 13 2-in-1 for that — it’s just always been a good, well-built laptop. It has become something of a default alternative to the MacBook Air for Windows users — something thin, light, stylish, and also reliable. Windows has many more such laptops available now (the Surface Laptop 2 is a good choice), but the XPS 13 is still at or near the front of that pack.

I really like the white version of this laptop. The top has a sort of silvery depth to its finish and the keyboard deck consists of woven carbon fiber. That extra touch makes it feel much more comfortable and I prefer it to the Surface Laptop’s fabric finish. Based on what people have said about the last XPS model, it should also hold up well over time. It’s a very nice-looking laptop.

The 2019 version of the XPS 13 starts at $899, but I think most people will want to step up to the $1,199 (as of this writing) version. That will get you a Core i5 processor, 8GB of RAM, 256GB of storage, and the 1080p screen. Unfortunately, the only way to get a touchscreen is to jump all the way up to the $1,799 (as of this writing) model, which has a 4K screen, more RAM, and a faster processor.

Most people probably won’t miss the touchscreen, especially since this is a traditional laptop form factor, but I find it super convenient to have. Because the touchpad is smaller than on a lot of other modern laptops, I often find myself quickly reaching up to dismiss a notification or tap an icon on the taskbar. I wish Dell offered a touchscreen option on the 1080p screen — which costs less, has better battery life, and is available on most of its competitors.

But if you’re willing to spend the extra money (and take the battery life hit) to get the 4K screen, you’ll find it to be excellent. The 13.3-inch screen goes nearly edge-to-edge on the top, left, and right, but there’s a large-ish bezel on the bottom. Along with a lot of other people, I prefer a 3:2 aspect ratio for productivity work on laptops, which the XPS 13 does not have; it makes do with a traditional 16:9 screen. Dell has also put an anti-reflective coating on the screen, which really helps when using it in a bright room.

7 Infamous Cloud Security Breaches You May Not Know About…

Pro wrestling giant WWE was recently the victim of a security breach that leaked personal data for 3 million customers. Hackers reportedly gained access to the information after finding a database left unprotected on an Amazon cloud server. According to Forbes, residential addresses, ethnicity, earnings, and other personal details were included. As a day-one subscriber to the WWE Network, I must admit that this is more than a little unsettling. But it’s also a perfect segue into the list below.

Without further ado, we run down seven of the most dastardly cloud security breaches in history.

1. Microsoft

In late 2010, Microsoft experienced a breach that was traced back to a configuration issue within its Business Productivity Online Suite. The problem allowed non-authorized users of the cloud service to access employee contact info in their offline address books. Microsoft claimed it tracked down which customers had access to the data and that it fixed the issue two hours after it occurred. While only a small number of users were affected, this incident is worth noting. It was not only the first significant cloud security breach, but a harbinger of things to come.

2. Dropbox

No one knew the severity of the breach that cloud-based file-sharing giant Dropbox announced back in 2012. In fact, it wasn’t until four years later that we learned what really happened. Hackers tapped into more than 68 million user accounts – email addresses and passwords included – representing nearly 5 gigabytes of data. But there’s more! Those stolen credentials reportedly made their way to a dark web marketplace, where the asking price was two bitcoins, equivalent to roughly $1,141 at the time. Dropbox responded by requesting a site-wide password reset from its user base, along with a generic spiel about its ongoing commitment to data security.

3. National Electoral Institute of Mexico

Elections and shenanigans usually go hand in hand. However, you can’t blame this next one on political chicanery. In April 2016, the National Electoral Institute of Mexico was the victim of a breach that saw over 93 million voter registration records compromised. Most of the records were lost due to a poorly configured database that made this confidential information publicly available to anyone. The icing on the cake came when we learned that the Institute was storing data on an insecure, illegally hosted Amazon cloud server outside of Mexico. Cue the silent head shake.

4. LinkedIn

Some guys have all the luck – or not. Business-focused social networking site LinkedIn felt the sting of cyber criminals when some 6 million user passwords were stolen and published on a Russian forum in 2012. Unfortunately, its streak of bad luck was just getting started. In May 2016, hackers stole and posted for sale on the dark web an estimated 167 million LinkedIn email addresses and passwords. In addition to asking users to change their passwords, LinkedIn implemented two-step verification, an optional feature that makes you enter a PIN code on your mobile device prior to logging in to the network.

5. Home Depot

DIY retailer Home Depot reminded us of the financial repercussions that may follow a major security breach. In 2014, an attack exploited the Home Depot point-of-sale terminals at the self-checkout lanes for months before someone finally detected it. The strategic onslaught affected 56 million credit card numbers, making it the biggest data breach of its kind at the time. Home Depot paid out well over a hundred million dollars in lawsuit settlements and compensation to the consumers and financial institutions affected by the incident.

6. Apple iCloud

Apple suffered what may be the most high-profile cloud security breach ever, given the victims involved. In 2014, Jennifer Lawrence and other celebrities had their private photos leaked online. Many of the victims initially thought that someone had hacked their individual phones. Instead, the iCloud accounts they used for personal storage had been compromised. In response, Apple urged users to employ stronger passwords and introduced a notification system that sends alerts when suspicious account activity is detected.

7. Yahoo

The web titans of today are using cloud infrastructures almost exclusively. That includes internet pioneer Yahoo, which found itself on the wrong side of the history books. For whatever reason, it took the better part of three years to tally all the damage, but Yahoo finally disclosed the final numbers on the breach that occurred in 2013. Apparently more than one billion user accounts were compromised in the attack, including first and last names, email addresses, dates of birth, and questions and answers to security questions. This incident is on record as the largest data breach in history, and it’s unrelated to a separate incident, disclosed months prior, that exposed 500 million accounts.

Businesses have come to realize the cloud has both advantages and disadvantages as far as security is concerned. According to a recent study, security is ranked as both the primary benefit and biggest challenge of cloud computing for IT pros. I guess the moral of the story is that while there is plenty to love about it, addressing the security concerns is the only way to take full advantage of all the cloud has to offer.


Cyberwar with Iran: How vulnerable is America?

The U.S. airstrike in Baghdad that killed Iranian General Qassem Soleimani on Friday will likely lead to retaliatory cyberattacks against America, security authorities say.

That means the power and electricity you use, the smart devices you carry and your bank accounts could be more vulnerable than ever to bad actors looking for revenge.

The U.S. military attack on Iran will “generate some significant response from the Iranians and that response could very well come in the form of a major cyberattack,” said Jamil N. Jaffer, vice president at IronNet Cybersecurity, a startup that helps nations defend against advanced digital threats.

A cyber conflict between the U.S. and Iran has been silently raging for years, with hacking attempts from the Middle East being made every single day. But now that the government has taken out one of the most powerful figures in Iran, an influx of hacking attempts is expected.

“Maybe they’ll double,” said Oded Vanunu, a leading vulnerability researcher at Check Point. “There will be many more cyberattacks in a short time. Most of which will target online services.” Products that connect to the internet are inherently hackable, and since most consumer-focused tools these days connect to a network, hackers in Iran can go after some of the largest and most widely used services in the country.

Private-sector corporations, which include banking, health care and energy services, would be the primary targets, according to Paul Martini, co-founder of the network security platform iBoss.

In the worst-case scenario, Iranian hackers “could instantaneously shut down an entire power grid,” Martini said. “It’s not just the lights, it’s also the internet which shuts down communication systems. Without shooting a single bullet or missile, you can shut down an entire county or nation.”

And even if Iran’s hacking capabilities aren’t sophisticated enough to fully undermine the U.S., high-ranking officials could pay advanced hackers from around the world in bitcoin, Martini said.

Big cities like Atlanta, Boston and New Orleans have been crippled by various forms of cybersecurity attacks in recent history.

In the past few years, the Trump administration has launched a series of cyberattacks against Iran. Iran, and hackers in general, have become more sophisticated at orchestrating attacks on interconnected computing systems, and cybersecurity has become an increasingly important issue.

INSIDE INTEL’S INCREDIBLY SMALL MODULAR DESKTOP PC

Today, you can upgrade a desktop PC’s gaming performance just by plugging in a new graphics card. What if you could do the same exact thing with everything else in that computer — slotting in a cartridge with a new CPU, memory, storage, even a new motherboard and set of ports? What if that new “everything else in your computer” card meant you could build an upgradable desktop gaming PC far smaller than you’d ever built or bought before?

The Intel NUC 9 Extreme, aka Ghost Canyon, is smack dab at the intersection of two ideas the chipmaker has been pursuing for years. Since 2012, the company’s been building barebones desktop computers called NUCs (short for “Next Unit of Computing”) that are typically so tiny, you could fit one anywhere. And in 2015, Intel started working on an idea called the Compute Card to easily upgrade the brains of a PC by swapping in a new CPU cartridge. But the Compute Card was a commercial failure, and the NUC was only ever as powerful as its onboard graphics could manage.

When Intel conducted a freak experiment that put AMD graphics inside an Intel processor, the company proved that a NUC could be powerful enough for gamers as well — and started wondering what it’d be like to build a fully upgradable version that could fit powerful desktop graphics cards, too. And while it was at it, Intel resurrected the idea of a modular “brain” for the rest of the computer.

The result is a desktop gaming PC that’s just 5 liters in volume, slightly smaller than a PlayStation 4 Pro and less than half the 12-liter volume of a Corsair One, one of our favorite pre-built compact gaming PC designs, yet with enough space to fit a desktop-grade Nvidia RTX 2070 GPU. And its “NUC Element” modular CPU cartridge could make it even easier to upgrade than computers with far more space to work inside.

It just takes two screws to open the lid, with its twin 80mm exhaust fans and spring-loaded copper connectors — there’s no fan cable to unplug — plus half a dozen cables to dislodge, including snaps for a pair of Wi-Fi antennas, the power cable, and cables for the front audio, USB ports, and full-size SD card reader. Then push a lever, and the computer’s entire brain lifts right out — a graphics card-shaped brain that houses practically every critical part of the system save the modified 500W FlexATX power supply.

Pop open the NUC Element module after two more screws, and you’ll find an L-shaped vapor chamber cooler for the 45W Core i5, i7, or i9-9980HK CPU (laptop grade, but one of the fastest Intel makes); an 80mm blower-style fan and heat sink to dissipate the heat; two standard DDR4 laptop memory module slots; and two slots for stick-shaped M.2 NVMe solid state drives — just load it up with whatever you like. Around back there’s an Intel Wi-Fi 6 module baked right in, plus a full array of ports including four USB 3.1 jacks, two Thunderbolt 3 ports, dual Gigabit Ethernet sockets, and an HDMI port. That’s a fairly comprehensive array of ports today, but it’s neat to think that the next time you upgrade the CPU cartridge, it’d come with the latest and greatest ports as well.

Right next to the CPU brain is a second PCIe x16 slot where your gaming graphics card goes, and you should know it won’t fit every card — you’ll be looking for a “mini” GPU like a GeForce RTX 2070 Mini that’s less than eight inches in length. And it’s a little cramped in such a tiny case, with the graphics card practically blocking the CPU fan’s intake, so I’m curious if a gaming system would be able to maintain full performance for hours on end. But I did see a butter-smooth 60+ frames per second in Shadow of the Tomb Raider at max settings on a 2560 x 1440 monitor with surprisingly little fan noise during an early test — pretty impressive for a PC barely wider than two graphics cards side by side.

The bigger question is whether you’ll actually be able to upgrade such a system for years to come. While practically every other part of the system is easily upgradable with off-the-shelf parts, the NUC Element modules themselves are proprietary and their CPUs are soldered down, meaning you’d need to rely on Intel to keep making them for the foreseeable future, and Intel isn’t promising to do that for sure. But the company says there are currently at least two years of upgrades on the road map and it’s hoping to see more — and perhaps more importantly, Intel has actual partners promising to take up the modular torch.

While Intel will be selling the NUC 9 Extreme this March as a barebones system (read: bring your own OS, memory, storage, and GPU) starting at around $1,050 with a Core i5 module, $1,250 for Core i7, or around $1,700 for the flagship Core i9, it won’t be the only company pushing the idea. Razer and Cooler Master have both confirmed they’ll be selling their own complete turnkey gaming rigs later this year based on the NUC Element module, but with standard SFX power supplies and room for larger graphics cards than Intel’s own box, as well as their very own distinct NUC Element enclosures. You’ll be able to buy Intel’s board separately and stick it into one.

Both Cooler Master and Razer are notable partners because they’re new to the desktop market; Intel’s move means a PC parts vendor can now sell entire computers.

Plus, Intel says other vendors are signing up to sell its own Ghost Canyon box as a complete system as well — and there’s even a second version of the NUC, dubbed Quartz Canyon, that’ll offer Intel Xeon processor modules to businesses that need them.

 

Google’s identity crisis is deepening

The biggest tech news yesterday was that the former Google human rights chief says he was “sidelined” over the proposed, censored Chinese search engine known as “Dragonfly.” Ross LaJeunesse, the executive, knew how to ensure his story would make an impact. He spoke with Colin Lecher and many other reporters, published a Medium post with frankly shocking details, and dominated tech news all day. Good. His story deserves attention.

An idea that I’ve been kicking around as we prepare for season two of the Land of the Giants podcast (about Google, naturally) is that until very recently, Google was a special kind of naive. It is a powerful, massive company that used to see itself as a utopian collective which just so happened to make oodles of cash.

If you get annoyed that Google has pivoted its way through launching and killing a dozen messaging apps, that open, freewheeling culture is why.

That kind of naiveté would be endearing if it wasn’t also so dangerous. An organization as powerful as Google that isn’t willing to admit its size, influence, and power is bound to stumble into problems. I think Dragonfly was one result of that disconnect.

Even if you could believe that Google could have made a moral case for Dragonfly (and I’m leaving that judgment for another time), the telling thing is that Google didn’t try — it was not openly discussed with employees like so many other Google products.

Assuming LaJeunesse’s account is accurate, there are any number of motivations you could ascribe to these decisions. But I want to home in on just one: I think that dealing forthrightly with the Dragonfly decision in a more traditional, open, “Googley” way would have forced the entire company to contend with the uncomfortable truth that it is a massive, geopolitical entity. It would have popped the bubble of Google’s self-image.

The partnership between TechsoMed and Lenovo is enabling physicians to target cancer with never-before-seen precision.

To treat cancerous tumors, physicians have long relied on surgery, chemotherapy and radiation — highly invasive treatments that push patients’ bodies to their limits. Surgically removing tumors requires long hospital stays, recovery times and extensive follow-ups. In the near future, all of this might change. 

Medical imaging company TechsoMed is revolutionizing the way physicians treat certain tumors by combining decades-old medical methods with innovative algorithm-powered technology. They’ve set their sights on bringing a little-known cancer treatment, called thermal ablation, into mainstream use.

Chemotherapy and traditional tumor resection take a massive toll on patients, both physically and financially. These standard treatments cost tens of thousands of dollars and have serious risks and complications, often requiring days or weeks for recovery. On the other hand, thermal ablation is a minimally invasive procedure that takes under five minutes and uses local anesthesia, resulting in fewer potential complications and significantly shorter recovery times. Plus, it costs up to ten times less than the alternative.

Despite being a preferred treatment modality for tumor removal, thermal ablation has one main drawback preventing it from becoming a first-line treatment: physicians lack the ability to visualize and control the damage to tissue in real time.

Thermal ablation works by applying intense heat to early-stage tumors smaller than three centimeters in diameter. During a standard ablation procedure, physicians use grainy ultrasound images to identify the “estimated treatment area,” but once they start the treatment, they have no indication of the actual damage caused to the tissue. This can result in over-treatment — the destruction of excessive tissue around the tumor — or under-treatment, which can lead to tumor recurrence. An added problem: it takes up to 24 hours post-procedure to learn how effective the treatment was, and by then there’s not much to be done.

“This is the common practice and the gold standard,” says Yossi Abu, the founder and CEO of TechsoMed. He plans to change that.

And Abu has big plans. Ultimately, he hopes to scrap traditional ultrasound systems altogether by developing the world’s first AI-powered ultrasound machine with data gleaned from clinical sites to “take a smart ultrasound, and transform it into a smarter ultrasound.”