Friday, May 29, 2020

Aaron Levie: ‘We have way too many manual processes in businesses’

Box CEO Aaron Levie has been working to change the software world for 15 years, but the pandemic has accelerated the move to cloud services far faster than anyone imagined. As he pointed out yesterday in an Extra Crunch Live interview, who would have thought three months ago that businesses like yoga studios and cooking classes would move online? But here we are.

Levie says we are just beginning to see the range of what’s possible because circumstances are forcing us to move to the cloud much faster than most businesses probably would have without the pandemic acting as a change agent.

“Overall, what we’re going to see is that anything that can become digital probably will be in a much more accelerated way than we’ve ever seen before,” Levie said.

Fellow TechCrunch reporter Jon Shieber and I spent an hour chatting with Levie about how digital transformation is accelerating in general, how Box is coping with that internally and externally, his advice for founders in an economic crisis and what life might be like when we return to our offices.

Our interview was broadcast on YouTube and we have included the embed below.


Just a note that Extra Crunch Live is our new virtual speaker series for Extra Crunch members. Folks can ask their own questions live during the chat, with past and future guests like Alexis Ohanian, Garry Tan, GGV’s Hans Tung and Jeff Richards, Eventbrite’s Julia Hartz and many, many more. You can check out the schedule here. If you’d like to submit a question during a live chat, please join Extra Crunch.


On digital transformation

The way that we think about digital transformation is that much of the world has a whole bunch of processes and ways of working — ways of communicating and ways of collaborating where if those business processes or that way we worked were able to be done in digital forms or in the cloud, you’d actually be more productive, more secure and you’d be able to serve your customers better. You’d be able to automate more business processes.

We think we’re [in] an environment that anything that can be digitized probably will be. Certainly as this pandemic has reinforced, we have way too many manual processes in businesses. We have way too slow ways of working together and collaborating. And we know that we’re going to move more and more of that to digital platforms.

In some cases, it’s simple, like moving to being able to do video conferences and being able to collaborate virtually. Some of it will become more advanced. How do I begin to automate things like client onboarding processes or doing research in a life sciences organization or delivering telemedicine digitally, but overall, what we’re going to see is that anything that can become digital probably will be in a much more accelerated way than we’ve ever seen before.

How the pandemic is driving change faster



from TechCrunch https://ift.tt/3ccSVFh
via IFTTT

Wednesday, May 27, 2020

Instagram’s AR filters are getting more dynamic

Augmented reality filters on Instagram are picking up some new tricks with the latest update to Facebook’s Spark AR platform.

Spark AR has been making fairly consistent updates to the feature sets developers can use to create AR filters since it exited closed beta on Instagram last year. Today, Facebook added new functionality to the platform on Instagram, allowing creators to build more complex filters to entice users. Creators can now build filters that respond visually to music or let users apply effects to media from their camera roll. In addition to the new features, Facebook has also created AR Sticker templates that let creators customize AR filters quickly.

The new AR Music feature allows developers to create filters that interact with music, be that tunes that are uploaded directly, selected from Instagram’s music selection tool or just audio that’s playing in the background. It’s a pretty logical step for Instagram, bringing equalizer-style visual effects into filters and pushing users to bring music and AR into their Stories simultaneously.
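
The mechanics behind an equalizer-style effect are simple enough to sketch. Below is a minimal, hypothetical illustration in Python with NumPy; it is not Spark AR’s actual scripting interface, just the general idea: split a frame of audio into frequency bands and map each band’s energy to a scale factor a renderer could apply to on-screen elements.

```python
import numpy as np

# Hypothetical illustration of an equalizer-style audio-reactive effect.
# This is NOT Spark AR's scripting API; it only shows the underlying idea:
# per audio frame, compute energy in a few frequency bands and turn each
# band's energy into a visual parameter (here, a scale factor).

SAMPLE_RATE = 44100   # samples per second
FRAME_SIZE = 1024     # audio samples analyzed per rendered frame
NUM_BANDS = 8         # number of "equalizer bars"

def band_energies(frame: np.ndarray, num_bands: int = NUM_BANDS) -> np.ndarray:
    """Return normalized per-band spectral energy for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, num_bands)
    energies = np.array([band.mean() for band in bands])
    peak = energies.max()
    return energies / peak if peak > 0 else energies

def visual_scales(energies: np.ndarray, min_scale=0.5, max_scale=2.0) -> np.ndarray:
    """Map band energies onto scale factors a renderer could apply per frame."""
    return min_scale + energies * (max_scale - min_scale)

if __name__ == "__main__":
    # Synthesize one frame of test audio (a 440 Hz tone plus noise) so the
    # sketch runs without an audio file.
    t = np.arange(FRAME_SIZE) / SAMPLE_RATE
    frame = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(FRAME_SIZE)
    print(visual_scales(band_energies(frame)))
```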

Bringing gallery selection tools to Instagram’s filters allows users to spin new AR effects on previously captured photos or videos. With Media Library, one can easily grab an old photo or video and toss a filter on it; with Gallery Picker, users can transform a visual filter with media from their gallery, allowing for a level of customization that could promote more consistent use of individual filters.

You can see what they look like in action on Instagram’s blog announcing the updates.

Facebook has talked a big game about augmented reality’s future across all of its platforms, but over the past several years the company has had a rough time making the camera a meaningful platform inside the Facebook app, leaving much of the development to Instagram, which has always had the advantage of a heavy reliance on both its in-app camera and visual filters. These new updates are iterative, but they partially address one of the big underlying usability issues with AR filter effects: they often aren’t dynamic enough to encourage reuse. Audio effects and greater customizability will let developers build filters that users can breathe new life into again and again through their own creativity.

These new updates to Spark AR Studio are available today.




from TechCrunch https://ift.tt/3d9H2Bw
via IFTTT

We throw away 80% of our content ideas, and you should too

We’ve talked a bit publicly about our ideation process, but to be honest, it’s constantly evolving. With every piece of content we create and promote, we gain a better understanding of what works and what doesn’t.

But part of that process has always been allowing for the creative freedom to come up with ideas and then — and most importantly — kill your darlings if they don’t meet the criteria for a good idea.

It’s not always easy; creativity is personal. But culling the list of ideas is necessary for a successful content plan.

So how do you know which ones to cut?

Ask yourself these questions.

Is the idea packed with emotion?

Make a list of all the emotions associated with your idea. If you can’t think of any, it means the idea may need some tweaking, or you need to explore it in more depth.

Even helpful how-to content is tied to emotion. Take, for example, “Give Your Kids the Gift of Automotive Repair Skills While You’re Home Together,” a genius piece of content by Car and Driver.

There’s the emotional component of it being in the context of COVID-19, yes, but it’s more than that. It’s about spending quality time with your children and teaching them crucial skills. Related emotions include love, pride, empowerment, accountability, parental responsibility and more.

And the content creators were smart enough to call out the emotional component in the post itself.

The post garnered nearly 5,000 engagements on Facebook, which to me indicates it hit the sweet spot of being helpful while also tapping into our emotions.

Fractl did a study back in 2013 that explored which emotions were the most prevalent in viral images, and, as it turns out, positive emotions were better represented than negative ones. Most prevalent of all? Surprise. People enjoy being astonished, delighted and unexpectedly joyful. Do any of your content ideas fit this bill?



from TechCrunch https://ift.tt/2M7Dd3C
via IFTTT

Friday, May 22, 2020

Box will let employees work from home until at least 2021

Another tech company is joining the list of those planning on going remote for the long haul: Box.

Box CEO Aaron Levie announced this morning that the company will “remain a digital-first organization” moving forward. While it sounds like they’re still working out exactly what that entails, one key aspect is that Box employees will be able to work “from anywhere” until at least January of 2021.

Box isn’t planning to ditch the office outright. In a blog post about the shift, Levie notes that plenty of people prefer working from an office, and that the company is aware of the “power of having office hubs where in-person communities, mentorship, networking, and creativity can happen.” Instead, they’ll be focusing on finding ways to make a hybrid setup — some remote, some in office — work. Meanwhile, they’re shifting all future all-hands meetings to virtual, adjusting their interview/onboarding process for remote hiring, and offering stipends to employees looking to build out their home office setups.

More and more companies are promising to make work-from-home/work-from-anywhere setups work, albeit with varying levels of commitment. Box joins companies like Google and Spotify in making it officially okay until at least 2021; Square and Twitter, meanwhile, both went ahead and made it permanent policy.

Levie will be joining us for an Extra Crunch Live interview next week on May 28th. Find the details here.



from TechCrunch https://ift.tt/3ebT9OC
via IFTTT

Thursday, May 21, 2020

Extra Crunch Live: Join Box CEO Aaron Levie May 28th at noon PT/3 pm ET/7 pm GMT

We’ve been on a roll with our Extra Crunch Live Series for Extra Crunch members, where we’re talking to some of the biggest names in Silicon Valley about business, investment and the startup community. Recent interviews include Kirsten Green from Forerunner Ventures, Charles Hudson from Precursor Ventures and investor Mark Cuban.

Next week, we’re pleased to welcome Box CEO Aaron Levie. He is a well-known advocate of digital transformation, often a years-long process that many companies have compressed into a few months because of the pandemic, as he has pointed out lately.

As the head of an enterprise SaaS company that started out to help users manage information online, he has a unique perspective on what’s happening in this period as companies move employees home and implement cloud services to ease the transition.

Levie started his company 15 years ago while still an undergrad in the proverbial dorm room and has matured from those early days into a public company executive, guiding his employees, customers and investors through the current crisis. This is not the first economic downturn he has faced as CEO at Box; when it was still an early-stage startup, he saw it through the 2008 financial crisis. Presumably, he’s taking the lessons he learned then and applying them now to a much more mature organization.

Please join TechCrunch writers Ron Miller and Jon Shieber as we chat with Levie about how he’s handling the COVID-19 crisis, moving employees offsite and what advice he has for companies that are accelerating their digital transformation. After he’s shared his wisdom for startups seeking survival strategies, we’ll discuss what life might look like for Box and other companies in a post-pandemic environment.

During the call, audience members are encouraged to ask questions. We’ll get to as many as we can, but you can only participate if you’re an Extra Crunch member, so please subscribe here.

Extra Crunch subscribers can find the Zoom link below (with YouTube to follow) as well as a calendar invite so you won’t miss this conversation.



from TechCrunch https://ift.tt/3g9R6w8
via IFTTT

Wednesday, May 6, 2020

Invisible AI uses computer vision to help (but hopefully not nag) assembly line workers

“Assembly” may sound like one of the simpler tasks in the manufacturing process, but as anyone who’s ever put together a piece of flat-pack furniture knows, it can be surprisingly (and frustratingly) complex. Invisible AI is a startup that aims to monitor people doing assembly tasks using computer vision, helping maintain safety and efficiency without succumbing to the obvious all-seeing-eye pitfalls. A $3.6 million seed round ought to help get them going.

The company makes self-contained camera-computer units that run highly optimized computer vision algorithms to track the movements of the people they see. By comparing those movements with a set of canonical ones (someone performing the task correctly), the system can watch for mistakes or identify other problems in the workflow — missing parts, injuries, and so on.

Obviously, right at the outset, this sounds like the kind of thing that results in a pitiless computer overseer, one that punishes workers every time they fall below an artificial and constantly rising standard (Amazon has probably already patented that). But co-founder and CEO Eric Danziger was eager to explain that this isn’t the idea at all.

“The most important parts of this product are for the operators themselves. This is skilled labor, and they have a lot of pride in their work,” he said. “They’re the ones in the trenches doing the work, and catching and correcting mistakes is a big part of it.”

“These assembly jobs are pretty athletic and fast paced. You have to remember the 15 steps you have to do, then move on to the next one, and that might be a totally different variation. The challenge is keeping all that in your head,” he continued. “The goal is to be a part of that loop in real time. When they’re about to move on to the next piece we can provide a double check and say, ‘Hey, we think you missed step 8.’ That can save a huge amount of pain. It might be as simple as plugging in a cable, but catching it there is huge — if it’s after the vehicle has been assembled, you’d have to tear it down again.”
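
To make the idea concrete, here is a minimal, hypothetical sketch of that kind of double check in Python. The step names and the detection rule are illustrative assumptions, not Invisible AI’s actual logic: it assumes an upstream vision model reports which steps it has seen, and it flags whatever the canonical sequence says is missing before the operator moves on.

```python
# Hypothetical sketch of the "missed step" double check described above.
# The step names and detection rule are illustrative, not Invisible AI's
# actual logic; we assume an upstream vision model emits the IDs of the
# steps it has observed for the current workpiece.

CANONICAL_STEPS = [f"step_{i}" for i in range(1, 16)]  # the 15 steps to remember

def missed_steps(observed, canonical=CANONICAL_STEPS):
    """Return the canonical steps not yet seen for this workpiece."""
    seen = set(observed)
    return [step for step in canonical if step not in seen]

# Example: the operator completed everything except step 8.
observed = [s for s in CANONICAL_STEPS if s != "step_8"]
for step in missed_steps(observed):
    print(f"Hey, we think you missed {step}.")  # -> flags step_8
```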

This kind of body tracking exists in various forms and for various reasons; Veo Robotics, for instance, uses depth sensors to track an operator and robot’s exact positions to dynamically prevent collisions.

But the challenge at the industrial scale is less “how do we track a person’s movements in the first place” than “how can we easily deploy and apply the results of tracking a person’s movements.” After all, it does no good if the system takes a month to install and days to reprogram. So Invisible AI focused on simplicity of installation and administration, with no code needed and entirely edge-based computer vision.

“The goal was to make it as easy to deploy as possible. You buy a camera from us, with compute and everything built in. You install it in your facility, you show it a few examples of the assembly process, then you annotate them. And that’s less complicated than it sounds,” Danziger explained. “Within something like an hour they can be up and running.”

Once the camera and machine learning system is set up, the problem it’s working on isn’t especially difficult. Tracking human movements is a fairly straightforward task for a smart camera these days, and comparing those movements to an example set is comparatively easy as well. There’s no “creativity” involved, like trying to guess what a person is doing or match it to some huge library of gestures, as you might find in an AI dedicated to captioning video or interpreting sign language (both still very much works in progress elsewhere in the research community).
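
As a rough illustration of that comparison step, here is a short Python sketch. The keypoint format, the naive frame alignment and the threshold are all assumptions made for the sake of the example, not Invisible AI’s method: it scores an observed pose trajectory against a canonical one by mean per-joint distance.

```python
import numpy as np

# Illustrative sketch of comparing tracked movements to a canonical example.
# The keypoint format, threshold and scoring rule are assumptions, not
# Invisible AI's method: poses are arrays of shape (frames, joints, 2) in
# normalized image coordinates, scored by mean per-joint distance.

def deviation_score(observed: np.ndarray, reference: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding joints across frames."""
    n = min(len(observed), len(reference))  # naive alignment: truncate to match
    return float(np.linalg.norm(observed[:n] - reference[:n], axis=-1).mean())

def matches_reference(observed, reference, threshold: float = 0.05) -> bool:
    """Treat a motion as on-spec if its deviation stays under the threshold."""
    return deviation_score(observed, reference) <= threshold

# Example: a recorded motion that drifts only slightly from the canonical one.
rng = np.random.default_rng(0)
reference = rng.random((30, 17, 2))            # 30 frames, 17 joints (COCO-style)
observed = reference + rng.normal(0, 0.01, reference.shape)
print(matches_reference(observed, reference))  # -> True for small drift
```

A real system would need more robust temporal alignment than simple truncation (dynamic time warping, for instance), since operators won’t perform each step at exactly the reference speed.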

As for privacy and the possibility of being unnerved by being on camera constantly, that’s something that has to be addressed by the companies using this technology. There’s a distinct possibility for good, but also for evil, like pretty much any new tech.

One of Invisible’s early partners is Toyota, which has been both an early adopter of and a skeptic about AI and automation. Its philosophy, arrived at after some experimentation, is one of empowering expert workers. A tool like this is an opportunity to provide systematic improvement based on what those workers already do.

It’s easy to imagine a version of this system where, like in Amazon’s warehouses, workers are pushed to meet nearly inhuman quotas through ruthless optimization. But Danziger said that a more likely outcome, based on anecdotes from companies he’s worked with already, is more about sourcing improvements from the workers themselves.

Employees who have built a product day in and day out, year after year, have deep and highly specific knowledge of how to do it right, and that knowledge can be difficult to pass on formally. “Hold the piece like this when you bolt it or your elbow will get in the way” is easy to say in training but not so easy to make standard practice. Invisible AI’s posture and position detection could help with that.

“We see less of a focus on cycle time for an individual, and more like, streamlining steps, avoiding repetitive stress, etc.,” Danziger said.

Importantly, this kind of capability can be offered with a code-free, compact device that requires no connection except to an intranet of some kind to send its results to. There’s no need to stream the video to the cloud for analysis; footage and metadata are both kept totally on-premise if desired.

Like any compelling new tech, the possibilities for abuse are there, but they are not — unlike an endeavor like Clearview AI — built for abuse.

“It’s a fine line. It definitely reflects the companies it’s deployed in,” Danziger said. “The companies we interact with really value their employees and want them to be respected and [as] engaged in the process as possible. This helps them with that.”

The $3.6 million seed round was led by 8VC, with participating investors including iRobot Corporation, K9 Ventures, Sierra Ventures and Slow Ventures.



from TechCrunch https://ift.tt/35NwK7n
via IFTTT