A new era of tech coverage at Vox


For an industry defined by change, the tech world feels especially disruptive these days. Artificial intelligence regularly makes headlines. Electric cars are taking over the roads. Microchips are being made in the USA again. For the techno-optimists out there, we're finally living in the sci-fi-inspired version of the future we were promised.

But our present is more complicated than that. The technology industry is facing a series of crossroads. Businesses that once seemed like unstoppable profit machines are starting to falter, and Washington's leaders are targeting the tech giants as too big, slowing their meteoric growth. A changing global economy is bringing high-tech manufacturing jobs back to the United States. Office workers are debating whether to return to the office or strike out on their own. Our roads aren't really ready for all these electric vehicles, and the AI technology sweeping Silicon Valley has real, unpredictable consequences as it becomes available to the public. Skeptics might say this is the sci-fi future we've built for ourselves.

Recode's mission is to help readers understand technological change and how it's impacting their lives. When Recode joined Vox in 2019, we set out to combine our technology and media expertise with Vox's command of explanatory journalism. And we are very proud of what we have achieved. But looking to the future, we believe we can serve you better as a more unified front.

So, starting today, we are retiring the Recode brand and continuing our mission under the Vox banner. Over the years, we've heard from readers who found Vox's sub-brands confusing, which is the exact opposite of what Vox is going for. And as technology's role in our lives continues to expand, our reporters look forward to working more closely with the rest of Vox's team, from politics buffs to science geeks.

Vox will continue to explain how technology is changing the world, and how it is changing us. The same reporters will keep covering many of the familiar topics from Recode, including the shifting climate of Silicon Valley, the power struggle between Big Tech and Washington, the future of work, and media everywhere. You'll also notice a renewed focus on covering innovation and transformation: the role of technology in fighting climate change, the reinvention of America's cities, the creep of artificial intelligence into the mainstream, and more.

Of course, our unique approach wouldn't exist without the influence of the tireless innovators Kara Swisher and Walt Mossberg, who launched Recode nearly a decade ago. After Walt retired, Kara stepped down from Recode in 2019 and focused on building podcasts at Vox Media, including On with Kara Swisher and Pivot. A big thank you to Walt and Kara for their pioneering work in technology journalism. Their vision will continue to guide our work in this new era.

Expect some exciting things in the coming months. Peter Kafka's popular podcast will soon be back with a new name and a new look. Vox Media will also continue to host the Code Conference, where Vox journalists will take the stage alongside some of the industry's most important leaders.

Technology is changing, and that change brings progress and perhaps a fair amount of uncertainty about what it all means. At Vox, we look forward to continuing to explain the news and help you understand how it's relevant to you.




How generative AI from OpenAI and Google is transforming search — and maybe everything else


The world’s first generative AI-powered search engine is here, and it’s in love with you. Or it thinks you’re kind of like Hitler. Or it’s gaslighting you into thinking it’s still 2022, a more innocent time when generative AI seemed more like a cool party trick than a powerful technology about to be unleashed on a world that might not be ready for it.

If you feel like you’ve been hearing a lot about generative AI, you’re not wrong. After a generative AI tool called ChatGPT went viral a few months ago, it seems everyone in Silicon Valley is trying to find a use for this new technology. Generative AI is essentially a more advanced and useful version of the conventional artificial intelligence that already helps power everything from autocomplete to Siri. The big difference is that generative AI can create new content, such as images, text, audio, video, and even code — usually from a prompt or command. It can write news articles, movie scripts, and poetry. It can make images out of some really specific parameters. And if you listen to some experts and developers, generative AI will eventually be able to make almost anything, including entire apps, from scratch. For now, the killer app for generative AI appears to be search.

One of the first major generative AI products for the consumer market is Microsoft’s new AI-infused Bing, which debuted in January to great fanfare. The new Bing uses generative AI in its web search function to return results that appear as longer, written answers culled from various internet sources instead of a list of links to relevant websites. There’s also a new accompanying chat feature that lets users have human-seeming conversations with an AI chatbot. Google, the undisputed king of search for decades now, is planning to release its own version of AI-powered search as well as a chatbot called Bard in the coming weeks, the company said just days after Microsoft announced the new Bing.

In other words, the AI wars have begun. And the battles may not just be over search engines. Generative AI is already starting to find its way into mainstream applications for everything from food shopping to social media.

Microsoft and Google are the biggest companies with public-facing generative AI products, but they aren’t the only ones working on it. Apple, Meta, and Amazon have their own AI initiatives, and there are plenty of startups and smaller companies developing generative AI or working it into their existing products. TikTok has a generative AI text-to-image system. Design platform Canva has one, too. An app called Lensa creates stylized selfies and portraits (sometimes with ample bosoms). And the open-source model Stable Diffusion can generate detailed and specific images in all kinds of styles from text prompts.

There’s a good chance we’re about to see a lot more generative AI showing up in a lot more applications, too. OpenAI, the AI developer that built the ChatGPT language model, recently announced the release of APIs, or application programming interfaces, for its ChatGPT and Whisper, a speech recognition model. Companies like Instacart and Shopify are already implementing this tech into their products, using generative AI to write shopping lists and offer recommendations. There’s no telling how many more apps might come up with novel ways to take advantage of what generative AI can do.
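To make that concrete, here is a minimal sketch of what such an integration might look like, using the openai Python package as it existed at the time (pre-1.0) and the gpt-3.5-turbo model behind the ChatGPT API. The shopping-list prompt and the helper function are our own illustration, not Instacart's or Shopify's actual code.

```python
# pip install openai  (this sketch assumes the pre-1.0 SDK)
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def suggest_shopping_list(meal: str) -> str:
    """Hypothetical helper: ask the ChatGPT API for a grocery list."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write concise grocery shopping lists."},
            {"role": "user", "content": f"List the ingredients I need to make {meal}."},
        ],
        temperature=0.2,  # keep the suggestions fairly deterministic
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(suggest_shopping_list("a simple vegetable lasagna"))
```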

Generative AI has the potential to be a revolutionary technology, and it’s certainly being hyped as such. Venture capitalists, who are always looking for the next big tech thing, believe that generative AI can replace or automate a lot of creative processes, freeing up humans to do more complex tasks and making people more productive overall. But it’s not just creative work that generative AI can produce. It can help developers make software. It could improve education. It may be able to discover new drugs or become your therapist. It just might make our lives easier and better.

Or it could make things a lot worse. There are reasons to be concerned about the damage generative AI can do if it’s released to a society that isn’t ready for it — or if we ask the AI program to do something it isn’t ready for. How ethical or responsible generative AI technologies are is largely in the hands of the companies developing them, as there are few if any regulations or laws in place governing AI. This powerful technology could put millions of people out of work if it’s able to automate entire industries. It could spawn a destructive new era of misinformation. There are also concerns of bias due to a lack of diversity in the material and data that generative AI is trained on, or the people who are overseeing that training.

Nevertheless, powerful generative AI tools are making their way to the masses. If 2022 was the “year of generative AI,” 2023 may be the year that generative AI is actually put to use, ready or not.

The slow, then sudden, rise of generative AI

Conventional artificial intelligence is already integrated into a ton of products we use all the time, like autocomplete, voice assistants like Amazon’s Alexa, and even the recommendations for music or movies we might enjoy on streaming services. But generative AI is more sophisticated. It uses deep learning, or algorithms that create artificial neural networks that are meant to mimic how human brains process information and learn. And then those models are fed enormous amounts of data to train on. For example, large language models power things like ChatGPT, which train on text collected from around the internet until they learn to generate and mimic those kinds of texts and conversations upon request. Image models have been fed tons of images and captions that describe them in order to learn how to create new content based on prompts.
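To make that last idea concrete, here is a tiny sketch using the open-source Hugging Face transformers library and GPT-2, a small, older language model chosen purely for illustration. It is far less capable than the models discussed here, but it works the same way: it continues text from a prompt based on patterns learned during training.

```python
# pip install transformers torch
from transformers import pipeline

# Load a small pretrained language model (GPT-2) that has already been
# trained on a large amount of internet text.
generator = pipeline("text-generation", model="gpt2")

# Give it a prompt; the model generates a continuation one token at a time,
# each token chosen based on the patterns it learned in training.
result = generator("The future of web search is", max_new_tokens=40)
print(result[0]["generated_text"])
```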

After years of development, most of it outside of public view, generative AI hit the mainstream in 2022 with the widespread releases of art and text models. Models like Stable Diffusion and DALL-E, which was released by OpenAI, were first to go viral, and they let anyone create new images from text prompts. Then came OpenAI's ChatGPT (GPT stands for "generative pre-trained transformer"), which got everyone's attention. This tool could create large, entirely new chunks of text from simple prompts. For the most part, ChatGPT worked really well, too — better than anything the world had seen before.

Though it’s one of many AI startups out there, OpenAI seems to have the most advanced or powerful products right now. Or at least, it’s the startup that has given the general public access to its services, thereby providing the most evidence of its progress in the generative AI field. This is a demonstration of its abilities as well as a source of even more data for OpenAI’s models to learn from.

OpenAI is also backed by some of the biggest names in Silicon Valley. It was founded in 2015 as a nonprofit research lab with $1 billion in support from the likes of Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and former Y Combinator president Sam Altman, who is now the company’s CEO. OpenAI has since changed its structure to become a for-profit company but has yet to make a profit or even much by way of revenue. That’s not a problem yet, as OpenAI has gotten a considerable amount of funding from Microsoft, which began investing in OpenAI in 2019. And OpenAI is seizing on the wave of excitement for ChatGPT to promote its API services, which are not free. Neither is the company’s upcoming ChatGPT Plus service.


Other big tech companies have for years been working on their own generative AI initiatives. There’s Apple’s Gaudi, Meta’s LLaMA and Make-a-Scene, Amazon’s collaboration with Hugging Face, and Google’s LaMDA (which is good enough that one Google engineer thought it was sentient). But thanks to its early investment in OpenAI, Microsoft had access to the AI project everyone knew about and was trying out.

In January 2023, Microsoft announced it was giving $10 billion to OpenAI, bringing its total investment in the company to $13 billion. From that partnership, Microsoft has gotten what it hopes will be a real challenge to Google’s longtime dominance in web search: a new Bing powered by generative AI.

AI search will give us the first glimpse of how generative AI can be used in our everyday lives … if it works

Tech companies and investors are willing to pour resources into generative AI because they hope that, eventually, it will be able to create or generate just about any kind of content humans ask for. Some of those aspirations may be a long way from becoming reality, but right now, it’s possible that generative AI will power the next evolution of the humble internet search.

After months of rumors that both Microsoft and Google were working on generative AI versions of their web search engines, Microsoft debuted its AI-integrated Bing in January in a splashy media event that showed off all the cool things it could do, thanks to OpenAI’s custom-built technology that powered it. Instead of entering a prompt for Bing to look up and return a list of relevant links, you could ask Bing a question and get a “complete answer” composed by Bing’s generative AI and culled from various sources on the web that you didn’t have to take the time to visit yourself. You could also use Bing’s chatbot to ask follow-up questions to better refine your search results.

Microsoft wants you to think the possibilities of these new tools are just about endless. And notably, Bing AI appeared to be ready for the general public when the company announced it last month. It’s now being rolled out to people on an ever-growing wait list and incorporated into other Microsoft products, like its Windows 11 operating system and Skype.

This poses a major threat to Google, which has had the search market sewn up for decades and makes most of its revenue from the ads placed alongside its search results. The new Bing could chip away at Google’s search dominance and its main moneymaker. And while Google has been working on its own generative AI models for years, its AI-powered search engine and corresponding chatbot, which it calls Bard, appear to be months away from debut. All of this suggests that, so far, Microsoft is winning the AI-powered search engine battle.

Or is it?

Once the new Bing made it to the masses, it quickly became apparent that the technology might not be ready for primetime after all. Right out of the gate, Bing made basic factual errors or made up stuff entirely, also known as “hallucinating.” What was perhaps more problematic, however, was that its chatbot was also saying some disturbing and weird things. One person asked Bing for movie showtimes, only to be told the movie hadn’t come out yet (it had) because the date was February 2022 (it wasn’t). The user insisted that it was, at that time, February 2023. Bing AI responded by telling the user they were being rude, had “bad intentions,” and had lost Bing’s “trust and respect.” A New York Times reporter pronounced Bing “not ready for human contact” after its chatbot — with a considerable amount of prodding from the reporter — began expressing its “desires,” one of which was the reporter himself. Bing also told an AP reporter that he was acting like Hitler.

In response to the bad press, Microsoft has tried to put some limits and guardrails on Bing, like limiting the number of interactions one person can have with its chatbot. But the question remains: How thoroughly could Microsoft have tested Bing’s chatbot before releasing it if it took only a matter of days for users to get it to give such wild responses?

Google, on the other hand, may have been watching this all unfold with a certain sense of glee. Its limited Bard rollout hasn’t exactly gone perfectly, but Bard hasn’t compared any of its users to one of the most reviled people in human history, either. At least, not that we know of. Not yet.

Again, Microsoft and Google aren’t the only companies working on generative AI, but their public releases have put more pressure on others to roll out their offerings as soon as possible, too. ChatGPT’s release and OpenAI’s partnership with Microsoft likely accelerated Google’s plans. Meanwhile, Meta is working to get its generative AI into as many of its own products as possible and just released a large language model of its own, called Large Language Model Meta AI, or LLaMA.

With the rollout of APIs that help developers add ChatGPT and Whisper to their applications, OpenAI seems eager to expand quickly. Some of these integrations seem pretty useful, too. Snapchat now has a chatbot called “My AI” for its paid subscribers, with plans to offer it to everyone soon. Initial reports say it’s just ChatGPT in Snapchat, but with even more restrictions about what it will talk about (no swearing, sex, or violence). Instacart will use ChatGPT in a feature called “Ask Instacart” that can answer customers’ questions about food. And Shopify’s Shop app has a ChatGPT-powered assistant to make personalized recommendations from the brands and stores that use the platform.

Generative AI is here to stay, but we don’t yet know if that’s for the best

Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company that’s in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure a Bing chatbot could live up to those principles, but they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.

Error-prone generative AI is being put out there by many other companies that have promised to be careful. Some text-to-image models are infamous for producing images with missing or extra limbs. There are chatbots that confidently declare the winner of a Super Bowl that has yet to be played. These mistakes are funny as isolated incidents, but we’ve already seen one publication rely on generative AI to write authoritative articles with significant factual errors.


These screw-ups have been happening for years. Microsoft had one high-profile AI chatbot flop with its 2016 release of Tay, which Twitter users almost immediately trained to say some really offensive things. Microsoft quickly took it offline. Meta’s Blenderbot is based on a large language model and was released in August 2022. It didn’t go well. The bot seemed to hate Facebook, got racist and antisemitic, and wasn’t very accurate. It’s still available to try out, but after seeing what ChatGPT can do, it feels like a clunky, slow, and weird step backward.

There are even more serious concerns. Generative AI threatens to put a lot of people out of work if it’s good enough to replace them. It could have a profound impact on education. There are also questions of legalities over the material AI developers are using to train their models, which is typically scraped from millions of sources that the developers don’t have the rights to. And there are questions of bias both in the material that AI models are training on and the people who are training them.

On the other side, some conservative bomb-throwers have accused generative AI developers of moderating their platforms’ outputs too much and making them “woke” and biased against the right wing. To that end, Musk, the self-proclaimed free-speech absolutist and OpenAI critic as well as an early investor, is reportedly considering developing a ChatGPT rival that won’t have content restrictions or be trained on supposedly “woke” material.

And then there’s the fear not of generative AI but of the technology it could lead to: artificial general intelligence. AGI can learn and think and solve problems like a human, if not better. This has given rise to science fiction-based fears that AGI will lead to an army of super-robots that quickly realize they have no need for humans and either turn us into slaves or wipe us out entirely.

There are plenty of reasons to be optimistic about generative AI’s future, too. It’s a powerful technology with a ton of potential, and we’ve still seen relatively little of what it can do and who it can help. Silicon Valley clearly sees this potential, and venture capitalists like Andreessen Horowitz and Sequoia seem to be all-in. OpenAI is valued at nearly $30 billion, despite not having yet proved itself as a revenue generator.

Generative AI has the power to upend a lot of things, but that doesn’t necessarily mean it’ll make them worse. Its ability to automate tasks may give humans more time to focus on the stuff that can’t be done by increasingly sophisticated machines, as has been true for technological advances before it. And in the near future — once the bugs are worked out — it could make searching the web better. In the years and decades to come, it might even make everything else better, too.

Oh, and in case you were wondering: No, generative AI did not write this explainer.






Is the US actually banning TikTok over China ties?


Since its introduction to the US in 2018, TikTok has been fighting for its right to exist. First, the company struggled to convince the public that it wasn’t just for pre-teens making cringey memes; then it had to make the case that it wasn’t responsible for the platform’s rampant misinformation (or cultural appropriation … or pro-anorexia content … or potentially deadly trends … or general creepiness, etc). But mostly, and especially over the past three years, TikTok has been fighting against increased scrutiny from US lawmakers about its ties to the Chinese government via its China-based parent company, ByteDance.

On March 1, the US House Foreign Affairs Committee voted to give President Biden the power to ban TikTok. But banning TikTok isn’t as simple as flipping a switch and deleting the app from every American’s phone. It’s a complex knot of technical and political decisions that could have consequences for US-China relations, for the cottage industry of influencers that has blossomed over the past five years, and for culture at large. The whole thing could also be overblown.

The thing is, nobody really knows if a TikTok ban, however broad or all-encompassing, will even happen at all or how it would work if it did. It's been three years since the US government first began seriously considering the possibility, and the future remains as murky as ever. Here's what we know so far.

1. Do politicians even use TikTok? Do they know how it works or what they’re trying to ban?

Among the challenges lawmakers face in trying to ban TikTok outright is a public relations problem. Americans already think their government leaders are too old, ill-equipped to deal with modern tech, and generally out of touch. A kind of tradition has even emerged whenever Congress tries to do oversight of Big Tech: A committee will convene a hearing, tech CEOs will show up, and then lawmakers make fools of themselves by asking questions that reveal how little they know about the platforms they’re trying to rein in.

Congress has never heard from TikTok’s CEO, Shou Zi Chew, in a public committee hearing before, but representatives will get their chance this month. Unlike with many of the American social media companies they’ve scrutinized before, few members of Congress have extensive experience with TikTok. Few use it for campaign purposes, and even fewer use it for official purposes. Though at least a few dozen members have some kind of account, most don’t have big followings. There are some notable exceptions: Sen. Bernie Sanders, and Reps. Katie Porter of California, Jeff Jackson of North Carolina, and Ilhan Omar of Minnesota use it frequently for official and campaign reasons and have big followings, while Sens. Jon Ossoff of Georgia and Ed Markey of Massachusetts are inactive on it after using it extensively during their campaigns in 2020 and 2021. —Christian Paz

2. Who is behind these efforts? Who is trying to ban TikTok or trying to impose restrictions?

While TikTok doesn’t have vocal defenders in Congress, it does have a long list of vocal antagonists from across the country, who span party and ideological lines in both the Senate and the House.

The leading Republicans hoping to ban TikTok are Sens. Marco Rubio of Florida and Josh Hawley of Missouri, and Rep. Mike Gallagher of Wisconsin, who is the new chairman of the House select committee on competition with China. All three have introduced some kind of legislation attempting to ban the app or force its parent company ByteDance to sell the platform to an American company. Many more Republicans in both chambers who are critics of China, like Sens. Tom Cotton of Arkansas and Ted Cruz of Texas, endorse some kind of tougher restriction on the app.

Independent Sen. Angus King of Maine has also joined Rubio in introducing legislation that would ban the app.

Democrats are less united in their opposition to the platform. Sens. Mark Warner of Virginia and Michael Bennet of Colorado are two vocal skeptics. Bennet has called for Apple and Google to remove the app from their app stores, while Warner wants stronger guardrails for tech companies that would ban a “category of applications” instead of a single app (that’s the same position Sen. Elizabeth Warren of Massachusetts is taking). In the House, Gallagher’s Democratic counterpart, Rep. Raja Krishnamoorthi of Illinois, has also called for a ban or tougher restrictions, though he doesn’t think a ban will happen this year. —Christian Paz

3. What is the relationship between TikTok and the Chinese government? Do they have users’ info?

If you ask TikTok, the company will tell you there is no relationship and that it has not and would not give US user data to the Chinese government.

But TikTok is owned by ByteDance, a company based in Beijing that is subject to Chinese laws. Those laws compel businesses to assist the government whenever it asks, which many believe would force ByteDance to give the Chinese government any user data it has access to whenever it asks for it. Or it could be ordered to push certain kinds of content, like propaganda or disinformation, on American users.

We don’t know if this has actually happened at this point. We only know that it could, assuming ByteDance even has access to TikTok’s US user data and algorithms. TikTok has been working hard to convince everyone that it has protections in place that wall off US user data from ByteDance and, by extension, the Chinese government. —Sara Morrison

4. What happens to people whose income comes from TikTok? If there is a ban, is it even possible for creators to find similar success on Reels or Shorts or other platforms?

Most people who’ve counted on TikTok as their main source of revenue have long been prepared for a possible ban. Fifteen years into the influencer industry, it’s old hat that, eventually, social media platforms will betray their most loyal users in one way or another. Plus, after President Trump attempted a ban in the summer of 2020, many established TikTokers diversified their online presence by focusing more of their efforts on other platforms like Instagram Reels or YouTube Shorts.

That doesn’t mean that losing TikTok won’t hurt influencers. No other social platform is quite as good as TikTok at turning a completely unknown person or brand into a global superstar, thanks to its emphasis on discovery versus keeping people up to date on the users they already follow. Which means that without TikTok, it’ll be far more difficult for aspiring influencers to see the kind of overnight success enjoyed by OG TikTokers.

The good news is that there’s likely more money to be made on other platforms, specifically Instagram Reels. Creators can sometimes make tens of thousands of dollars per month from Instagram’s creator fund, which rewards users with money based on the number of views their videos get. Instagram is also viewed as a safer, more predictable platform for influencers in their dealings with brands, which can use an influencer’s previous metrics to set a fair rate for the work. (It’s a different story on TikTok, where even a post by someone with millions of followers could get buried by the algorithm, and it’s less evident that past success will continue in the future.) —Rebecca Jennings

5. What does the TikTok ban look like to me, the user? Am I going to get arrested for using TikTok?

Almost certainly not. The most likely way a ban would happen would be through an executive order that cites national security grounds to forbid business transactions with TikTok. Those transactions would likely be defined as services that facilitate the app’s operations and distribution. Which means you might have a much harder time finding and using TikTok, but you won’t go to jail if you do. —Sara Morrison

6. How is it enforced? What does the TikTok ban look like to the App Store and other businesses?

The most likely path — and the one that lawmakers have zeroed in on — is using the International Emergency Economic Powers Act, which gives the president broader powers than he otherwise has. President Trump used this when he tried to ban TikTok in 2020, and lawmakers have since introduced TikTok-banning bills that essentially call for the current president to try again, but this time with additional measures in place that might avoid the court battles that stalled Trump’s attempt.

Trump’s ban attempt does give us some guidance on what such a ban would look like, however. The Trump administration spelled out some examples of banned transactions, including app stores not being allowed to carry it and internet hosting services not being allowed to host it. If you have an iPhone, it’s exceedingly difficult to get a native app on your phone that isn’t allowed in Apple’s App Store — or to get updates for that app if you downloaded it before this hypothetical ban came down. It’s also conceivable that companies would be prohibited from advertising on the app and content creators wouldn’t be able to use TikTok’s monetization tools.

There are considerable civil and criminal penalties for violating the IEEPA. Don’t expect Apple or Google or Mr. Beast to do so. —Sara Morrison

7. On what grounds would TikTok be reinstated? Are there any changes big enough that would make it “safe” in the eyes of the US government?

TikTok is already trying to make those changes to convince a multi-agency government panel that it can operate in the US without being a national security risk. If that panel, called the Committee on Foreign Investments in the United States (CFIUS), can’t reach an agreement with TikTok, then it’s doubtful there’s anything more TikTok can do.

Well, there is one thing: If ByteDance sold TikTok off to an American company — something that was considered back in the Trump administration — most of its issues would go away. But even if ByteDance wanted to sell TikTok, it may not be allowed to. The Chinese government would have to approve such a sale, and it’s made it pretty clear that it won’t. —Sara Morrison

8. Is there any kind of precedent for banning apps?

China and other countries do ban US apps. The TikTok app doesn’t even exist in China. It has a domestic version, called Douyin, instead. TikTok also isn’t in India, which banned it in 2020. So there is precedent for other countries banning apps, including TikTok. But these are different countries with different laws. That kind of censorship doesn’t really fly here. President Trump’s attempt to ban TikTok in 2020 wasn’t going well in the courts, but we never got an ultimate decision because Trump lost the election and the Biden administration rescinded the order.

The closest thing we have to the TikTok debacle is probably Grindr. A Chinese company bought the gay dating app in 2018, only to be forced by CFIUS to sell it off the next year. It did, thus avoiding a ban. So we don’t know how a TikTok ban would play out if it came down to it. —Sara Morrison

9. How overblown is this?

At the moment, there’s no indication that the Chinese government has asked for private data of American citizens from ByteDance, or that the parent company has provided that information to Chinese government officials. But American user data has reportedly been accessed by China-based employees of ByteDance, according to a BuzzFeed News investigation last year. The company has also set up protocols under which employees abroad could remotely access American data. The company stresses that this is no different from how other “global companies” operate and that it is moving to funnel all US data through American servers. But the possibility of the Chinese government having access to this data at some point is fueling the national security concerns in the US.

This doesn't speak to the other reasons driving government scrutiny of the app: data privacy and mental health. Some elected officials, like Markey, the senator from Massachusetts, would like to see stricter rules and regulations limiting the kind of information that younger Americans have to give up when using TikTok and other platforms, while others would like a closer look at limits on when children can use the app as part of broader regulations on Big Tech. Democratic members of Congress have also cited concerns about how much time children are spending online, the potentially detrimental effects of social media, including TikTok, on children, and the greater mental health challenges younger Americans are facing today. TikTok is already making efforts to fend off this criticism: At the start of March, it announced new screen time limits for users under the age of 18. —Christian Paz






TikTok’s new screen time limit for kids has limits


Amid mounting concerns (and lawsuits) about the impact of social media on children's mental health, TikTok on Wednesday announced a default 60-minute daily screen time limit for users under 18, along with several new parental controls. However, these "limits" are really more like suggestions: there are ways for young users to keep using the app after their time is up.

The news comes amid a larger debate about the harms of social media to young people, and a tremendous amount of scrutiny of TikTok itself over its ties to China. The move lets TikTok look like it's getting ahead of the issue, but it probably won't be enough to allay the national security concerns many lawmakers have (or say they have) about TikTok, nor their concerns that social media is harming children.

Over the next few weeks, minor users will get a default 60-minute daily time limit, at which point the app will prompt them with the option to continue.

For users under the age of 13, a parent or guardian must enter a passcode every 30 minutes to give the child additional screen time. No parent code, no TikTok.

Users between the ages of 13 and 17, however, can enter their own passcode to keep using the app. They can also opt out of the default 60-minute limit, but if they spend more than 100 minutes a day on TikTok, they'll have to set their own limit, which they can likewise bypass with their passcode. They'll also see a weekly summary of their time spent on the app. TikTok believes these measures will make teens more aware of the time they spend on the app.
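Taken together, the announced rules amount to a small decision tree. Here is a purely illustrative Python sketch of that logic as TikTok describes it; the function and parameter names are hypothetical, not TikTok's actual implementation.

```python
def may_keep_watching(age: int, minutes_today: int,
                      parent_code_entered: bool,
                      own_code_entered: bool) -> bool:
    """Hypothetical sketch of TikTok's stated screen time rules for minors."""
    DEFAULT_LIMIT = 60  # minutes per day for users under 18

    if age >= 18:
        return True  # no default limit for adults
    if minutes_today < DEFAULT_LIMIT:
        return True  # still under the daily limit
    if age < 13:
        # Under-13s need a parent or guardian to re-enter a passcode
        # every 30 minutes past the limit: no parent code, no TikTok.
        return parent_code_entered
    # Teens (13-17) can enter their own passcode to continue; even if they
    # opt out of the default and pass 100 minutes in a day, the custom
    # limit they must then set can be bypassed the same way.
    return own_code_entered
```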

Finally, parents who have linked their TikTok account to their child's account get some additional controls and information: they can see how much time their child spends in the app and how often it is opened, set times to mute notifications, and set custom time limits for different days.

New controls for your (or your child's) TikTok experience. TikTok

The Tech Oversight Project, a big tech accountability group, was unimpressed with TikTok’s announcement, calling it “a phony ploy to make parents feel safe without actually making the product safe.”

“Companies like YouTube, Instagram, and TikTok have focused their business models on getting kids hooked on their platforms and spending more time looking at screens to sell ads,” Kyle Morse, deputy executive director of the Tech Oversight Project, said in a statement. “By design, technology platforms don’t care about the well-being of children and teens.”

TikTok has long been criticized for its addictive nature; some users spend hours mindlessly scrolling through the app. The company has implemented various screen time management tools over the years, which let users set their own time limits and schedule break and bedtime reminders, and the new controls allow further customization of those settings. TikTok says these controls will soon be available to adult users as well, though adults won't get time limit notifications by default the way minors do.

TikTok isn't the only social media app that has introduced options for minor users. Meta, for example, allows parents to limit the amount of time their child spends on Instagram. There are also various parental options on the devices children use these apps on. However, unlike TikTok's new 60-minute limit, these options aren't turned on by default.

This all comes as lawmakers appear to be getting serious about laws regulating if and how children use social media. In both of his State of the Union addresses, President Biden said social media platforms profited from "experiments" on children and must be held accountable. Senator Josh Hawley (R-MO) wants to ban children under the age of 16 from using social media at all. On the less extreme side, Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) plan to reintroduce their bipartisan bill, the Kids Online Safety Act, which would force social media platforms to give parents controls over how their children use them.

TikTok, especially, faces the possibility of being banned in the United States. Lawmakers concerned about its China-based parent company have become increasingly vocal about the app, believing the Chinese government could use it to access US user data or push propaganda or misinformation to US users, and some have introduced legislation to ban it. TikTok is already banned on federal devices, as well as on government-owned devices in most states. The company is currently negotiating a deal with the government to ease national security concerns and allow it to continue operating in the country, but the process has dragged on for years.

In the meantime, TikTok can say it has taken the lead on limiting kids' screen time by default. That might score some points with lawmakers, but probably not with the ones who want to ban the app outright.

This story was first published in the Recode newsletter. Sign up here so you don't miss the next one!




Section 230: The Supreme Court considers two cases that may challenge it


You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from liability for many of the things third parties say or do on them.

Decades later, it’s never been more controversial. People from both political parties and all three branches of government have threatened to reform or even repeal it. The debate centers around whether we should reconsider a law from the internet’s infancy that was meant to help struggling websites and internet-based companies grow. After all, these internet-based businesses are now some of the biggest and most powerful in the world, and users’ ability to speak freely on them bears much bigger consequences.

While President Biden pushes Congress to pass laws to reform Section 230, its fate may lie in the hands of the judicial branch, as the Supreme Court is considering two cases — one involving YouTube and Google, another targeting Twitter — that could significantly change the law and, therefore, the internet it helped create.

Section 230 says that internet platforms hosting third-party content are not liable for what those third parties post (with a few exceptions). That third-party content could include things like a news outlet’s reader comments, tweets on Twitter, posts on Facebook, photos on Instagram, or reviews on Yelp. If a Yelp reviewer were to post something defamatory about a business, for example, the business could sue the reviewer for libel, but thanks to Section 230, it couldn’t sue Yelp.

Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark. A repeal of Section 230 wouldn’t just affect the big platforms that seem to get all the negative attention, either. It could affect websites of all sizes and online discourse.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around, and anyone, including impressionable children, could easily find and see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act, which would extend laws governing obscene and indecent use of telephone services to the internet. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1994, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. The court ruled in Stratton Oakmont’s favor, saying that because Prodigy moderated posts on its forums, it exercised editorial control that made it just as liable for the speech on its platform as the people who actually made that speech. Meanwhile, Prodigy’s rival online service, Compuserve, was found liable for a user’s speech in an earlier case because Compuserve didn’t moderate content.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks, and mindful of the Prodigy decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment to CDA that said “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden told Vox’s Emily Stewart in 2019. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

As the beginning of Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are considered by some to be the 26 words that created the internet, but the law says more than that.

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that their Constitutional right to free speech has been violated — doesn’t apply. Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions about transmitting porn to minors were immediately challenged by civil liberty groups and struck down by the Supreme Court, which said they were too restrictive of free speech. Section 230 stayed, and so a law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. On the other hand, a website that didn’t moderate anything at all would quickly become a spam-filled cesspool that few people would want to swim in.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet to flourish. Others say it allows platforms that have become hugely influential and important to suppress and censor speech based on their own whims or supposed political biases. Depending on who you talk to, internet platforms are either using the sword too much or not enough. Either way, they’re hiding behind the shield to protect themselves from lawsuits while they do it. Though it has been a law for nearly three decades, Section 230’s existence may have never been as precarious as it is now.

The Supreme Court might determine Section 230’s fate

Justice Clarence Thomas has made no secret of his desire for the court to consider Section 230, saying in multiple opinions that he believes lower courts have interpreted it to give too-broad protections to what have become very powerful companies. He got his wish in February 2023, when the court heard two similar cases that involve it. In both, plaintiffs argued that their family members were killed by terrorists who posted content on those platforms. In the first, Gonzalez v. Google, the family of a woman killed in a 2015 terrorist attack in France said YouTube promoted ISIS videos and sold advertising on them, thereby materially supporting ISIS. In Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey said the platform didn't go far enough to identify and remove ISIS content, which is in violation of the Justice Against Sponsors of Terrorism Act — and could then mean that Section 230 doesn't apply to such content.

These cases give the Supreme Court the chance to reshape, redefine, or even repeal the foundational law of the internet, which could fundamentally change it. And while the Supreme Court chose to take these cases on, it's not certain that the justices will rule in favor of the plaintiffs. In oral arguments in late February, several justices in Gonzalez v. Google didn't seem too convinced that they could or should make such a sweeping change, especially considering the monumental possible consequences and impact of such a decision. In Twitter v. Taamneh, the justices focused more on if and how the Sponsors of Terrorism law applied to tweets than they did on Section 230. The rulings are expected in June.

In the meantime, don’t expect the original authors of Section 230 to go away quietly. Wyden and Cox submitted an amicus brief to the Supreme Court for the Gonzalez case, where they said: “The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users today, Section 230’s protection is even more important now than when the statute was enacted.”

Congress and presidents are getting sick of Section 230, too

In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, which changed parts of Section 230. The updates mean that platforms can now be deemed responsible for prostitution ads posted by third parties. These changes were ostensibly meant to make it easier for authorities to go after websites that were used for sex trafficking, but it did so by carving out an exception to Section 230. That could open the door to even more exceptions in the future.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for various infringements of hateful content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms — even Twitter under Elon Musk won’t let him back — is perhaps the best example of this.

In a 2018 op-ed, Sen. Ted Cruz (R-TX) claimed that Section 230 required the internet platforms it was designed to protect to be “neutral public forums.” The law doesn’t actually say that, but many Republican lawmakers have introduced legislation that would fulfill that promise. On the other side, Democrats have introduced bills that would hold social media platforms accountable if they didn’t do more to prevent harmful content or if their algorithms promoted it.

There are some bipartisan efforts to change Section 230, too. The EARN IT Act from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), for example, would remove Section 230 immunity from platforms that didn’t follow a set of best practices to detect and remove child sexual abuse material. The partisan bills haven’t really gotten anywhere in Congress. But EARN IT, which was introduced in the last two sessions, was passed out of committee in the Senate and ready for a Senate floor vote. That vote never came, but Blumenthal and Graham have already signaled that they plan to reintroduce EARN IT this session for a third try.

In the executive branch, former President Trump became a very vocal critic of Section 230 in 2020 after Twitter and Facebook started deleting and tagging his posts that contained inaccuracies about Covid-19 and mail-in voting. He issued an executive order that said Section 230 protections should only apply to platforms that have “good faith” moderation, and then called on the FCC to make rules about what constituted good faith. This didn’t happen, and President Biden revoked the executive order months after taking office.

But Biden isn’t a fan of Section 230, either. During his presidential campaign, he said he wanted it repealed. As president, Biden has said he wants it to be reformed by Congress. Until Congress can agree on what’s wrong with Section 230, however, it doesn’t look likely that they’ll pass a law that significantly changes it.

However, some Republican states have been making their own anti-Section 230 moves. In 2021, Florida passed the Stop Social Media Censorship Act, which prohibits certain social media platforms from banning politicians or media outlets. That same year, Texas passed HB 20, which forbids large platforms from removing or moderating content based on a user’s viewpoint.

Neither law is currently in effect. A federal judge blocked the Florida law in 2022 due to the possibility of it violating free speech laws as well as Section 230. The state has appealed to the Supreme Court. The Texas law has made a little more progress. A district court blocked the law last year, and then the Fifth Circuit controversially reversed that decision before deciding to stay the law in order to give the Supreme Court the chance to take the case. We’re still waiting to see if it does.

If Section 230 were to be repealed — or even significantly reformed — it really could change the internet as we know it. It remains to be seen if that’s for better or for worse.

Update, February 23, 2023, 3 pm ET: This story, originally published on May 28, 2020, has been updated several times, most recently with the latest news from the Supreme Court cases related to Section 230.






Meta Verified and Twitter Blue mark the end of free social media


“If you’re not paying for the product, you’re the product” has long been a popular saying about the business of social media.

The idea is that users don't pay for apps like Instagram or Twitter with money, but with something else: their attention (and sometimes their content), which is sold to advertisers.

But now social media's free, advertising-supported model is under pressure. Social media companies can no longer make as much profit from free users as they used to. A weakening ad market, privacy restrictions by Apple that make it harder to track users and their preferences, and constant regulatory threats have made it more difficult for social media apps to sell ads.

So we’re seeing the beginning of what could be a new era in social media: pay-to-play.

On Sunday, Meta became the newest and largest major social media company to announce a paid version of its product, a program called Meta Verified. For $12 per month each on Facebook and Instagram, users get a blue verified badge, increased protection against account impersonation, access to "real people" in customer support to help with common account issues, and, most importantly, increased reach and visibility: paying users' content will show up more in searches, comments, and recommendations. The company said it is testing the feature in Australia and New Zealand this week and will roll it out to the US and other countries soon.

Meta's news comes months after Twitter released an $8-per-month paid verification program as part of new owner Elon Musk's revamped Twitter Blue product. Meta is notorious for cloning competitors, but its subscription service is more than just another case of imitation; it's part of an industry-wide trend. In recent years, Snap, YouTube, and Discord have introduced or expanded premium products that charge users for special perks. Snap gives subscribers early access to new features, YouTube serves them fewer ads, and Discord offers more customization options for people's chat channels.

Meta, which owns the world's largest social media apps, is now cementing this trend toward a two-tiered user system, in which only paying users get services people might normally expect for free, such as proactive protection from scammers trying to impersonate them and direct contact with customer support for technical problems. Meta says it still offers some basic support to free users, but that it has to charge to cover the costs of anything beyond that.

But the most newsworthy part of Meta's paid verification plan isn't that paying users get verified or get better customer support; it's that they get more visibility on Facebook and Instagram.

Until now, in theory, everyone had the same chance of being seen on social media. Now, paying $12 a month for Meta Verified increases the chances that others will discover your account and posts. That's an attractive proposition for creators who run professional businesses on Instagram and Facebook, but it risks degrading the quality of everyone else's experience.

With this new program, Meta is blurring the line between advertising and organic content more than ever before. It also risks making the apps feel even more commercialized; many users already complain that Instagram can feel like a virtual shopping mall, full of creators plugging their content and products, and it's hard to imagine them enjoying more of that experience.

It remains to be seen what impact Meta Verified will have on Facebook’s ecosystem. But going forward, it’s clear that if you want to be fully seen, trusted, and cared for on Facebook, Instagram, Twitter, and other platforms participating in the premium model, you’ll have to pay.

Security and support are now luxuries, not a given

If someone steals your credit card and impersonates you, you expect your bank to protect you. If you buy rotten milk at the supermarket, you expect to get your money back at the register. Consumers expect a basic level of customer service from businesses.

So it's understandable that some users reacted to Meta's news by insisting that basic services like customer support and account security should be free.

“This should be part of the core product. Users shouldn’t have to pay for this,” one user commented on Mark Zuckerberg’s Facebook page after the announcement. But checking people’s government IDs to verify them and providing on-call customer service costs money, and Meta says it has to charge to cover those costs.

Social media customer support and security services have always been somewhat broken and unreliable. Apps like Facebook, which serves 2 billion people a day for free, never effectively scaled basic services such as a customer helpline for people locked out of their accounts, and verification was always selective. In many cases, the users who received personal attention were VIPs: government officials, celebrities, media figures, and people who happened to know someone who worked at the company.

So while it may seem like Facebook is charging for what it used to do for free, it’s actually charging for services that never really worked in the first place.

The average user might not want to pay $12 a month per app, or $24 for both Facebook and Instagram, for a blue badge. But if you run your business on those apps, that’s a different story.

Mae Karwowski, CEO of a social media influencer marketing firm, said that with “so many people running business empires” on social media, paying for Meta’s verification package is the “logical next step” for them. The social media influencer industry was worth an estimated $16 billion in 2022. TikTok is growing, but Instagram remains the most popular influencer marketing platform for brands. Facebook and Instagram are also especially popular with business owners; Facebook alone hosts more than 200 million businesses, many of which operate primarily on the network.

The blue badge is important to creators and business owners, Karwowski said.

Before Meta announced this paid tier, Karwowski said, clients often asked her for help getting verified on Instagram, where users could previously apply for verification only by making the case that they were notable public figures.

“Before, it had to be like, ‘Oh, like someone’s best friend’s cousin works on Instagram,’ and then find them on LinkedIn and message them,” Karwowski said. “There was very little standardization. At least now we have some processes.”

Still, some influencers Recode spoke to said they didn’t see enough value in Meta Verified.

“There aren’t many people impersonating me, so it’s not that important to me,” said Oorbee Roy, a skateboarder and mom who goes by the handle @auntyskates. “And on the other hand, I feel like I’m getting close to [being verified] on my own.”

What Roy did think was worth paying for was Instagram’s promise of greater visibility.

“I have content dedicated to a particular niche, and I want to be able to reach that niche,” she said.

Which brings us to perhaps the most valuable of Facebook and Instagram’s pay-to-play perks.

Paying for reach

Prior to this announcement, if you wanted to pay to promote your post or account on Facebook or Instagram, you had to do it as an ad, something clearly labeled to users as advertising, sponsored, or “paid content.” (If promotion happened without a label, it was either unintentional or the user was essentially breaking the platform’s rules.)

Now, Instagram and Facebook are building a feature in which people pay for eyeballs directly, without their promotions being marked as ads.
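
Neither Meta nor any other platform has published the mechanics of these boosts, so the sketch below is purely illustrative: a toy ranking function in which a subscriber flag multiplies a post’s score upward. The multiplier value, field names, and scoring logic are all assumptions invented for this example, not anything drawn from Meta’s actual systems.

```python
from dataclasses import dataclass

# Toy illustration only: real feed ranking uses learned models with
# thousands of signals. This shows how a paid-subscriber flag could
# tilt ranking without the post being labeled as an ad.
SUBSCRIBER_BOOST = 1.5  # invented value; platforms don't disclose theirs

@dataclass
class Post:
    author: str
    relevance: float           # stand-in for the model's relevance score
    author_is_subscriber: bool

def rank_feed(posts: list[Post]) -> list[Post]:
    def score(post: Post) -> float:
        base = post.relevance
        if post.author_is_subscriber:
            base *= SUBSCRIBER_BOOST  # paid reach, applied silently
        return base
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("creator_a", 0.80, author_is_subscriber=False),
    Post("creator_b", 0.60, author_is_subscriber=True),  # boosted past A
])
print([p.author for p in feed])  # ['creator_b', 'creator_a']
```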

Jason Goldman, Twitter’s Vice President of Products from 2007 to 2010, said: “It’s just a different way of pricing.”

These subscriptions could help boost revenue for Instagram and Facebook, whose traditional advertising businesses are struggling, but they could also jeopardize relationships with users who don’t want to see more promoted content.

“It’s kind of disappointing to see Instagram starting to lean toward more commercial, more money-seeking businesses,” said Erin Sheehan, a New York City-based lifestyle influencer with more than 12,000 followers who goes by the handle @girlmeetsnewyorkcity.

“I wanted to switch over to TikTok and get in on that organic market, and I feel like this might push me a step further,” she said.

TikTok has attracted a new generation of creators, many of whom switched from older apps such as Instagram because what Sheehan called “organic content” can easily go viral there, even when it’s relatively amateurish. The app currently has no premium subscription model, but it has successfully expanded its advertising business at a time when the growth of competitors like Meta and Snap has slowed.

Social media incumbents such as Meta and YouTube have battled TikTok for younger users and creators; Instagram, in particular, has rolled out new programs to attract creators to its TikTok clone, Reels. That makes it imperative for Instagram and Facebook to ensure that promoted content from paid subscribers doesn’t turn users off, and that creators still want to share their work on the apps.

Meta told Recode that it’s still focused on surfacing content that people want to see.

“Our intention is to show content that we believe people will enjoy, and this continues with the increased visibility provided through Meta Verified,” Meta spokesperson Paige Cohen said in a statement. “As we test and learn from Meta Verified, we will continue to focus on ensuring greater visibility of subscribers’ content in ways that are most valuable to the ecosystem as a whole.”

Meta also said it isn’t prioritizing paid content everywhere. But how far the boost extends matters, because the company is competing with TikTok in short-form video, and Reels is a major focus for it.

This paid social media model is still in its early stages, but from what we know so far, only a small percentage of users are likely to be willing to pay. As of mid-January, reportedly just 0.2% of Twitter’s total user base was paying for Twitter Blue, which launched in November.

Meta may have better luck finding customers for its verification program, thanks to its sheer size (it has more than 10 times as many users as Twitter) and the larger number of influencers running real businesses on its platforms. It is also rolling the program out more carefully than Twitter did.

But there are big risks to this paid model. Social networks are built on their users, whether those are ordinary people posting pictures of dogs and babies or professional influencers building careers on their followings. Creating a hierarchy among those users may discourage some of them from sharing at all. And with many young people souring on social media, either logging off entirely or seeking apps that feel more trustworthy and less commercial, Meta may be pushing away exactly the users it most needs to stay relevant.




WhatsApp account takeover shows why phone numbers are not proper logins


When Ugo moved to a new country last October, he got a new phone number. Ugo lives in Europe, where WhatsApp is hugely popular, and although he didn’t immediately register his new number with the app, he was able to keep using it as normal. Everything changed when he finally told WhatsApp he had a new phone number.

His profile picture changed to that of a young woman, and his phone was flooded with new messages from Italian-speaking strangers, including from a group chat he had suddenly been added to.

Ugo, who doesn’t want to reveal his last name for privacy reasons, had inadvertently hijacked the WhatsApp account of the woman who held his new phone number before him. She had apparently neglected to tell the app that she’d changed numbers, so when Ugo registered the number, the WhatsApp account still associated with it was merged into his.

“I don’t even know if she was able to regain access to her account, because for days and weeks I was still getting her messages,” Ugo told Recode. “She was lucky because I had good intentions. Her account could have been merged with someone less well-meaning.”

Ugo isn’t the only WhatsApp user this has happened to. Phone number recycling is a problem WhatsApp is aware of, and one it largely leaves to users to prevent or solve. But it’s not just WhatsApp.

Countless apps and services use phone numbers to identify you, but those numbers aren’t always permanent, and they’re vulnerable to hackers. Because phone numbers were never meant to be persistent identifiers, incidents like Ugo’s are a pervasive, ongoing problem that the industry has known about for years. At least two research papers on phone number recycling have documented the risks, which range from targeted attacks by hackers to strangers who simply buy up recently abandoned numbers, gaining access to parts of your life that should be completely disconnected from them.

Yet users often bear the burden of protecting themselves from the security issues their favorite apps create. Even the measures these services recommend for extra security, such as SMS-based multi-factor authentication, can actually introduce new vulnerabilities.

The number problem

If carriers didn’t reuse phone numbers, they would quickly run out of them. An estimated 35 million numbers are recycled in the United States each year, according to a 2017 FCC analysis of data from the North American Numbering Plan Administrator (NANPA). There are currently 2.74 billion assignable phone numbers in the United States and its territories, NANPA told Recode, though not all of them have actually been handed out (about half are yet to be allocated, according to FCC data). So once you give up your phone number, it’s only a matter of time before it’s reassigned to someone else.

In the US, carriers must now wait at least 45 days before reassigning a number to a new user. But that minimum waiting period didn’t take effect until 2020; before then, it was up to carriers to decide how long to wait before recycling numbers, and some waited only a few days, according to the FCC’s report. In France, where Ugo got his new phone number, the minimum waiting period was recently reduced from three months to 45 days.

This makes it very easy for communications to reach the wrong person. Decades ago, a recycled number might have meant some annoying wrong-number calls to your landline, but you weren’t bombarded with texts, images, and videos, and your phone number wasn’t the key that unlocked all kinds of goods and services.

In the age of smartphones, however, recycled phone numbers are a major privacy and security issue. Most of us keep much of our lives on our phones and their apps. Some apps, such as WhatsApp, require a phone number to register an account; others use your phone number as a security measure. But phone numbers were never intended to perform these functions, and as Ugo’s story shows, pressing them into service has unintended consequences.

But even before the iPhone changed the mobile game, there were concerns about using phone numbers as identifiers.

“I saw this issue happen in 2001 when I was working at Vodafone,” said Marc Rogers, now chief security officer at cybersecurity firm Q-Net Security.

In 2006, SFGate published a story about a man who got a recycled number and was barraged with texts from various women, much to his fiancée’s discomfort. Such stories have only become more common. Lately, there have been plenty of reports of phone numbers changing hands and strangers taking over the previous owners’ accounts on platforms like Facebook and Airbnb. It has happened with WhatsApp before, too.

Accidental hijacking isn’t the only problem. Mobile phones use a SIM, or subscriber identity module, usually stored on a small removable card but embedded in the device itself on newer iPhones. If a malicious actor hijacks your SIM (known as SIM jacking or SIM swapping), or manages to reroute your text messages, they gain control of your phone number, and with it, access to any accounts the number can unlock.

“The whole SIM swap ecosystem has sprung up over the SMS vulnerability,” said Rogers.

In a study of the security risks of recycled phone numbers, Princeton University computer science professor Arvind Narayanan and researcher Kevin Lee found that most of the numbers available for purchase from T-Mobile and Verizon were still associated with accounts on various websites, meaning the previous owners hadn’t told those services that their numbers had changed. Of the 200 recycled numbers Lee and Narayanan bought for their research, about 10% received sensitive data intended for the previous owner, such as personally identifiable information or multi-factor authentication passcodes, within just one week.

Phone numbers aren’t the only problematic identifiers. Consider the Social Security number: it started as a way to track workers’ earnings even if they changed jobs, addresses, or names, but it has evolved into a de facto national identification number used by the IRS, financial institutions, and even health care providers. Anyone whose identity has been stolen can tell you that system isn’t perfect either. Email addresses serve a similar unintended purpose, which creates privacy issues if yours is constantly mistaken for someone else’s.

The industry could do more, but it probably won’t

WhatsApp says it takes several steps to prevent scenarios like Ugo’s, including deleting the data from accounts that have been inactive for at least 45 days once their numbers are activated on another device.

“If for some reason you no longer want to use WhatsApp associated with a particular phone number, the best thing to do is transfer it to a new number or delete your account within the app,” WhatsApp told Recode. “In all cases, we highly recommend using two-factor authentication for added security.”

These solutions leave most of the work to users, some of whom don’t realize the responsibility falls on them. Enabling two-step or multi-factor authentication by default, as companies like Google and Amazon do for some of their services, could stop these hijackings. WhatsApp could also periodically ask users to verify their phone numbers, which would prompt people like the previous owner of Ugo’s new number to move their accounts before they can be taken over.
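
As a thought experiment, here is a minimal sketch of what such periodic re-verification could look like on the server side. The 90-day interval, grace period, and field names are all assumptions invented for illustration; nothing here describes WhatsApp’s actual systems.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of periodic number re-verification. The interval,
# grace period, and account fields are invented for this example; they
# don't reflect WhatsApp's real implementation.
REVERIFY_INTERVAL = timedelta(days=90)
GRACE_PERIOD = timedelta(days=14)

def reverification_status(last_verified: datetime, now: datetime) -> str:
    """Decide whether an account's phone number needs re-confirmation."""
    age = now - last_verified
    if age < REVERIFY_INTERVAL:
        return "ok"                      # number confirmed recently
    if age < REVERIFY_INTERVAL + GRACE_PERIOD:
        return "prompt_user"             # ask: "Is this still your number?"
    return "lock_pending_verification"   # block merges until re-confirmed

now = datetime.now(timezone.utc)
print(reverification_status(now - timedelta(days=100), now))  # prompt_user
```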

There is more the industry, from app developers to carriers to the makers of phone operating systems, could do. But companies usually don’t act unless they’re legally required to or something truly bad has happened. In the meantime, many of them keep asking users for phone numbers, even when they don’t strictly need them.

“We’ve known this to be a problem for 20 years, but little has been done to mitigate consumer risk. It’s time to start putting pressure on telecom companies to look at ways [to address it],” Rogers said.

Ultimately, businesses have their own best interests at heart, and those may not align with yours. You have to protect yourself.

What you can do

If you don’t plan on changing your number, you may think none of this applies to you. But that change isn’t always planned. A hit song might come out with your phone number in the chorus. The president might read it out at a campaign rally. Or you might post it on Twitter to make a point about an AI chatbot without thinking through the consequences. There are more serious reasons to change your number, too. Or you may die, in which case you’ll no longer care about privacy and security issues, but the people you leave behind might.

Even if you don’t plan to change your number right away, be careful when texting friends and family who have changed theirs: “You may end up sending it to the new owner,” said Lee, the Princeton researcher.

The best way to solve the problem is to never have it in the first place: avoid attaching phone numbers to accounts whenever possible. Sometimes you have no choice, as when signing up for a WhatsApp account, but you can at least minimize your exposure.

“People change their numbers for all sorts of reasons. It’s virtually impossible to update their number in every system and contact list,” said Narayanan.

You should also enable two-factor authentication whenever possible, but don’t use your phone number as the second factor. Not only does it become useless if you lose access to that number, but it’s generally a poor way to secure an account, given how vulnerable phone numbers can be. Use an authenticator app or a hardware security key instead; they can’t be SIM jacked and have nothing to do with your phone number.
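
For the curious, here is roughly what app-based two-factor authentication does under the hood. Authenticator apps implement TOTP (RFC 6238): a shared secret plus the current time yields a short-lived six-digit code, with no phone number or SMS anywhere in the loop. This sketch uses the third-party pyotp library; the enrollment flow around it is simplified for illustration.

```python
import pyotp  # third-party TOTP library: pip install pyotp

# At 2FA enrollment, the service generates a shared secret once and
# shows it to the user (usually as a QR code) for their authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Both sides derive the same six-digit code from the secret and the
# current 30-second time window. Nothing here involves a phone number,
# so SIM swapping gains an attacker nothing.
code_from_app = totp.now()
print("Code:", code_from_app)
print("Valid?", totp.verify(code_from_app))  # True within the window
```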

Some apps and services require you to attach a phone number or offer only text-based authentication. You can avoid using them, but that’s not always possible. If you do give up a number, Lee and Narayanan’s study suggests using a number-parking service to keep your old number out of circulation; some cost only a few dollars a month. It doesn’t have to be forever, either: a year or two gives you time to find the accounts tied to the old number, switch them to the new one, and let your contacts know the number has changed.

That marginal cost may well be worth it, given everything that could go wrong if your phone number passed to someone else. Otherwise, you’re leaving a lot in the hands of carriers, apps, websites, and whoever gets your phone number next. At that point, all you can do is hope they take care of it.






Elon Musk’s Twitter is getting worse


It was far from perfect, but if you’re used to the days when Twitter was a reliable place to digest breaking news, politics, celebrity gossip, and personal musings, it’s time to embrace a new reality.

Twitter is becoming a degraded product.

In the four months since Elon Musk took over the company, the app has experienced major glitches, including several hours last week when users around the world were unable to post tweets, send messages, or follow new accounts. Like other social networks, Twitter has always had periodic outages, but under Musk the app’s unpredictability goes beyond technical issues. Musk’s erratic decisions are undermining the integrity of Twitter’s core product and alienating a wider audience.

As Platformer reported, Musk’s Super Bowl meltdown is one of the clearest signs of Twitter’s decline so far. Musk, apparently furious that his tweet about the Super Bowl got fewer views than President Biden’s, flew to Twitter’s headquarters and ordered engineers to change the algorithms behind Twitter’s flagship product so that his own tweets would appear higher than anyone else’s on users’ “For You” pages. Musk’s cousin James Musk, now a full-time employee reportedly known internally as a “fixer” type, sent an urgent 2 am message asking any qualified engineer for help, and the company put roughly 80 engineers on manually tweaking Twitter’s infrastructure to build a system for promoting Musk’s tweets.

Shortly after the change, many users began noticing their feeds were bombarded with Musk’s tweets. Musk seemed to acknowledge the phenomenon, posting a meme of a woman labeled “Elon tweets” force-feeding a bottle of milk to another woman labeled “Twitter,” and later posting that Twitter was “tuning” the algorithm.

The episode shows how unreliable Twitter is becoming. The platform’s basic product design is now tailored to the whims of Musk, a leader who seems to put his image and his “free speech absolutist” ideology above the interests of the business.

A few examples: In the name of letting people say almost anything they want on Twitter, Musk has restored thousands of previously suspended user accounts, including those of neo-Nazis and QAnon promoters. That has helped drive a rise in hate speech on the platform, researchers told the New York Times, including a more than 200% increase in anti-Black slurs between Musk’s takeover and December 2022, and more users suffering harassment as a result.

On the product side, Musk has rushed out projects that caused chaos on the platform. His best-known product change, Twitter Blue, a paid version of the app that lets anyone purchase a verification checkmark, had a disastrous initial rollout. Musk, who has long attacked mainstream news outlets, framed Twitter Blue as a way to strip “elites” like journalists of special privileges, such as checkmarks, unless they paid for them. But the ill-conceived change to Twitter’s verification policy flooded the platform with spam as newly verified accounts used their checkmarks to impersonate celebrities, including Musk himself. The release was pulled back and delayed twice before finally shipping in December.

Under Musk, Twitter also recently blocked third-party clients like Tweetbot that improved the user experience. Twitter has promised developers an improved paid version of its API, but its abrupt cutoff of access soured relations with the outside programmers who had enriched the site with add-on apps.

And with Musk having laid off or fired more than half of Twitter’s staff, including teams responsible for fixing bugs, moderating content, and courting advertisers, there aren’t enough people left to clean up the mess.

When Elon Musk first acquired Twitter, many were skeptical of the billionaire, but there was also optimism that he could turn the company around. Investors hoped that Musk, a prolific and successful entrepreneur, would revive a company seen as unprofitable and underperforming its business potential. Ideological supporters of the self-proclaimed “free speech absolutist” saw him as someone who could ease Twitter’s restrictions and open it up to a wider range of speech.

Today, Musk’s potential to improve Twitter, both as a business and as an ideological project, remains unrealized.

On the business side, Twitter’s main revenue line is in jeopardy: some 500 high-profile advertisers have suspended spending on the platform since Musk took over, amid an “unprecedented” increase in hate speech. According to Reuters, Twitter’s top 30 advertisers cut their spending by an average of 42% by the end of 2022. Musk’s solution to the advertiser exodus is to get more people to pay for Twitter, but so far that doesn’t seem to be working: according to a recent report in the Information, as of mid-January 2023 only about 180,000 users in the US paid for a Twitter subscription, less than 0.2% of the platform’s monthly active users.

Musk claimed in November that Twitter’s user base was the largest it had ever been, but external data contradicts that claim. Twitter’s traffic was actually higher last March than it is today, and year-over-year growth in visitors fell from 4.7% in November 2022, when Musk took over, to -2% in January 2023.

Ideologically, Musk’s Twitter has repeatedly failed to live up to his own free speech standards, starting with Musk suspending comedians like Kathy Griffin (who made fun of him) and briefly banning users from promoting Twitter competitors like the decentralized social network Mastodon (Musk backtracked on that policy after a flood of criticism).

Even some popular figures who backed Musk’s free speech stance, like journalist Bari Weiss, withdrew their support after Musk banned several high-profile journalists who criticized him. (Musk claimed the journalists had doxxed him, which they denied.) In recent months, former Twitter CEO and co-founder Jack Dorsey, who last April called Musk the person he trusted to run Twitter and “extend the light of consciousness,” has also changed his stance and begun openly criticizing Musk’s leadership, including the recent string of technical glitches.

The main groups that appear to be staunch supporters of the new Twitter are conservative figures and politicians. After Musk reinstated the frozen accounts of right-wing provocateurs and political leaders, including shock jock Andrew Tate, Rep. Marjorie Taylor Greene (R-GA), and former President Donald Trump, he achieved hero status in right-wing circles; Republicans have even drafted a bill in his name that would require the Justice Department to disclose money spent on big tech companies. Musk has also earned conservative praise for releasing the “Twitter Files,” internal documents about the company’s past dealings with US politicians and government agencies.

But even if Musk’s conservative fans like the way he runs Twitter, that won’t mean much if the app keeps breaking and more users leave the platform altogether. Nor will it help Musk, who needs a sound, money-making app to pay off the roughly $13 billion he owes his creditors.






YouTube CEO Susan Wojcicki is stepping down and will be replaced by Neal Mohan


YouTube CEO Susan Wojcicki, who has led the world’s largest video site for the past nine years, is stepping down from her role. She will be replaced by her longtime lieutenant, Neal Mohan.

In a letter sent to YouTube employees, Wojcicki said she was leaving to “start a new chapter focused on my family, health, and personal projects I’m passionate about.”

During her tenure, YouTube became increasingly important to the business of Google, which acquired the site in 2006, and of Alphabet, the holding company that owns both. In 2022, YouTube generated $29.2 billion in gross advertising revenue.

Wojcicki’s resignation is also symbolically meaningful for Google and for tech in general. For years, she was one of the few women running a major tech business. And she was integral to Google’s founding: she famously rented out her Silicon Valley garage to co-founders Larry Page and Sergey Brin in 1998, and a year later she joined Google as its 16th employee.

“Susan holds a unique place in Google’s history and has made some of the most incredible contributions to products used by people around the world,” Page and Brin said in a statement. “We are grateful for all she has accomplished over the years.”

Wojcicki started out running marketing at Google and helped build its online advertising business; at one point she ran the company’s own video service, which was trying to compete with YouTube, before ultimately arguing that Google should buy the site instead.

As YouTube’s leader, she focused on making the site more appealing to advertisers while trying to rein in the large, unruly group of video creators who powered it.

That regularly drew criticism from both sides: video creators said YouTube’s rule changes and moderation decisions made it harder for them to earn a living, while outsiders said the company wasn’t doing enough to discourage hate speech and other objectionable content. “I was able to upset everyone,” Wojcicki told me in a 2019 interview.

Wojcicki has worked closely with her successor for many years. The two first worked together building Google’s display advertising business, and Mohan has been Wojcicki’s number two at YouTube since 2015.

“Susan has built an extraordinary team and, in Neal, has a successor who is ready to hit the ground running and lead YouTube to success over the next decade,” Alphabet CEO Sundar Pichai said in a statement.

Below is the full text of Wojcicki’s letter to employees.

Subject: Personal update

Hello YouTubers,

Twenty-five years ago, I made the decision to join a couple of Stanford graduate students who were building a new search engine. Their names were Larry and Sergey. I saw the potential of what they were building, which was incredibly inspiring, and even though the company had only a few users and no revenue, I decided to join the team.

It would turn out to be one of the best decisions of my life.

Over the years, I’ve played many roles and done so many things: managing marketing, co-creating Google Image Search, leading Google’s first video and book search efforts, working on the early stages of AdSense, helping with the acquisitions of YouTube and DoubleClick, serving as Senior Vice President of Advertising, and, for the last nine years, serving as CEO of YouTube. I took on each challenge that came my way in service of a mission that benefits the lives of so many people around the world: helping them find information, tell stories, and support creators, artists, and small businesses. I’m so proud of everything we’ve achieved. It has been exhilarating, meaningful, and all-consuming.

Today, after nearly 25 years here, I’m stepping back from my role as the head of YouTube to start a new chapter focused on my family, health, and personal projects I’m passionate about.

I feel able to do this because we have a great leadership team in place at YouTube. When I joined nine years ago, one of my top priorities was assembling that team, and Neal Mohan is one of its leaders. He will be the new SVP and head of YouTube. I’ve spent nearly 15 years of my career working with Neal, ever since he came to Google with the DoubleClick acquisition in 2007, where his role grew into SVP of Display and Video Advertising. He became YouTube’s Chief Product Officer in 2015. Since then, he has built a top-notch product and UX team, played a key role in launching some of our biggest products, including YouTube TV, YouTube Music, Premium, and Shorts, and led our Trust and Safety team to ensure that YouTube lives up to its responsibility as a global platform. He has a wonderful sense for our product, our business, our creator and user community, and our employees. Neal will be a great leader for YouTube.

With everything we’re doing across short-form video, streaming, subscriptions, and the promise of AI, YouTube’s most exciting opportunities are ahead of us, and Neal is the right person to lead us through them.

Every YouTube employee I’ve had the chance to work with has done so much to make this platform better over the years, enabling our amazing community of creators, artists, viewers, and advertisers not just to coexist but to thrive together. Thank you!

In the short term, I plan to support Neal and help with the transition, which will include continuing to work with the YouTube team, coaching team members, and meeting with creators. Over the long term, I’ve agreed with Sundar to take on an advisory role across Google and Alphabet, which will allow me to draw on my years of diverse experience to offer advice and guidance across the portfolio of Google and Alphabet companies. This is a hugely important time for Google, and it reminds me of the early days: an era of incredible product and technology innovation, huge opportunities, and a healthy disregard for the impossible.

Beyond that, while I’m still here, I want to take the opportunity to thank the thousands of people around the world whom I’ve worked with and learned from. I’d especially like to thank Larry and Sergey for inviting me along on the adventure of a lifetime. I always dreamed of working for a company with a mission that could change the world for the better, and thanks to you and your vision, I got the chance to live that dream. It has been an absolute privilege to be a part of it, and I’m excited for what’s next.

I am always grateful for your help.

Susan




Meta and TikTok are in Congress’s crosshairs over child-protection laws


The big bipartisan push against Big Tech in the new Congress looks set to center on protecting children. Senate Majority Leader Chuck Schumer has reportedly said that passing child safety bills is a priority for him, and President Joe Biden recently said the same.

If this week’s Senate Judiciary Committee hearing on protecting children online is any indication, such bills could actually pass. Witnesses testified before an audience of mostly sympathetic senators about how children are harmed by online content and by the platforms that help push it to them. None of the proposals have become law yet, but the new Congress seems keen to make that happen.

In recent years, there has been bipartisan consensus in Congress that something must be done about the power of Big Tech, but not about what or how; lawmakers disagree over whether platforms moderate too much or not enough. Now they seem to have found their cause, and its victims: children.

The desire to protect children from internet harms and abuse is stronger than ever in the 118th Congress, and it’s more likely than ever that at least one law to do so will actually pass. But critics say these bills may not actually help children, and that they may come at the expense of free speech and privacy.


