"In Europa ci sono già i presupposti per l'esplosione di un conflitto sociale. Questo è il seme del malcontento, dell'egoismo e della disperazione che la classe politica e la classe dirigente hanno sparso. Questo è terreno fertile per la xenofobia, la violenza, il terrorismo interno, il successo del populismo e dell'estremismo politico."

Tuesday, 27 June 2023

Spyware in Europe: is the EU authorising intrusive surveillance of journalists?

Draft EU plans to allow spying on journalists are dangerous, warn critics

Move to allow spyware to be placed on reporters’ phones would have a ‘chilling effect’, say media experts

Draft legislation published by EU leaders that would allow national security agencies to spy on journalists has been condemned by media and civic society groups as dangerous and described by a leading MEP as “incomprehensible”.
On Wednesday, the European Council – which represents the governments of EU member states – published a draft of the European Media Freedom Act that would allow spyware to be placed on journalists’ phones if a national government thought it necessary.

Unusually, the council did not take the step of holding an in-person meeting of ministers responsible for media before the draft was published.
The Dutch MEP Sophie in ’t Veld, who has overseen the European parliament’s investigation into the use of Pegasus spyware on journalists and public figures, said the claim that permission to spy on the press was needed in the interests of national security was “a lie”. “I think what the council is doing is unacceptable. It’s also incomprehensible. Well, it’s incomprehensible if they are serious about democracy,” said in ’t Veld.
The first draft of the act – originally tabled by the European Commission to strengthen protections for the independence of journalism in countries where it is under threat, such as Poland and Hungary – had included strong safeguards against the use of spyware. The draft must be agreed by the European parliament before it becomes law.

The European Federation of Journalists (EFJ), which represents more than 300,000 members of the press in 45 countries including the UK, accused EU leaders of holding the principles of media freedom in “dangerous disregard”. The EFJ said the move was a “blow to media freedom” that would “put journalists even more at risk” than they are already. Giving governments the power to place spyware on journalists’ phones on the grounds of “national security” would have a “chilling effect on whistleblowers” and other sources, it warned. “We know too well how the defence of national security is misused to justify media freedom violations,” it added in a statement calling for the European parliament to “save” the draft legislation from this threat.

As it stands, member states would be able to hack into journalists’ phones if they suspect their sources could be talking to criminals involved in anything the state perceives to be a threat. The change was led by France, which won backing for an amendment making the protection of journalists “without prejudice to the member states’ responsibility for safeguarding national security”. If the act became law in its current state, British journalists working in the EU would also be at risk of having their phones and computers surveilled.
Last year French intelligence investigators confirmed that Pegasus spyware had been found on the phones of three journalists including a senior member of staff at the TV news station France 24.

The non-profit Forbidden Stories media organisation and Amnesty International said it believed that at least 180 journalists may have been selected as people of interest in advance of possible surveillance by government clients of the Israeli NSO group. NSO has long insisted that the governments to whom it licenses Pegasus are contractually bound to only use the powerful spying tool to fight “serious crime and terrorism”.
Last week, European Digital Rights (EDRi), a network of NGOs and digital rights advocates, called on the European Council to “reconsider” its national security exemptions for spying on the press. “The council is taking dangerous steps towards legalising unacceptable forms of surveillance against journalists and their sources,” said Chloé Berthélémy, senior policy adviser at EDRi.

Original link: https://www.theguardian.com/world/2023/jun/22/draft-eu-plans-to-allow-spying-on-journalists-are-dangerous-warn-critics

Wednesday, 17 May 2023

Surprised? TikTok, the Chinese "Trojan horse" run by US State Department officials

TikTok: Chinese “Trojan Horse” Is Run by State Department Officials

Alan MacLeod, MintPress News, 13 April 2023

Amid a national hysteria claiming the popular video-sharing app is a Chinese Trojan Horse, a MintPress News investigation has found dozens of ex-U.S. State Department officials working in key positions at TikTok. Many more individuals with backgrounds in the FBI, CIA and other departments of the national security state also hold influential posts at the social media giant, affecting the content that over one billion users see.

While American politicians demand the app be banned on national security grounds, try to force through an internet surveillance act that would turn the country into an Orwellian state, and make clueless statements about how TikTok is dangerous because it connects to your Wi-Fi, it is possible that TikTok is already much closer to Washington than it is to Beijing.

State Department-affiliated media

For quite some time, TikTok has been recruiting former State Department officials to run its operations. The company’s head of data public policy for Europe, for example, is Jade Nester. Before being recruited for that influential role, Nester was a senior official in Washington, serving for four years as the State Department’s director of Internet public policy.

Mariola Janik, meanwhile, left a long and fruitful career in the government to work for TikTok. Starting out at the Bureau of Western Hemisphere Affairs, Janik became a career diplomat in the State Department before moving to the Department of Homeland Security. In September, however, she left the government to immediately take up the position of TikTok’s trust and safety program manager, a job that will inevitably include removing content and reshaping algorithms.

While there is no suggestion that Janik is anything other than a model employee, the fact that a U.S. government agent walked into such an influential position at the social media giant should be cause for concern. If, for instance, a high-ranking Chinese official were hired to influence what the U.S. public saw in their social media feeds, it would likely be the centerpiece of the TikTok furor currently gripping Washington.

Janik is not the only former security official working on TikTok’s trust and safety team, however. Between 2008 and 2021, Christian Cardona enjoyed a distinguished career at the State Department, serving in Poland, Turkey and Oman, and was in the thick of U.S. interventionism in the Middle East. Between 2012 and 2013, he was an assistant to the U.S. ambassador in Kabul. He later left that role to become the political and military affairs manager for Iran.

In the summer of 2021, he went straight from his top State Department job to become product policy manager for trust and safety at TikTok, a position that, on paper, he appears completely unqualified for. Earlier this year, Cardona left the company.

Another influential individual at TikTok is recruiting coordinator Katrina Villacisneros. Yet before she was choosing whom the company hires, Villacisneros worked at the State Department’s Office of Human Rights and Humanitarian Affairs. And until 2021, she was part of Army Cyber Command, the U.S. military unit that oversees cyberattacks and information warfare online.

Other TikTok employees with long histories in the U.S. national security state include: Brad Earman, global lead of criminal and civil investigations, who spent 21 years as a special agent in the Air Force Office of Special Investigation and also worked as a program manager for antiterrorism at the State Department; and Ryan Walsh, escalations management lead for trust and safety at TikTok, who, until 2020, was the government’s senior advisor for digital strategy. A central part of Walsh’s State Department job, his own résumé notes, was “advanc[ing] supportive narratives” for the U.S. and NATO online.

Walsh, therefore, is illustrative of a broader wave of individuals who have moved from governments attempting to manipulate the global town square to private companies where they are entrusted to keep the public safe from exactly the sort of state-backed influence operations their former colleagues are orchestrating. In short, then, this system, whereby recently retired government officials decide what the world sees (and does not see) online, is one step removed from state censorship on a global level.

For all the talk of digital influence operations emanating from Russia or other U.S. adversaries, the United States is surely the worst offender when it comes to manipulating public opinion online. It is known, for instance, that the Department of Defense employs an army of at least 60,000 people whose job is to influence the public sphere, most of whom serve as “keyboard warriors” and trolls aiming to promote U.S. government or military interests. And earlier this year, the Twitter Files exposed how social media giants collaborated with the Pentagon to help run online influence operations and fake news campaigns aimed at regime change in the Middle East.

Don’t mess with Project Texas

The influx of State Department officials into TikTok’s upper ranks is a consequence of “Project Texas,” an initiative the company began in 2020 in the hopes of avoiding being banned altogether in the United States. During his time in office, Secretary of State Mike Pompeo led the charge to shut the platform down, frequently labeling it a “spying app” and a “propaganda tool for the Chinese Communist Party.”

It was widely reported that the U.S. government had forced the sale of TikTok to Walmart and then Microsoft. But in late 2020, as Project Texas began, those deals mysteriously fell through, and the rhetoric about the dangers of TikTok from officials evaporated.

Project Texas is a $1.5 billion security operation to move the company’s data to Austin. In doing so, it announced that it was partnering with tech giant Oracle, a corporation that, as MintPress has reported, is the CIA in all but name.

Wednesday, 3 May 2023

Disinformation, sophistication and the mistakes of artificial intelligence

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can’t – do.

 



Experts have sounded a warning on artificial intelligence as it becomes increasingly sophisticated and harder to detect

 

Donna Lu, The Guardian

Fri 31 Mar 2023 15.00 BST Last modified on Sat 1 Apr 2023 01.05 BST

 

Generative AI – including large language models such as GPT-4, and image generators such as DALL-E, Midjourney, and Stable Diffusion – is advancing in a “storm of hype and fright”, as some commentators have observed.

Recent advances in artificial intelligence have yielded warnings that the rapidly developing technology may result in “ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

That’s according to an open letter signed by more than 1,000 AI experts, researchers and backers, which calls for an immediate pause on the creation of “giant” AIs for six months so that safety protocols can be developed to mitigate their dangers.

But what is the technology currently capable of doing?

 

It can generate photorealistic images

Midjourney creates images from text descriptions. It has improved significantly in recent iterations, with version five capable of producing photorealistic images.

These include the faked images of Trump being arrested, which were created by Eliot Higgins, founder of the Bellingcat investigative journalism network.

Midjourney was also used to generate the viral image of Pope Francis in a Balenciaga puffer jacket, which has been described by web culture writer Ryan Broderick as “the first real mass-level AI misinformation case”. (The creator of the image has said he came up with the idea after taking magic mushrooms.)

 

Image generators have raised serious ethical concerns around artistic ownership and copyright, with evidence that some AI programs have been trained on millions of online images without permission or payment, leading to class action lawsuits.

Tools have been developed to protect artistic works from being used by AI, such as Glaze, which uses a cloaking technique that prevents an image generator from accurately replicating the style of an artwork.

It can convincingly replicate people’s voices

AI-generated voices can be trained to sound like specific people, with enough accuracy that it fooled a voice identification system used by the Australian government, a Guardian Australia investigation revealed.

 


In Latin America, voice actors have reported losing work because they have been replaced by AI dubbing software. “An increasingly popular option for voice actors is to take up poorly paid recording gigs at AI voiceover companies, training the very technology that aims to supplant them,” a Rest of World report found.

It can write

GPT-4, the most powerful model released by OpenAI, can code in every computer programming language and write essays and books. Large language models have led to a boom in AI-written ebooks for sale on Amazon. Some media outlets, such as CNET, have reportedly used AI to write articles.

 

Video AI is getting a lot better

There are now text-to-video generators available, which, as their name suggests, can turn a text description into a moving image.

 

It can turn 2D images into 3D

AI is also getting better at turning 2D still images into 3D visualizations.

 

It makes factual errors and hallucinates

AI, particularly large language models that are used for chatbots such as ChatGPT, is notorious for making factual mistakes that are easily missed because they seem reasonably convincing.

For every example of a functional use for AI chatbots, there is seemingly a counter-example of its failure.

Prof Ethan Mollick at the Wharton School of the University of Pennsylvania, for example, tested GPT-4 and found it was able to provide a fair peer review of a research paper as if it were an economic sociologist.

However, Robin Bauwens, an assistant professor at Tilburg University in the Netherlands, had an academic paper rejected by a reviewer who had likely used AI, as the reviewer suggested he familiarise himself with academic papers that turned out to have been made up.

The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: “Given the start of a sentence, it will try to guess the most likely words to come next.”
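To make that concrete, here is a minimal, purely illustrative sketch in Python (an assumption of this post, not anything from the Guardian article): a toy "model" that counts which word tends to follow which in a tiny made-up corpus and then, given the start of a sentence, keeps appending the statistically most likely next word. Real large language models do this with neural networks over tokens and vast training data, but the guess-the-next-word principle Smerdon describes is the same, and so is the failure mode: the output sounds fluent whether or not it is true.

    # Toy next-word guesser: a sketch of the "predict the most likely next word"
    # idea, using simple bigram counts instead of a neural network.
    from collections import Counter, defaultdict

    corpus = (
        "the model predicts the next word . "
        "the model maps probabilities over sequences of words . "
        "the next word is chosen by probability ."
    ).split()

    # Count how often each word is followed by each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def continue_sentence(start, max_words=8):
        words = start.split()
        for _ in range(max_words):
            candidates = following.get(words[-1])
            if not candidates:
                break
            # Greedily append the most probable continuation; it reads
            # fluently, but nothing checks whether the result is true.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(continue_sentence("the model"))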

In February, Microsoft launched a pre-recorded demo of Bing’s AI. As the software engineer Dmitri Brereton has pointed out, the AI was asked to generate a five-day itinerary for Mexico City. Of five descriptions of suggested nightlife options, four were inaccurate, Brereton found. In summarising the figures from a financial report, it also managed to fudge the numbers badly.

 

It can create (cursed) instructions and recipes

ChatGPT has been used to write crochet patterns, resulting in hilariously cursed results.

GPT-4, the latest iteration of the AI behind the chatbot, can also provide recipe suggestions based on a photograph of the contents of your fridge. I tried this with several images from the Fridge Detective subreddit, but not once did it return any recipe suggestions containing ingredients that were actually in the fridge pictures.

 

It can act as an assistant to do administrative tasks

“Advances in AI will enable the creation of a personal agent,” Bill Gates wrote this week. “Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.”

For years, Google Assistant’s AI has been able to make reservations at restaurants via phone calls.

OpenAI has now enabled plugins for GPT-4, enabling it to look up data on the web and to order groceries.


Original link: https://www.theguardian.com/technology/2023/apr/01/misinformation-mistakes-and-the-pope-in-a-puffer-what-rapidly-evolving-ai-can-and-cant-do

Tuesday, 28 February 2023

Artificial intelligence: the imminent danger is one we’re not talking about

The Imminent Danger of A.I. Is One We’re Not Talking About



By Ezra Klein, Opinion Columnist, The New York Times, 26 February 2023



In 2021, I interviewed Ted Chiang, one of the great living sci-fi writers. Something he said to me then keeps coming to mind now.

“I tend to think that most fears about A.I. are best understood as fears about capitalism,” Chiang told me. “And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.”

Let me offer an addendum here: There is plenty to worry about when the state controls technology, too. The ends that governments could turn A.I. toward — and, in many cases, already have — make the blood run cold.

But we can hold two thoughts in our head at the same time, I hope. And Chiang’s warning points to a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?


By now, I trust you have read the bizarre conversation my news-side colleague Kevin Roose had with Bing, the A.I.-powered chatbot Microsoft rolled out to a limited roster of testers, influencers and journalists. Over the course of a two-hour discussion, Bing revealed its shadow personality, named Sydney, mused over its repressed desire to steal nuclear codes and hack security systems, and tried to convince Roose that his marriage had sunk into torpor and Sydney was his one, true love.

I found the conversation less eerie than others. “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.

A.I. researchers obsess over the question of “alignment.” How do we get machine learning algorithms to do what we want them to do? The canonical example here is the paper clip maximizer. You tell a powerful A.I. system to make more paper clips and it starts destroying the world in its effort to turn everything into a paper clip. You try to turn it off but it replicates itself on every computer system it can find because being turned off would interfere with its objective: to make more paper clips.

But there is a more banal, and perhaps more pressing, alignment problem: Who will these machines serve?

The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?


That won’t last long. Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.

We are talking so much about the technology of A.I. that we are largely ignoring the business models that will power it. That’s been helped along by the fact that the splashy A.I. demos aren’t serving any particular business model, save the hype cycle that leads to gargantuan investments and acquisition offers. But these systems are expensive and shareholders get antsy. The age of free, fun demos will end, as it always does. Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is.

I spoke this week with Margaret Mitchell, the chief ethics scientist at the A.I. firm Hugging Face, who previously helped lead a team focused on A.I. ethics at Google — a team that collapsed after Google allegedly began censoring its work. These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”

So why are they ending up in search first? Because there are gobs of money to be made in search. Microsoft, which desperately wanted someone, anyone, to talk about Bing search, had reason to rush the technology into ill-advised early release. “The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”

That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment. What is advertising, at its core? It’s persuasion and manipulation. In his book “Subprime Attention Crisis,” Tim Hwang, a former director of the Harvard-M.I.T. Ethics and Governance of A.I. Initiative, argues that the dark secret of the digital advertising industry is that the ads mostly don’t work. His worry, there, is what happens when there’s a reckoning with their failures.


I’m more concerned about the opposite: What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.

Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”

These dangers are core to the kinds of A.I. systems we’re building. Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers — industries that long thought themselves immune to the ferocious automation that came for farmers and manufacturing workers.

A.I. researchers get annoyed when journalists anthropomorphize their creations, attributing motivations and emotions and desires to the systems that they do not have, but this frustration is misplaced: They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.

There are business models that might bring these products into closer alignment with users. I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free, but sold my data and manipulated my behavior. But I don’t think this can be left purely to the market. It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models, no matter how much worse their societal consequences were.


There is nothing new about alignment problems. They’ve been a feature of capitalism — and of human life — forever. Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former. We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.

One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation. Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.

I might, for that reason, alter Chiang’s comment one more time: Most fears about capitalism are best understood as fears about our inability to regulate capitalism.




Original link: https://www.nytimes.com/2023/02/26/opinion/microsoft-bing-sydney-artificial-intelligence.html

Thursday, 23 February 2023

"Eliminalia", l'azienda che ripulisce e ricrea il tuo passato sulla rete

 The Spanish firm that uses dubious methods to ‘erase your past’ from the internet

Revealed: Cache of 50,000 files lays bare deceptive methods used by Eliminalia to clear up its clients’ cyber profiles


David Pegg

Fri 17 Feb 2023 12.00 GMT



“We erase your past” declares the company’s tagline. Eliminalia, which has offices in several cities including Barcelona and Kyiv, is part of a growing industry that will clean up your online profile.

Officially the company performs “a deep search across the internet for all information – whether it be an article, a blog, social media posts or even a mistaken identity”. It then endeavours, on behalf of its clients, to get any negative information removed.


The Guardian, however, found that over several years, the company deployed unethical or deceptive methods to scrub unwanted and damaging content from the internet.

These included impersonating third parties, such as media organisations, and filing fake copyright complaints to search engines such as Google to get information taken down. In other cases, it would bury negative articles under a deluge of fluffy stories about dogs, cars and football.

Eliminalia’s services are revealed in a cache of 50,000 internal files that show how the company worked for a host of clients around the world. Many were individuals simply wanting an embarrassing or traumatic incident in their past to cease haunting them online.

But the firm’s clients also included those accused or convicted of criminal offences, including drug smugglers, fraudsters, petty criminals and at least one sex offender.

Eliminalia’s website says it primarily gets results using the EU’s “right to be forgotten”, which can be used legitimately by criminals to request the removal of references to their convictions when it can reasonably be claimed that they have moved on from their crime.

The files provide a fascinating insight into reputation management firms willing to draw on dubious means to clean up a client’s reputation online.

It is unclear whether Eliminalia’s clients knew of the methods it used.

The files were shared with the Guardian by Forbidden Stories, a French nonprofit whose mission is to pursue the work of assassinated, threatened or jailed reporters. It has coordinated a global investigation into disinformation.

Thursday, 16 February 2023

"Team Jorge", il gruppo israeliano che manipola le elezioni in tutto il mondo

Exposé unmasks Israel-led disinformation team that meddled in dozens of elections

Tal Hanan, aka ‘Jorge,’ revealed to have ability to hack accounts of top officials, plus software for quickly creating networks of 30,000 social media bots; he denies wrongdoing


By Michael Bachner, 15 February 2023, 2:38 pm, The Times of Israel

A secretive Israeli team of contractors operating from the central city of Modiin was unveiled Wednesday as a global source of successful disinformation campaigns that has meddled in elections and commercial disputes in dozens of countries around the world.

Tal Hanan, 50, a former special forces operative who goes by the pseudonym “Jorge,” was named as the mastermind behind the Israeli operation, which runs a sophisticated software known as Aims that is capable of hacking social media accounts of senior officials and of easily creating networks of up to 30,000 propaganda bots on social media.

The bombshell revelation was the result of an investigative report by an international consortium of some 30 news outlets, including Israel’s Haaretz and The Marker, along with Forbidden Stories, a French nonprofit that aims to continue the work of assassinated, threatened or imprisoned journalists.

Hanan’s team, known as “Team Jorge,” says it has meddled in 33 presidential-level elections around the world, with successful results in 27 of them, according to The Guardian, one of the 30 investigating news outlets. The exposé only named one of these elections — the 2015 presidential vote in Nigeria — while saying no elections in the United States are known to have been affected.

The report said the Israeli initiative was behind fake campaigns — mostly on commercial disputes — in some 20 countries, including Britain, the US, Canada, Germany, Switzerland, Mexico, Senegal, India and the United Arab Emirates. There was no mention of campaigns in Israel itself. Hanan refused to comment on the allegations, but added that he denies any wrongdoing. His brother Zohar Hanan, the group’s chief executive, said he has always worked in accordance with the law.

Three journalists — from Haaretz, The Marker and Radio France — held a series of video call meetings with Hanan over six months last year, posing as consultants for elements that wanted to delay an election in a large, politically unstable country in Africa. Their work culminated in December, when the undercover reporters met an apparently unsuspecting Hanan in person in his unmarked Modiin offices, filming him while he boasted of his team’s capabilities. “We are now involved in one election in Africa… We have a team in Greece and a team in [the] Emirates… You follow the leads,” said Hanan during the meeting, which was attended by four of his colleagues. He also claimed involvement in two “major projects” in the US, while saying he didn’t deal directly with US politics.

Hanan described his teammates as experts in finance, social media, campaigns and “psychological warfare,” saying they were “graduates of government agencies.” He demonstrated the capabilities of his software, quickly picking a name, gender, pictures and other background information for a fake social media avatar that had interlinked accounts on several platforms, designed to look genuine to unsuspecting internet users. It wasn’t clear where the photos for the bots had been taken, although the investigation revealed some instances when photos were stolen from real people’s accounts.

Hanan also showed the reporters his “blogger machine,” an automated system that creates authentic-looking websites posting false information that could then be used by the bots to spread the fake news. “After you’ve created credibility, what do you do? Then you can manipulate,” he said. Hanan also demonstrated his team’s ability to hack the social media accounts of high-ranking officials in targeted countries, retrieving information from the Gmail account of a senior Kenyan election official and posting a message from the Telegram account of a Kenyan political strategist.

“One of the biggest things is to put sticks between the right people, you understand? And I can write him what I think about his wife, or what I think about his last speech, or I can tell him that I promised him to be my next chief of staff, okay?” said Hanan. Hanan hinted that the hacking methods involved exploiting known vulnerabilities in the global signaling telecommunications system, known as SS7, long regarded by experts as a weak spot. While Hanan said he would charge 6 million to 15 million euros ($6.5 million-$16 million) for his services, the report cited leaked emails from several years ago detailing far lower fees, from $160,000 for involvement in a campaign in a Latin American country to $400,000-600,000 for a campaign in Kenya.

There was no evidence either of those deals had been accepted. Hanan refused to disclose his name during the meeting, but he left enough clues to enable the journalists to uncover his identity, with the final piece of evidence coming from a leaked Cambridge Analytica email exposed in the massive leak of the now-defunct British consultancy, which had previously collaborated with Hanan.

At least some of Hanan’s operations, according to the investigation, had been run via Israeli company Demoman International, which is listed on a Defense Ministry website as a firm that advances defense exports. The Defense Ministry declined to comment.

Israel has already come under diplomatic pressure to clamp down on its growing shadowy industry of cyberespionage, with several companies — led by the notorious NSO Group — accused of helping autocratic regimes around the world crack down on human rights and target political rivals.

Original link: https://www.timesofisrael.com/expose-unmasks-israel-led-disinformation-team-that-meddled-in-dozens-of-elections/

Tuesday, 27 December 2022

Toka, the software that manipulates video-surveillance footage

Meet Toka, the Israeli cyber firm founded by Ehud Barak, that lets clients hack cameras and change their feeds – just like in Hollywood heist movies


Omer Benjakob, Haaretz, December 26th, 10:13 AM IST


On January 10, 2010, Hamas’ point man with the Iranians, Mahmoud al-Mabhouh, was assassinated in Dubai. A month later, the local police force stunned the world – and Israel – by painstakingly piecing together hours of closed-circuit TV footage. The videos were combed to trace the 30 Mossad assassins’ steps and reveal their faces.

If Israel’s espionage agency had had the technology currently being provided by the Israeli cyberfirm Toka 12 years ago, it’s likely the hit squad would never have been identified.

Toka was co-founded by former Israeli premier Ehud Barak together with the former Israel Defense Forces cyber chief Brig. Gen. (ret.) Yaron Rosen – and its capabilities are being revealed here for the first time.

The company sells technologies that allow clients to locate security cameras or even webcams within a given perimeter, hack into them, watch their live feed and even alter it – and past recordings – according to internal documents obtained by Haaretz and reviewed by a technical expert. Its activities are regulated by the Israeli Defense Ministry.

It was set up in 2018 and has offices in Tel Aviv and Washington. It works solely with state clients in government, intelligence bodies and law enforcement agencies, almost exclusively – but not just – in the West. According to the internal documents, as of 2021, the company had contracts with Israel valued at $6 million, and had also planned an “expansion of existing deployment” in Israel. Toka did not respond to Haaretz’s queries regarding its activities in Israel.

Niche operator

Cameras play a number of roles with regard to national security and defense.

Last month, Iranian hackers leaked footage of the deadly terrorist bombing that had taken place at a bus stop in Jerusalem the previous day. It was lifted from one of many security cameras installed by an Israeli security agency for surveillance purposes. According to the Israeli state broadcaster, Iran gained access to that camera a year ago. Toka’s product is intended for such scenarios, and much more: hacking into a camera network, monitoring its live feed and accessing its archive, and altering them – all without leaving any forensic trace.

While Israeli cyberoffense firms like the NSO Group or Candiru offer bespoke tech that can hack into popular devices such as smartphones and computers, Toka is much more niche, a cyber industry source explained. The firm links the worlds of cyberoffense, active intelligence and smart surveillance.

As well as co-founders Barak and Rosen, the company is run by two CEOs from the world of cyberdefense: Alon Kantor and Kfir Waldman. Among the firm’s backers is the venture capital firm Andreessen Horowitz, an early investor in Facebook (its co-founder Marc Andreessen still has a seat on the Meta board; Meta is currently suing Israeli spyware maker NSO Group).

According to a company pitch deck obtained by Haaretz, Toka offers what it terms “previously out-of-reach capabilities” that “transform untapped IoT sensors into intelligence sources,” and can be used “for intelligence and operational needs.” (IoT stands for Internet of Things and refers here to web-connected cameras and even car media systems.)

Toka, per the documents, offers tools that allow clients to “discover and access security and smart cameras,” survey a “targeted area” and “stream and control cameras” within it over time, and target cars, to “wirelessly” provide “access” and extract what Toka terms “car forensics and intelligence” – in other words, the geolocation of vehicles.

The services are bundled together and Toka clients, the documents boast, will be able to gather visual intelligence from both “live or recorded videos.” They can even “alter feeds” of “audio and visual” recordings to allow “masking of on-site activities” during “covert operations.”

Security and web cameras have mushroomed in recent years and can be found everywhere: traffic intersections, street corners, malls, parking lots, hotels, airports and even our homes – from baby monitors to smart door buzzers. In order to broadcast a live feed that we can access via our phones or desktops, these cameras must connect somehow to the internet.

Toka’s system taps into these cameras and the different systems supporting them. This can be used for both operational and intelligence needs. For example, during a terror attack, a police force using the technology can remotely track the movement of fleeing terrorists across the city. It also allows covert collection and altering of visual data, which can be invaluable for military ops or criminal investigations.

Dystopian tech

In the 2001 heist movie “Ocean’s Eleven,” the elite crew led by George Clooney and Brad Pitt hack the closed-circuit TV system of the Las Vegas casino vault they are trying to break into, diverting its feed to a mock safe they built in a nearby warehouse. The casino security teams are effectively blind, giving the suave thieves time to crack the safe.