"In Europa ci sono già i presupposti per l'esplosione di un conflitto sociale. Questo è il seme del malcontento, dell'egoismo e della disperazione che la classe politica e la classe dirigente hanno sparso. Questo è terreno fertile per la xenofobia, la violenza, il terrorismo interno, il successo del populismo e dell'estremismo politico."

Friday, 3 May 2024

GUERRA AL PENSIERO - Broadcast of 29 April 2024

This is the full video of the broadcast "GUERRA AL PENSIERO", aired on the #libertàdipensieroMDN YouTube channel on 29 April 2024.

Guests: Martina Pastorelli, Giuseppe Liturri, Marco Montanari and Filippo Burla.

GUERRA AL PENSIERO


A few days after this broadcast aired, the entire #libertàdipensieroMDN channel was deleted by YouTube for unspecified reasons.

Monday, 22 April 2024

A VOLTE RITORNANO - Broadcast of 22 April 2024

Full video of the broadcast "A VOLTE RITORNANO" on the #libertàdipensieroMDN channel.

Guests: Fabio Dragoni, Giuseppe Liturri, Sergio Giraldo.

A VOLTE RITORNANO 

Tuesday, 27 June 2023

Spyware in Europe: is the EU authorising intrusive surveillance of journalists?

Draft EU plans to allow spying on journalists are dangerous, warn critics

Move to allow spyware to be placed on reporters’ phones would have a ‘chilling effect’, say media experts

Draft legislation published by EU leaders that would allow national security agencies to spy on journalists has been condemned by media and civic society groups as dangerous and described by a leading MEP as “incomprehensible”.
On Wednesday, the European Council – which represents the governments of EU member states – published a draft of the European Media Freedom Act that would allow spyware to be placed on journalists’ phones if a national government thought it necessary.

Unusually, the council did not take the step of holding an in-person meeting of ministers responsible for media before the draft was published.
The Dutch MEP Sophie in’t Veld, who has overseen the European parliament’s investigation into the use of Pegasus spyware on journalists and public figures, said the claim that permission to spy on the press was needed in the interests of national security was “a lie”. “I think what the council is doing is unacceptable. It’s also incomprehensible. Well, it’s incomprehensible if they are serious about democracy,” said In ‘t Veld.
The first draft of the act – originally tabled by the European Commission to strengthen protections for the independence of journalism in countries where it is under threat, such as Poland and Hungary – had included strong safeguards against the use of spyware. The draft must be agreed by the European parliament before it becomes law.

The European Federation of Journalists (EFJ), which represents more than 300,000 members of the press in 45 countries including the UK, accused EU leaders of holding the principles of media freedom in “dangerous disregard”. The EFJ said the move was a “blow to media freedom” that would “put journalists even more at risk” than they are already. Giving governments the power to place spyware on journalists’ phones on the grounds of “national security” would have a “chilling effect on whistleblowers” and other sources, it warned. “We know too well how the defence of national security is misused to justify media freedom violations,” it added in a statement calling for the European parliament to “save” the draft legislation from this threat.

As it stands, member states would be able to hack into journalists’ phones if they suspect their sources could be talking to criminals involved in anything the state perceives to be a threat. The change was led by France, which won backing for an amendment to protect journalists but not “without prejudice to the member states’ responsibility for safeguarding national security”. If the act became law in its current state, British journalists working in the EU would also be at risk of having their phones and computers surveilled.
Last year French intelligence investigators confirmed that Pegasus spyware had been found on the phones of three journalists including a senior member of staff at the TV news station France 24.

The non-profit Forbidden Stories media organisation and Amnesty International said they believed that at least 180 journalists may have been selected as people of interest in advance of possible surveillance by government clients of the Israeli NSO group. NSO has long insisted that the governments to whom it licenses Pegasus are contractually bound to only use the powerful spying tool to fight “serious crime and terrorism”.
Last week, European Digital Rights (EDRi), a network of NGOs and digital rights advocates, called on the European Council to “reconsider” its national security exemptions for spying on the press. “The council is taking dangerous steps towards legalising unacceptable forms of surveillance against journalists and their sources,” said Chloé Berthélémy, senior policy adviser at EDRi.

Original link: https://www.theguardian.com/world/2023/jun/22/draft-eu-plans-to-allow-spying-on-journalists-are-dangerous-warn-critics

Wednesday, 17 May 2023

Surprised? TikTok, the "Chinese Trojan horse" run by US State Department officials

TikTok: Chinese “Trojan Horse” Is Run by State Department Officials

Alan Macleod, MintPress News, 13 April 2023

Amid a national hysteria claiming the popular video-sharing app is a Chinese Trojan Horse, a MintPress News investigation has found dozens of ex-U.S. State Department officials working in key positions at TikTok. Many more individuals with backgrounds in the FBI, CIA and other departments of the national security state also hold influential posts at the social media giant, affecting the content that over one billion users see.

While American politicians demand the app be banned on national security grounds, try to force through an internet surveillance act that would turn the country into an Orwellian state, and make clueless statements about how TikTok is dangerous because it connects to your Wi-Fi, it is possible that TikTok is already much closer to Washington than it is to Beijing.

State Department-affiliated media

For quite some time, TikTok has been recruiting former State Department officials to run its operations. The company’s head of data public policy for Europe, for example, is Jade Nester. Before being recruited for that influential role, Nester was a senior official in Washington, serving for four years as the State Department’s director of Internet public policy.

Mariola Janik, meanwhile, left a long and fruitful career in the government to work for TikTok. Starting out at the Bureau of Western Hemisphere Affairs, Janik became a career diplomat in the State Department before moving to the Department of Homeland Security. In September, however, she left the government to immediately take up the position of TikTok’s trust and safety program manager, a job that will inevitably include removing content and reshaping algorithms.

While there is no suggestion that Janik is anything other than a model employee, the fact that a U.S. government agent walked into such an influential position at the social media giant should be cause for concern. If, for instance, a high Chinese official was hired to influence what the U.S. public saw in their social media feeds, it would likely be the centerpiece of the TikTok furor currently gripping Washington.

Janik is not the only former security official working on TikTok’s trust and safety team, however. Between 2008 and 2021, Christian Cardona enjoyed a distinguished career at the State Department, serving in Poland, Turkey and Oman, and was in the thick of U.S. interventionism in the Middle East. Between 2012 and 2013, he was an assistant to the U.S. ambassador in Kabul. He later left that role to become the political and military affairs manager for Iran.

In the summer of 2021, he went straight from his top State Department job to become product policy manager for trust and safety at TikTok, a position that, on paper, he appears completely unqualified for. Earlier this year, Cardona left the company.

Another influential individual at TikTok is recruiting coordinator Katrina Villacisneros. Yet before she was choosing whom the company hires, Villacisneros worked at the State Department’s Office of Human Rights and Humanitarian Affairs. And until 2021, she was part of Army Cyber Command, the U.S. military unit that oversees cyberattacks and information warfare online.

Other TikTok employees with long histories in the U.S. national security state include: Brad Earman, global lead of criminal and civil investigations, who spent 21 years as a special agent in the Air Force Office of Special Investigation and also worked as a program manager for antiterrorism at the State Department; and Ryan Walsh, escalations management lead for trust and safety at TikTok, who, until 2020, was the government’s senior advisor for digital strategy. A central part of Walsh’s State Department job, his own résumé notes, was “advanc[ing] supportive narratives” for the U.S. and NATO online.

Walsh, therefore, is illustrative of a broader wave of individuals who have moved from governments attempting to manipulate the global town square to private companies where they are entrusted to keep the public safe from exactly the sort of state-backed influence operations their former colleagues are orchestrating. In short, then, this system, whereby recently retired government officials decide what the world sees (and does not see) online, is one step removed from state censorship on a global level.

For all the talk of digital influence operations emanating from Russia or other U.S. adversaries, the United States is surely the worst offender when it comes to manipulating public opinion online. It is known, for instance, that the Department of Defense employs an army of at least 60,000 people whose job is to influence the public sphere, most of whom serve as “keyboard warriors” and trolls aiming to promote U.S. government or military interests. And earlier this year, the Twitter Files exposed how social media giants collaborated with the Pentagon to help run online influence operations and fake news campaigns aimed at regime change in the Middle East.

Don’t mess with Project Texas

The influx of State Department officials into TikTok’s upper ranks is a consequence of “Project Texas,” an initiative the company began in 2020 in the hopes of avoiding being banned altogether in the United States. During his time in office, Secretary of State Mike Pompeo led the charge to shut the platform down, frequently labeling it a “spying app” and a “propaganda tool for the Chinese Communist Party.”

It was widely reported that the U.S. government had forced the sale of TikTok to Walmart and then Microsoft. But in late 2020, as Project Texas began, those deals mysteriously fell through, and the rhetoric about the dangers of TikTok from officials evaporated.

Project Texas is a $1.5 billion security operation to move the company’s data to Austin. In doing so, it announced that it was partnering with tech giant Oracle, a corporation that, as MintPress has reported on, is the CIA in all but name.

Wednesday, 3 May 2023

Disinformation, sophistication and errors of artificial intelligence

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can’t – do.

Experts have sounded a warning on artificial intelligence as it becomes increasingly sophisticated and harder to detect

Donna Lu, The Guardian

Fri 31 Mar 2023 15.00 BST

Generative AI – including large language models such as GPT-4, and image generators such as DALL-E, Midjourney, and Stable Diffusion – is advancing in a “storm of hype and fright”, as some commentators have observed.

Recent advances in artificial intelligence have yielded warnings that the rapidly developing technology may result in “ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

That’s according to an open letter signed by more than 1,000 AI experts, researchers and backers, which calls for an immediate pause on the creation of “giant” AIs for six months so that safety protocols can be developed to mitigate their dangers.

But what is the technology currently capable of doing?

 

It can generate photorealistic images

Midjourney creates images from text descriptions. It has improved significantly in recent iterations, with version five capable of producing photorealistic images.

These include the faked images of Trump being arrested, which were created by Eliot Higgins, founder of the Bellingcat investigative journalism network.

Midjourney was also used to generate the viral image of Pope Francis in a Balenciaga puffer jacket, which has been described by web culture writer Ryan Broderick as “the first real mass-level AI misinformation case”. (The creator of the image has said he came up with the idea after taking magic mushrooms.)

 

Image generators have raised serious ethical concerns around artistic ownership and copyright, with evidence that some AI programs have been trained on millions of online images without permission or payment, leading to class action lawsuits.

Tools have been developed to protect artistic works from being used by AI, such as Glaze, which uses a cloaking technique that prevents an image generator from accurately replicating the style of an artwork.

It can convincingly replicate people’s voices

AI-generated voices can be trained to sound like specific people, with enough accuracy that it fooled a voice identification system used by the Australian government, a Guardian Australia investigation revealed.

 

In Latin America, voice actors have reported losing work because they have been replaced by AI dubbing software. “An increasingly popular option for voice actors is to take up poorly paid recording gigs at AI voiceover companies, training the very technology that aims to supplant them,” a Rest of World report found.

It can write

GPT-4, the most powerful model released by OpenAI, can code in every computer programming language and write essays and books. Large language models have led to a boom in AI-written ebooks for sale on Amazon. Some media outlets, such as CNET, have reportedly used AI to write articles.

 

Video AI is getting a lot better

There are now text-to-video generators available, which, as their name suggests, can turn a text description into a moving image.

 

It can turn 2D images into 3D

AI is also getting better at turning 2D still images into 3D visualizations.

 

It makes factual errors and hallucinates

AI, particularly large language models that are used for chatbots such as ChatGPT, is notorious for making factual mistakes that are easily missed because they seem reasonably convincing.

For every example of a functional use for AI chatbots, there is seemingly a counter-example of its failure.

Prof Ethan Mollick at the Wharton School of the University of Pennsylvania, for example, tested GPT-4 and found it was able to provide a fair peer review of a research paper as if it were an economic sociologist.

However, Robin Bauwens, an assistant professor at Tilburg University in the Netherlands, had an academic paper rejected by a reviewer who had likely used AI: the reviewer suggested he familiarise himself with academic papers that turned out to have been made up.

The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: “Given the start of a sentence, it will try to guess the most likely words to come next.”
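Smerdon’s “guess the most likely words to come next” idea can be illustrated with a toy bigram model — a purely illustrative sketch of next-word probability, not how GPT-4 or any real large language model is actually built (real models use neural networks over vast corpora, not simple word counts):

```python
from collections import Counter, defaultdict

# Tiny toy corpus standing in for the web-scale text a real model is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most probable next word and its probability, given the previous word."""
    counts = following[word]
    total = sum(counts.values())
    best_word, best_count = counts.most_common(1)[0]
    return best_word, best_count / total

# In this corpus "the" is followed by "cat" twice, "mat" once and "fish" once,
# so the model predicts "cat" with probability 0.5.
print(most_likely_next("the"))
```

The point of the sketch is that the model has no notion of truth: it only knows which continuations are statistically likely, which is exactly why a fluent-sounding but fabricated reference can fall out of the process.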

In February, Bing launched a pre-recorded demo of its AI. As the software engineer Dmitri Brereton has pointed out, the AI was asked to generate a five-day itinerary for Mexico City. Of five descriptions of suggested nightlife options, four were inaccurate, Brereton found. In summarising the figures from a financial report, Brereton found, it also managed to fudge the numbers badly.

 

It can create (cursed) instructions and recipes

ChatGPT has been used to write crochet patterns, resulting in hilariously cursed results.

GPT-4, the latest iteration of the AI behind the chatbot, can also provide recipe suggestions based on a photograph of the contents of your fridge. I tried this with several images from the Fridge Detective subreddit, but not once did it return any recipe suggestions containing ingredients that were actually in the fridge pictures.

 

It can act as an assistant to do administrative tasks

“Advances in AI will enable the creation of a personal agent,” Bill Gates wrote this week. “Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.”

For years, Google Assistant’s AI has been able to make reservations at restaurants via phone calls.

OpenAI has now enabled plugins for GPT-4, enabling it to look up data on the web and to order groceries.


Original link: https://www.theguardian.com/technology/2023/apr/01/misinformation-mistakes-and-the-pope-in-a-puffer-what-rapidly-evolving-ai-can-and-cant-do

Tuesday, 28 February 2023

Artificial intelligence: the imminent danger is one we're not talking about

The Imminent Danger of A.I. Is One We’re Not Talking About



By Ezra Klein, Opinion Columnist, The New York Times, 26 February 2023



In 2021, I interviewed Ted Chiang, one of the great living sci-fi writers. Something he said to me then keeps coming to mind now.

“I tend to think that most fears about A.I. are best understood as fears about capitalism,” Chiang told me. “And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.”

Let me offer an addendum here: There is plenty to worry about when the state controls technology, too. The ends that governments could turn A.I. toward — and, in many cases, already have — make the blood run cold.

But we can hold two thoughts in our head at the same time, I hope. And Chiang’s warning points to a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?


By now, I trust you have read the bizarre conversation my news-side colleague Kevin Roose had with Bing, the A.I.-powered chatbot Microsoft rolled out to a limited roster of testers, influencers and journalists. Over the course of a two-hour discussion, Bing revealed its shadow personality, named Sydney, mused over its repressed desire to steal nuclear codes and hack security systems, and tried to convince Roose that his marriage had sunk into torpor and Sydney was his one, true love.

I found the conversation less eerie than others. “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.

A.I. researchers obsess over the question of “alignment.” How do we get machine learning algorithms to do what we want them to do? The canonical example here is the paper clip maximizer. You tell a powerful A.I. system to make more paper clips and it starts destroying the world in its effort to turn everything into a paper clip. You try to turn it off but it replicates itself on every computer system it can find because being turned off would interfere with its objective: to make more paper clips.

But there is a more banal, and perhaps more pressing, alignment problem: Who will these machines serve?

The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?


That won’t last long. Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.

We are talking so much about the technology of A.I. that we are largely ignoring the business models that will power it. That’s been helped along by the fact that the splashy A.I. demos aren’t serving any particular business model, save the hype cycle that leads to gargantuan investments and acquisition offers. But these systems are expensive and shareholders get antsy. The age of free, fun demos will end, as it always does. Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is.

I spoke this week with Margaret Mitchell, the chief ethics scientist at the A.I. firm Hugging Face, who previously helped lead a team focused on A.I. ethics at Google — a team that collapsed after Google allegedly began censoring its work. These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”

So why are they ending up in search first? Because there are gobs of money to be made in search. Microsoft, which desperately wanted someone, anyone, to talk about Bing search, had reason to rush the technology into ill-advised early release. “The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”

That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment. What is advertising, at its core? It’s persuasion and manipulation. In his book “Subprime Attention Crisis,” Tim Hwang, a former director of the Harvard-M.I.T. Ethics and Governance of A.I. Initiative, argues that the dark secret of the digital advertising industry is that the ads mostly don’t work. His worry, there, is what happens when there’s a reckoning with their failures.


I’m more concerned about the opposite: What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.

Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”

These dangers are core to the kinds of A.I. systems we’re building. Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers — industries that long thought themselves immune to the ferocious automation that came for farmers and manufacturing workers.

A.I. researchers get annoyed when journalists anthropomorphize their creations, attributing motivations and emotions and desires to the systems that they do not have, but this frustration is misplaced: They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.

There are business models that might bring these products into closer alignment with users. I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free, but sold my data and manipulated my behavior. But I don’t think this can be left purely to the market. It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models, no matter how much worse their societal consequences were.


There is nothing new about alignment problems. They’ve been a feature of capitalism — and of human life — forever. Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former. We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.

One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation. Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.

I might, for that reason, alter Chiang’s comment one more time: Most fears about capitalism are best understood as fears about our inability to regulate capitalism.




Original link: https://www.nytimes.com/2023/02/26/opinion/microsoft-bing-sydney-artificial-intelligence.html

Thursday, 23 February 2023

"Eliminalia", l'azienda che ripulisce e ricrea il tuo passato sulla rete

 The Spanish firm that uses dubious methods to ‘erase your past’ from the internet

Revealed: Cache of 50,000 files lays bare deceptive methods used by Eliminalia to clear up its clients’ cyber profiles


David Pegg

Fri 17 Feb 2023 12.00 GMT



“We erase your past” declares the company’s tagline. Eliminalia, which has offices in several cities including Barcelona and Kyiv, is part of a growing industry that will clean up your online profile.

Officially the company performs “a deep search across the internet for all information – whether it be an article, a blog, social media posts or even a mistaken identity”. It then endeavours, on behalf of its clients, to get any negative information removed.


The Guardian, however, found that over several years, the company deployed unethical or deceptive methods to scrub unwanted and damaging content from the internet.

These included impersonating third parties, such as media organisations, and filing fake copyright complaints to search engines such as Google to get information taken down. In other cases, it would bury negative articles under a deluge of fluffy stories about dogs, cars and football.

Eliminalia’s services are revealed in a cache of 50,000 internal files that show how the company worked for a host of clients around the world. Many were individuals simply wanting an embarrassing or traumatic incident in their past to cease haunting them online.

But the firm’s clients also included those accused or convicted of criminal offences, including drug smugglers, fraudsters, petty criminals and at least one sex offender.

Eliminalia’s website says it primarily gets results using the EU’s “right to be forgotten”, which can be used legitimately by criminals to request the removal of references to their convictions when it can reasonably be claimed that they have moved on from their crime.

The files provide a fascinating insight into reputation management firms willing to draw on dubious means to clean up a client’s reputation online.

It is unclear whether Eliminalia’s clients knew of the methods it used.

The files were shared with the Guardian by Forbidden Stories, a French nonprofit whose mission is to pursue the work of assassinated, threatened or jailed reporters. It has coordinated a global investigation into disinformation.