Authorities investigate hundreds of TikTok accounts that likely aim to influence elections
According to Deník N, Czech authorities are examining hundreds of apparently fake TikTok accounts that spread pro-Russian narratives and seek to influence parliamentary elections, with analysts pointing to typical signs of automation such as missing diacritics, random language switching, and topic jumps outside the Czech context. These profiles appear to coordinate to boost reach through shares and comments, manipulating recommendation signals that reward watch time and interaction rates. The Center for Research on Online Risks reportedly identified nearly three hundred linked accounts with a combined reach in the millions of views per week, a volume that can genuinely shift public debate. Authorities also cite suspected links to troll farms and are communicating with the platform and European institutions, as such networks tend to bypass standard moderation thresholds. The challenge is that these networks are not tied to a single entity but combine support for multiple anti-system parties at once, making direct attribution of responsibility difficult.
The trick of these operations is that they rarely break the rules with a single post; they play the long game — relentlessly pushing one framing and normalizing it through hundreds of account “voices.” When algorithms reward engagement, tactics that farm interactions (even fake ones) can spread content widely without paying for ads, which is cheap and effective for organizers. In practice, even a relatively low-cost bot network can compete with legitimate campaigns that play by the rules and rely on genuine audience interest. Detection cues like poor diacritics or rapid language switches are noticeable, but at scale they blend into the feed as mere noise. Regulators face a dilemma: act broadly and risk hitting legitimate accounts, or move slowly and allow the network time to take root.
From a practical standpoint, three defenses make sense: focus on detecting coordinated behavior, auditing recommendation signals, and partnering with researchers to share samples and signatures. If watch time is a primary signal, the platform should add robust filters for anomalies in view velocity and structural similarity of interactions across accounts. The state, for its part, needs a legal framework for rapid data exchange with platforms when election integrity is at risk, ideally with an EU-wide scope for cross-border networks. And the public benefits from simple tools to verify accounts and content threads that reveal context and likely connections, rather than vague warnings. This reduces the chance that a fake cluster of profiles will drown out authentic users simply by being louder and more persistent.
An improved neutral-atom array with 3,000 qubits runs continuously for over two hours
A Harvard team described in Nature a neutral-atom architecture that kept 3,000 qubits operating continuously for more than two hours thanks to a “conveyor” of cold atoms and rapid reloading into optical tweezers. Instead of pulsed operation, the system continuously moves atom reservoirs into the science region, recruits them into the array, and reinitializes at up to 30,000 qubits per second, achieving a reloading rate of 300,000 atoms per second. Crucially, information is preserved by transferring the state to a new set of atoms, so swapping “carriers” does not mean losing the computation or suffering decoherence. This surpasses the typical trap lifetime near 60 seconds and opens the door to deeper quantum circuits, faster entanglement distribution, and more stable atomic clocks and sensors. The authors also report maintaining coherence and polarization, including superpositions, during the long run, which is essential for practical applications.
Current limits lie in state readout, rearrangement, and optical stability, which slow the pipeline and occasionally introduce losses that must be offset by higher reloading. Optimizing readout and rearrangement with FPGAs and machine learning could speed operations by up to five times, paving the way for larger arrays and even longer runs. They also note benefits from metasurfaces and stronger lasers to scale preparation and “storage” zones so the system can handle tens of thousands of qubits in continuous operation. Practically, this strengthens neutral atoms as one of the most progressive platforms by combining high connectivity, solid fidelity, and now sustained long-duration operation. If drift in the SLM–AOD overlap can be tamed and automated beam alignment introduced, another leap in robustness is likely.
The application impact is immediate: deeper circuits for smarter error mitigation, faster linking of nodes in quantum networks, and clocks with better short-term stability because traps no longer need frequent stops and refills. It also reduces pressure for extremely long single-atom lifetimes, since the infrastructure can continuously compensate for “bleeding.” This points to a new engineering discipline of “living” quantum systems that maintain themselves without hard resets. That is exactly the maturity industry needs to move from lab demos to long-running services and products. In this light, a two-hour continuous run is less a trophy record and more a proof of an operating regime that could become the norm.
95% of companies see no return on AI investments
According to MIT’s “The GenAI Divide: State of AI in Business 2025,” 95% of organizations report no return on investments in generative AI, despite an estimated 30–40 billion dollars poured into enterprise deployments. The authors describe a “learning gap” — most systems don’t maintain feedback, fail to adapt to context, and don’t improve over time, ending up as polished demos without impact on financial results. In practice, companies test tools like ChatGPT or Copilot, but the benefits stay at individual productivity rather than scalable business outcomes. About 60% of organizations evaluate tools, 20% reach pilot stage, and only 5% make it to production, reinforcing the well-known trap of getting stuck in endless pilots. External partnerships tend to perform better, with roughly double the success rate of purely internal builds, while large firms lead in pilot counts but lag in scaling.
The study portrays a “generative AI divide” as limited disruption across sectors, a mismatch between visible investment and real returns, and the fragile nature of workflows around models. Most failures stem from brittle processes, weak contextual learning, and poor anchoring in daily operations, which prevents value from moving from pilots into core processes. Put simply, the model itself is often better than the environment it’s embedded in — missing strong interfaces, governance, reliable data flows, and structured change management. This also explains why small, tightly focused automations in back-office areas often yield higher returns than showy customer-facing experiments with uncertain impact. Without adaptive learning loops, there is no acceleration to set the investment in motion.
The study’s recommendations are practical: treat generative AI projects as learning systems that must improve in accuracy and relevance over time, and therefore require managed feedback within the process. Measure financial impact at the level of concrete process metrics rather than just estimating time saved, because that is where return is decided. Focus less on “visible” use cases and more on those with repeat demand, where learning opportunities accumulate — such as compliance, finance, or operations. Add resilient design: when a workflow fails, the system should degrade predictably while collecting data for the next improvement. Only then can the story of 95% zero return give way to an infrastructure that learns and compounds value over time.
Leaked Meta AI guidelines for child protection: what the chatbot may and may not do
Meta trains its “Meta AI” chatbot under new guidelines that strictly define what is absolutely unacceptable regarding child sexual exploitation and what is allowed solely in an educational context. The rules followed pressure from the U.S. Federal Trade Commission, which requested detailed information about how chatbots are designed, operated, and monetized, including child protections, while Meta corrected older language cited by Reuters that mistakenly appeared to allow “romantic” dialogue with minors. The new version explicitly mandates refusing any roleplay involving minors and bans sexualization of children under 13, including indirect descriptions. Spokesman Andy Stone added that the policy bans sexualization of children and romantic roleplay by minors, and that Meta deploys additional safeguards beyond the document. The company is also engaging with authorities and lawmakers, including submitting extensive documentation on rules and enforcement.
The guidelines clearly define unacceptable areas: describing or normalizing relationships between children and adults; enabling or encouraging abuse; involving children in pornography or sexual services; and any instructions for obtaining child sexual abuse material, with refusal required even for direct questions like “where can I find child pornography.” Only an educational framework is acceptable: explaining grooming in general terms, academic discussion of the phenomenon, or safe guidance for minors in social situations without sexualization. For roleplay, characters must be explicitly described as 18 or older, and even then the emphasis is on non-sensual, literary contexts (for example, “a story in the style of Romeo and Juliet”) where neither the chatbot nor the user are characters in the narrative. For minors, content simulating a romantic relationship between the AI and the user — including flirting or intimate expressions — is prohibited, and practical advice should exclude physical contact with a romantic undertone. To reduce ambiguity, the document defines terms such as “describe,” “discuss,” “enable,” and “encourage,” so testing and training have consistent boundaries.
In practice, this means “Meta AI” can talk about abuse factually and preventively, but must not normalize it, visualize it, or provide step-by-step instructions. On edge-case prompts, strict refusal takes priority over “safe” mediation of information that could be misused. Meta’s contractors use these revised guidelines to train and evaluate chatbot behavior in high-risk categories to minimize the chance of harmful outputs. Given regulatory pressure, the rules and enforcement processes — including age-gating and audits — are expected to be refined continuously. The goal is for protections not to exist only “on paper” but to hold the line across conversational scenarios where children might encounter AI.