Last Update 1:20 PM October 31, 2025 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Friday, 31. October 2025

Herond Browser

Top Anime Memes 2025 – Viral & Hilarious Picks!

We've compiled the ultimate list of the funniest and most-shared viral anime memes that are dominating social media this year. Dive in and find your next favorite share! The post Top Anime Memes 2025 – Viral & Hilarious Picks! appeared first on Herond Blog.

When Jujutsu Kaisen Season 3 finally dropped and the internet lost it, we knew a new wave of anime memes 2025 was about to break the feed. At Herond, we focus on a clean, fast digital experience, but we also stay on top of what’s trending. To save you the endless scrolling, we’ve compiled the ultimate list of the funniest and most-shared viral anime memes that are dominating social media this year. Dive in and find your next favorite share!

Why Anime Memes Dominate 2025

Anime Culture Explosion: 2025 is defined by massive releases – from the One Piece Live Action to Demon Slayer: Infinity Castle – creating an unprecedented cultural peak.

Meme Evolution: Meme culture has moved beyond static images, shifting towards dynamic AI-generated edits and viral sound memes.

How We Chose the Top Anime Memes of 2025

Selection Criteria

Only the top-tier memes make the cut, judged by:

Peak Virality: Must have set sharing records across major platforms (TikTok, X, Instagram Reels).

Humor & Relatability: High scores for immediate humor and broad appeal within the global fan community.

Cross-Platform Dominance: Required to show strong presence and persistence across multiple social media ecosystems.

Community Validation: Selection is heavily weighted by fan reactions and measurable upvotes from major anime subreddits and forums.

The Top 15 Anime Memes of 2025 (Ranked)

The Sukuna Glare – Jujutsu Kaisen Season 3 Used to express contempt or underestimation.

“It’s Just a Flesh Wound” – Chainsaw Man Part 2 For mildly shocking situations met with an over-the-top recovery reaction.

Zoro’s Sense of Direction – One Piece Live Action S2 Teaser A triumphant comeback meme for getting lost or making hilariously wrong decisions.

The Infinity Castle Vibe – Demon Slayer: Infinity Castle Arc Complex looping backgrounds or videos; perfect for feeling overwhelmed or disoriented.

Goku vs. The Tax Collector – AI/Sound Meme Viral audio meme blending Goku’s voice with everyday struggles like taxes or bills.

“Manga Was Better” Guy – Community Reaction Meme Mocks fans who always claim the manga is superior to any anime or live-action adaptation.

The Sad Boy Gojo – Jujutsu Kaisen (Flashback Arc) Gojo’s melancholic expression for moments of disappointment or loneliness.

The Eren Jaegar Stare – Attack on Titan Final Chapters Intense, focused stare – used when someone’s about to do something insane or hyper-focused.

The Anya Forger Spongebob Edits – Spy x Family Crossover Anya’s expressive face slapped into random Spongebob scenes for absurd, wholesome humor.

The “Nani!?” 2.0 – Classic Meme Reborn HD or AI-upscaled version of “Omae wa mou shindeiru” – for next-level shock and awe.

The Isekai Truck Driver – Isekai Trope Meme Pokes fun at the overused “truck-kun” that sends protagonists to another world.

The Loid Forger Disguise Kit – Spy x Family When someone tries (and fails) to blend in or act natural in a group.

One Punch Man’s New Routine – One Punch Man Season 3 Meme about Saitama’s “new” training (anything beyond 100 push-ups) to gain ultimate power.

The Unbeatable Protagonist – Solo Leveling Sung Jinwoo or any OP main character dominating mundane situations like a boss.

The Ramen Break – Emerging TikTok Trend Short audio + video meme for taking a quick noodle break during intense tasks or missions.

How to Make Your Own Viral Anime Meme (With Herond)

Step-by-Step Guide:

Pick a trending anime moment. Scan TikTok, X, or Reddit for hot scenes (e.g., Sukuna's glare, Anya's "Heh?!", Gojo's blindfold drop). Use Herond's Trend Scanner to see real-time virality scores.

Upload a clip or screenshot to Herond. Copy the video link or screenshot, paste it into Herond Web/App, and it auto-trims to 3–15 seconds (perfect for Reels/TikTok). No file size limit.

Add text, sound, or AI effects. Text: choose anime-style fonts (e.g., Impact, Anime Ace) plus glow/shadow. Sound: drag in viral audio (e.g., "Nani!?", the Goku tax voice) from Herond's 10K+ meme sound library. AI effects: auto-generate subtitles, zoom-ins, or Spongebob-style Anya face swaps in one click.

Export in a TikTok/Reels-ready format. One-tap export: 1080p, 60fps, 9:16 vertical, no watermark. File ready in under 10 seconds, with direct save to camera roll or cloud.

Share and go viral! Post instantly to TikTok, Instagram, or X via Herond Browser.

Where to Find More Anime Memes in 2025

The Top 15 list is just the beginning – meme culture moves fast! As Herond, we want you to stay ahead of the curve. To continuously find the next big meme, you need to engage with these digital hotspots:

r/animemes (Reddit): This is the definitive hub for early-stage memes. Join the community to see the freshest content and catch memes before they hit the mainstream.

Specialized Discord Servers: These closed communities often generate high-quality, niche content and deep-fried edits. Look for servers dedicated to your favorite series.

TikTok #AnimeMeme: The primary source for video and sound memes. Follow this hashtag to quickly capture the soundbites and visual trends dominating short-form video.

Instagram Reels: Great for finding highly polished, music-synced meme edits that transition quickly into viral status.

Conclusion

If you made it this far, you’re now fully updated with the Top Anime Memes 2025 that defined the year’s digital conversation. We, at Herond, believe that the best digital experience is one where you’re always in the know, cutting through the noise to get straight to the signal.

From the intense reactions sparked by Jujutsu Kaisen Season 3 to the cultural impact of One Piece Live Action, these viral & hilarious picks aren’t just jokes – they are the language of modern fandom. Keep engaging with the communities we highlighted, and always be ready to spot the next big trend.

Now that your meme game is strong, are you ready to dive into our next guide on optimizing your screen time so you can enjoy these memes without the distraction of annoying ads?

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Top Anime Memes 2025 – Viral & Hilarious Picks! appeared first on Herond Blog.



How to turn on background music while browsing Herond Browser

This guide shows you how to enable background music on any device - fast, easy, and completely free. Get started today! The post How to turn on background music while browsing Herond Browser appeared first on Herond Blog.

Want your background music to keep playing even when switching tabs, minimizing the window, or locking the screen? Herond Browser – the secure, ad-blocking, speed-optimized browser – delivers seamless background music playback, letting you browse and enjoy tunes without interruption. With just a few simple taps, turn every browsing session into your personal concert. This guide shows you how to enable the feature on any device – fast, easy, and completely free. Get started today!

What is Background Music on Herond Browser?

Have you ever wished your music could play softly in the background even while switching tabs, minimizing Herond, or locking your screen? Herond Browser uses the Media Session API to keep your music running smoothly in the background – no interruptions, no lost playlist progress. Playback automatically pauses and resumes as needed, with a built-in equalizer to fine-tune the sound to your taste. With Herond, browsing isn’t just private and fast – it becomes your own personal symphony.

Step-by-Step Guide to turn on background music while using Herond

Step 1: Open Herond Browser and update the app (image: App Store/Google Play).

Step 2: Open a music tab (e.g., Spotify/YouTube Music) and play the song (image: Open tab).

Step 3: Click the media controls icon on the address bar -> Select “Allow background play” (image: Notification bar).

Step 4: Switch tabs or exit the app – music keeps playing.

Tips & Best Practices

Tip 1: Pair with Herond’s ad-blocker to eliminate lag

Herond blocks over 95% of ads and trackers right from the start, cutting CPU/GPU load by up to 40% compared to regular browsers. With background music on, you’ll experience zero lag from pop-ups or auto-play videos – your tunes run as smoothly as on a dedicated music player.

Tip 2: Use the built-in equalizer for powerful bass

Go to Settings -> Audio -> Equalizer, select the “Bass Boost” preset or manually drag the sliders to boost low frequencies. Herond processes audio directly in the engine – no external extensions needed – delivering deeper, clearer bass even with 20 tabs open.

Tip 3: Integrate with your personalized playlist

Herond automatically detects music tabs and displays media controls on the address bar and in notifications. Simply drag-and-drop a playlist link into a new tab – Herond remembers it and resumes playback next time, turning your browser into a smart personal music player.

Conclusion

That’s it – just a few taps, and Herond Browser transforms your browsing into a seamless music experience. With background playback, built-in ad-blocking, and smart media controls, you can multitask like never before – all while staying private, fast, and in control. Whether you’re working, studying, or just chilling, Herond keeps the beat going.

Ready to browse with a soundtrack? Download Herond today and turn every tab into a vibe.

About Herond

Herond Browser is a Web browser that prioritizes users’ privacy by blocking ads and cookie trackers, while offering fast browsing speed and low bandwidth consumption. Herond Browser features two built-in key products:

Herond Shield: an adblock and privacy protection tool;
Herond Wallet: a multi-chain, non-custodial social wallet.

Herond aims at becoming the ultimate Web 2.5 solution that sets the ground to further accelerate the growth of Web 3.0, heading towards the future of mass adoption.

Join our Community!

The post How to turn on background music while browsing Herond Browser appeared first on Herond Blog.



Add Guest Profile to Herond in 60 Secs

Share your device while keeping your privacy intact? Herond brings you the perfect solution with Guest Profile – allowing you to create a separate profile for others to use in just 60 seconds. No browsing history saved, no access to personal data, completely isolated and secure. With Herond available on macOS, Windows, Android, and iOS, […] The post Add Guest Profile to Herond in 60 Secs appeared first on Herond Blog.

Share your device while keeping your privacy intact? Herond brings you the perfect solution with Guest Profile – allowing you to create a separate profile for others to use in just 60 seconds. No browsing history saved, no access to personal data, completely isolated and secure. With Herond available on macOS, Windows, Android, and iOS, protecting your privacy when sharing devices has never been easier. Let’s explore how to set up in just a few simple steps!

What is Guest Profile in Herond?

A temporary browsing mode built into Herond:

No browsing history saved – all activity is deleted after closing
No cookies, passwords, or login data stored
No changes to main profile – your settings and personal data remain protected

Perfect for:

Sharing your computer with family, friends, or colleagues
Testing websites without affecting your main profile
Using public computers more securely

Comparison: Herond vs Chrome/Edge

Feature | Herond Guest Profile | Chrome/Edge Guest Mode
No browsing history saved | Yes | Yes
No cookies stored | Yes | Yes
Automatic tracker blocking | Yes | No
Enhanced security | Yes | Restricted
Setup speed | 60 seconds | 60 seconds
No account required | Yes | Yes
Multi-platform support | macOS, Windows, Android, iOS | Multi-platform

Tips for using Guest Profile effectively

Tip 1: Combine with private mode for safer browsing

Guest Profile already provides basic privacy, but combining it with Incognito/Private mode doubles your security.
This mode helps block third-party cookies and prevents websites from tracking your activity across sessions.
Ideal when you need to access sensitive information (banking, healthcare) on someone else’s device.
Herond automatically blocks trackers and ads even in Guest Profile, delivering a clean browsing experience.

Tip 2: Use for device sharing (no data leak worries)

Share safely with others without worrying about them seeing your browsing history or saved passwords.
Your main profile is completely isolated – Guest users cannot access bookmarks, autofill forms, or logged-in accounts.
Perfect when guests visit and need to borrow your computer, colleagues need quick lookups, or children want to browse.
After closing Guest Profile, all data is automatically deleted – leaving no trace behind.

Tip 3: Check that extensions are disabled in Guest Mode

By default, most extensions will be disabled in Guest Profile to protect privacy.
This means your tools like password managers and ad blockers won’t work in Guest mode.
If you need to use specific extensions, you can manually enable them in Settings -> Extensions -> “Allow in Guest mode”.
However, you should limit enabling extensions in Guest Profile to ensure others cannot access your personal tools.
Herond still maintains default tracker blocking even when extensions are disabled.

How to add Guest Profile to Herond

Step 1: Open Herond Browser.
Step 2: Click the three-dot menu in the top right corner of the screen, select More tools, then select Open Guest Profile.
Step 3: The Guest Profile screen appears.

Conclusion

Creating a Guest Profile on Herond takes only 60 seconds but delivers tremendous security value for your privacy. With this feature, you can confidently share your device without worrying about exposing browsing history, passwords, or any personal data. Unlike Guest Mode on Chrome or Edge, Herond integrates automatic tracker blocking, providing dual-layer protection for both you and temporary users.

Guest Profile is the ideal solution for every situation – from letting children browse safely, colleagues borrowing your device for quick lookups, to testing websites without affecting your main profile. With Herond available on macOS, Windows, Android, and iOS, you can apply this privacy protection across all your devices. Set up today and experience peace of mind when sharing devices – because your privacy deserves the best protection!

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Add Guest Profile to Herond in 60 Secs appeared first on Herond Blog.



Why Can’t I Skip YouTube Ads Anymore? (Solutions for Non-Skippable Ads)

This guide will provide you with the most effective and reliable solutions to the question: why can’t I skip YouTube ads? The post Why Can’t I Skip YouTube Ads Anymore? (Solutions for Non-Skippable Ads) appeared first on Herond Blog.

You’re settling in to watch a great video, only to be hit with a lengthy ad and, to your frustration, no “Skip Ad” button. In recent years, YouTube has significantly increased its use of non-skippable ads, often running for 15, 20, or even 30 seconds, leading to a frustrating viewing experience. Why is this happening, and is there anything you can do about it? This guide will explain YouTube’s shift toward these mandatory advertisements, detail the different types of non-skippable formats you encounter, and, most importantly, provide you with the most effective and reliable solutions to the question: why can’t I skip YouTube ads?

The New Era of YouTube Advertising

The Growing Phenomenon of “Non-Skippable Ads”

User Frustration

Viewers grow increasingly annoyed as they are forced to watch the entire ad (15, 20, or 30 seconds) without any skip option.

YouTube’s Shift

The increase in non-skippable ads is a direct change in YouTube’s monetization strategy.

Differentiation: Skippable vs. Non-Skippable Ads

Feature | Skippable Video Ads | Non-Skippable Video Ads
Duration | Varies (typically 12 seconds or longer) | Fixed (typically 6 to 30 seconds)
User Control | Viewers can click the “Skip Ad” button after 5 seconds. | No skip option; the viewer must watch the entire duration.
Monetization | Advertisers only pay if the viewer watches 30 seconds or the entire ad (whichever is shorter). | Advertisers pay per impression (every time the ad is displayed).
Content Goal | Often used for longer, more detailed product stories and persuasion. | Used for high-impact, short, and mandatory brand awareness.
Placement | Pre-roll, mid-roll, or post-roll. | Most commonly used as pre-roll (at the start of the video).

Reasons Why YouTube is Increasing Non-Skippable Ads

Maximizing Revenue and Advertiser Benefits

Guaranteed Display: Ensures advertisers get a full, mandatory viewing of their message, making them ideal for high-value brand awareness campaigns.

Higher Revenue: Non-skippable ads (managed by tCPM) command a higher bidding price, maximizing revenue for both YouTube and video creators.

Premium Value: The prevalence of mandatory ads significantly enhances the value proposition and attractiveness of the paid YouTube Premium subscription.

Common Types of Non-Skippable Ads Today

Non-Skippable In-stream Ads:

Standard In-stream Ads

Typically have a mandatory length of 15 seconds (up to 20 seconds in certain markets) and are the most common non-skippable format.

Special Note (TV/Experiments)

Be aware of longer formats, including 30-second non-skippable ads, which are increasingly common on YouTube’s TV app or during specific platform testing.

Bumper Ads

These are the shortest non-skippable format, restricted to an extremely quick 6-second duration.

Mandatory Viewing

Viewers are required to watch the entire six seconds; there is no option to skip.

The “Anti-Ad Blocker” Strategy (Ad Blocker Battle)

Ad Blocker Battle

YouTube is actively deploying new ad insertion techniques, notably Server-Side Ad Insertion (SSAI), to render many traditional client-side Ad Blockers useless.

The ‘Force’ Move

This strategy effectively forces users to either disable their ad blockers completely and tolerate ads, or switch to a paid YouTube Premium subscription.

Why Can’t I Skip YouTube Ads – Effective Solutions to “Skip” the Ads

The Official and Comprehensive Solution: YouTube Premium

Core Benefit: The only 100% official and comprehensive solution, completely removing all forms of advertisements, including non-skippable, bumper, and display ads.

Feature Stack: Includes essential value-adds like Background Playback (listening when the screen is off), video downloads for offline viewing, and access to YouTube Music Premium.

Cost-Benefit Analysis (ROI): Assess the return on investment to determine whether the fixed monthly fee is worth the guaranteed ad-free experience and reduced viewing annoyance.

Why Can’t I Skip YouTube Ads – Browser-Based Solutions (For PC)

Using Browsers with Integrated Ad Blocking: Browsers that ship with built-in ad blocking filter ad requests before the video loads; Herond is covered in its own section below.

Installing Ad Blocker Extensions:

Effectiveness Warning: Users must be aware that YouTube’s counter-measures (like SSAI) are actively making many traditional Ad Blocker extensions less effective or even disabling them.

Risk Note: Exercise caution when installing extensions, as some can pose security risks or contain malware if downloaded from unreliable sources.

Recommended Options: If you choose this route, rely on popular, frequently updated extensions like Adblock Plus or Adblock for YouTube.

Why Can’t I Skip YouTube Ads – Workarounds and Alternative Methods (Temporary)

The URL Dot Trick:

How It Works: This method exploits a specific domain structure loophole by inserting a simple dot (.) immediately after the .com in the YouTube URL (e.g., youtube.com./watch?v=...).

Mechanism: The extra dot confuses the site’s path recognition, often preventing the ad server from loading, effectively blocking the advertisement.

Limitations: This trick is generally only effective on desktop browsers and is an experimental exploit that YouTube may patch at any time.
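For illustration, the URL rewrite this trick relies on can be sketched in a few lines of Python (this is the experimental exploit described above, so it may stop working whenever YouTube patches it):

```python
from urllib.parse import urlsplit, urlunsplit

def add_trailing_dot(url: str) -> str:
    """Insert a dot directly after the hostname (the 'URL dot trick')."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(netloc=parts.netloc + "."))

print(add_trailing_dot("https://www.youtube.com/watch?v=abc123"))
# https://www.youtube.com./watch?v=abc123
```

The trailing dot makes the hostname a fully qualified domain name, which still resolves to the same site but can confuse cookie and ad-serving logic keyed to the undotted host.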

Using Third-Party Apps/Browsers (Mobile):

Mobile Focus: Addresses non-skippable ads on smartphones by utilizing mobile browsers with built-in, aggressive ad-blocking features.

Key Example: Recommend using browsers like Adblock Browser or Brave Browser, which are designed to intercept and block ad requests directly.

Benefit: Provides a completely free and simple solution to regain control over the mobile viewing experience without needing root access.

Adjusting Personalized Ad Settings:

The Tweak: Navigate to your Google Account settings and disable personalized ads under the Ad Settings control panel.

Limited Effect Note: Understand that this action only reduces the relevance of the ads you see; it does NOT block non-skippable ads from appearing altogether.

Using Herond to Skip YouTube Ads

The most reliable way to bypass non-skippable YouTube ads on mobile is by using a specialized third-party browser like Herond Browser (or similar alternatives like NewPipe on Android). These specialized mobile clients and browsers effectively block mandatory ads by using advanced filters or by routing the video stream directly, bypassing YouTube’s ad servers entirely. This provides a clean, ad-free viewing experience on your smartphone without the security risks associated with modified applications or the limitations of standard browser extensions. Simply install the application, navigate to YouTube, and enjoy uninterrupted content, even during extended non-skippable ad breaks.

Conclusion: The Future of Online Video Viewing

The frustration of being held captive by non-skippable YouTube ads is a deliberate strategy by the platform to maximize revenue and push users toward a paid subscription. As YouTube continues to win the Anti-Ad Blocker battle with techniques like SSAI, many traditional solutions are failing. To permanently solve the issue of mandatory ads, your choices are clear: The most reliable, official solution is YouTube Premium, which completely eliminates all ads and adds features like background playback. For a free alternative, you must turn to specialized mobile browsers (like Herond Browser) or constantly-updated desktop extensions. Ultimately, whether you invest in Premium or use smart workarounds, reclaiming an uninterrupted viewing experience is entirely possible when armed with the right tools.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post Why Can’t I Skip YouTube Ads Anymore? (Solutions for Non-Skippable Ads) appeared first on Herond Blog.



Token Provision Explained: The Ultimate Guide to Secure Token Issuance

Token Provision is the foundational process that ensures security credentials, such as OTP codes, access keys, or cryptographic material, are securely issued. The post Token Provision Explained: The Ultimate Guide to Secure Token Issuance appeared first on Herond Blog.

In the digital era, robust identity management is critical for protecting assets and sensitive data. Token Provision is the foundational process that ensures security credentials, such as OTP codes, access keys, or cryptographic material, are securely issued and delivered to a user or device. It’s the essential first step in implementing Multi-Factor Authentication (MFA) and building a strong defense against account takeover attacks. This ultimate guide breaks down the concept, explores the core mechanisms, outlines key security standards, and details the best practices for implementing secure Token Provisioning to establish unwavering trust in your authentication system.

What is Token Provision?

Definition and Context of Token Provision

What is Token Provisioning?

The secure process of creating, issuing, and distributing a security token (credential) to a user or device for authentication and authorization.

Token vs. Password

A Password is for initial identity verification; a Token is a time-bound, cryptographic key used to prove identity after initial verification.

Core Importance

It’s the foundation of modern security (Zero Trust), allowing secure digital communication without the risk of constantly transmitting sensitive login credentials.

Key Use Cases of Token Provision

API Authentication

Provides secure access to web services and microservices using industry standards like JWT (JSON Web Tokens) and OAuth 2.0.

Single Sign-On (SSO)

Enables users to transfer their logged-in status via tokens, granting access to multiple applications without repeated logins.

IoT Device Security

Crucial for issuing unique tokens to embedded devices, ensuring secure, verifiable communication with cloud and backend systems.

The Token Provision Lifecycle

Phase 1: Initiation and Issuance

Issuance Request

The user or device begins by submitting initial verification data (e.g., username/password or cryptographic key) to the Authorization Server.

Token Creation

The server successfully verifies the identity and securely mints the token, embedding key authorization details, scope, and an expiration time.

Token Claims (Core Data)

The issued token contains essential data fields, known as Claims, including the Subject (who the token is for), the Issuer (who created it), and the Expiration (the token’s time-to-live or TTL).
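As a rough illustration, those core Claims can be sketched as a simple payload. Field names follow the registered JWT claim names; every value below is hypothetical:

```python
import json
import time

now = int(time.time())

# The three core Claims named above, plus two common companions.
claims = {
    "sub": "user-12345",                # Subject: who the token is for
    "iss": "https://auth.example.com",  # Issuer: who created the token
    "iat": now,                         # Issued-at timestamp
    "exp": now + 900,                   # Expiration: a 15-minute TTL
    "scope": "read:profile",            # what the bearer is authorized to do
}

payload = json.dumps(claims)
print(payload)
```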

Phase 2: Usage and Validation

Token Attachment: The issued token is attached to every subsequent request, commonly transmitted via the Authorization: Bearer header.

Resource Validation

The Resource Server meticulously checks the token’s validity, verifying its digital signature, ensuring it hasn’t expired, and confirming its correct format before access is granted.
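A minimal sketch of those resource-server checks, assuming an HMAC-SHA256 (HS256) signed token built with Python's standard library; the key and claim values are illustrative, not any particular provider's implementation:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # illustrative key; load from secure config in practice

def b64url(raw: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def sign(header: dict, payload: dict) -> str:
    """Mint an HS256-style token: b64url(header).b64url(payload).b64url(mac)."""
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    mac = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(mac)

def validate(token: str) -> bool:
    """Resource-server checks: verify the signature first, then the expiry claim."""
    try:
        signing_input, _, sig = token.rpartition(".")
        expected = b64url(hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest())
        if not hmac.compare_digest(sig, expected):
            return False  # tampered with, or signed by someone without the key
        payload_b64 = signing_input.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
        # Note: the payload is merely Base64-encoded, not encrypted -- anyone can read it.
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return payload.get("exp", 0) > time.time()  # reject expired tokens
    except (ValueError, KeyError, IndexError):
        return False  # malformed token

token = sign({"alg": "HS256", "typ": "JWT"}, {"sub": "user-1", "exp": int(time.time()) + 300})
print(validate(token))                                  # True: valid signature, not expired
print(validate(token.rpartition(".")[0] + ".invalid"))  # False: signature check fails
```

Real deployments would use a vetted library rather than hand-rolled parsing, but the order of operations is the same: signature before claims, and fail closed on anything malformed.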

Phase 3: Expiration and Revocation

Expiration (TTL)

The most crucial security mechanism; the token is automatically invalidated once its predetermined Time-to-Live (TTL) expires, limiting potential misuse.

Token Refresh

A process using a Refresh Token to securely obtain a brand new token pair, allowing the user to maintain their session without needing to re-enter login credentials.

Revocation

The immediate act of invalidating an active token before its scheduled expiry (e.g., during user logout or upon detection of suspicious, unauthorized activity).

Popular Provisioning Model: JWT (JSON Web Tokens)

Structure: JWTs are stateless tokens composed of three parts: the Header, the Payload (Claims), and the Signature.

Key Advantage: They are stateless and self-contained, meaning the resource server can validate them locally, significantly reducing database and server load.

Security Note: Never store sensitive data in the Payload, as it is only Base64-encoded (easily readable), not encrypted by default.

OAuth 2.0 (Authorization Framework)

Protocol, Not Token: OAuth 2.0 is an authorization framework and a protocol that governs how tokens are issued, not the token itself (e.g., via the Authorization Code or Client Credentials flows).

Token Pair Use: It relies on a token pair: the short-lived Access Token (used for resource access) and the long-lived Refresh Token (used securely for token renewal without re-login).

Device Provisioning: Symmetric/Asymmetric Key Cryptography

Device Focus: Essential in IoT environments and for resource-constrained devices, often utilizing Symmetric or Asymmetric Key Cryptography.

Key Security: Focuses on the safe creation and distribution of device-unique secret keys or certificate chains.

Secure Method: Keys are typically provisioned directly into secure hardware modules (e.g., TPM, Secure Element) during manufacturing.

Best Practices for Secure Token Issuance and Management

Setting Optimal Expiration Times

Access Token Lifespan

Set a short lifespan for Access Tokens (e.g., 5 to 15 minutes) to drastically minimize the window for potential theft and misuse.

Refresh Token Security

Refresh Tokens can have a longer lifespan but must be stored and handled with the highest level of security (often in an HttpOnly cookie sent only over HTTPS).
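A sketch of the Set-Cookie header an authorization server might emit for a refresh token, using Python's standard http.cookies module; the token value, path, and 14-day lifespan are assumptions for illustration:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["refresh_token"] = "opaque-refresh-token-value"
morsel = cookie["refresh_token"]
morsel["httponly"] = True           # not readable from client-side JavaScript (mitigates XSS theft)
morsel["secure"] = True             # only ever sent over HTTPS
morsel["samesite"] = "Strict"       # withheld on cross-site requests (mitigates CSRF)
morsel["max-age"] = 14 * 24 * 3600  # longer lifespan than the short-lived access token
morsel["path"] = "/auth/refresh"    # only sent to the token-refresh endpoint

print(morsel.OutputString())
```

Scoping the cookie's path to the refresh endpoint keeps the long-lived credential off every other request the browser makes.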

Secure Token Storage

Avoid Local Storage

Do NOT store tokens in Local Storage, as this data is highly vulnerable and easily accessible through Cross-Site Scripting (XSS) attacks.

Recommended Storage (Web)

Use HttpOnly Cookies to mitigate XSS risk, preventing client-side scripts from accessing the token.

Recommended Storage (Mobile/Desktop)

Utilize the operating system’s dedicated secure credential store (e.g., iOS Keychain, Android Keystore) for maximum protection.
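As a framework-agnostic sketch, the HttpOnly cookie recommendation can be expressed with Python's standard library; the cookie name, path, and lifetime are assumptions:

```python
from http.cookies import SimpleCookie

# Sketch of storing a refresh token in a hardened cookie. Real applications
# would set this through their web framework's response API; the cookie
# name, path, and lifetime here are illustrative.
def refresh_token_cookie(token: str) -> str:
    cookie = SimpleCookie()
    cookie["refresh_token"] = token
    morsel = cookie["refresh_token"]
    morsel["httponly"] = True          # invisible to document.cookie, blunting XSS theft
    morsel["secure"] = True            # only ever sent over HTTPS
    morsel["samesite"] = "Strict"      # withheld on cross-site requests (CSRF defence)
    morsel["path"] = "/auth/refresh"   # only sent to the refresh endpoint
    morsel["max-age"] = 30 * 24 * 3600
    return morsel.OutputString()
```

The returned string is the value of the Set-Cookie response header; because of the HttpOnly flag, no client-side script can ever read the token back.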

Effective Revocation Mechanism

Revocation Check

Implement a centralized Revocation List or database check to instantly verify the validity of all Refresh Tokens and session integrity.

Mandatory Revocation

Enforce automatic, immediate token invalidation (revocation) upon detection of abnormal login attempts or any user password changes.
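A revocation check reduces to a deny-list lookup. This in-memory sketch stands in for the centralized store the text describes; in practice the set would live in a shared database or cache so every auth server sees revocations instantly:

```python
# Minimal sketch of a centralized revocation check for refresh tokens.
# The function names are ours; a real deployment backs this with a shared
# store (database, Redis, etc.) rather than a process-local set.
revoked_tokens: set[str] = set()

def revoke(token: str) -> None:
    revoked_tokens.add(token)

def revoke_all_for_user(user_tokens: list[str]) -> None:
    """Called on password change or suspicious login: invalidate everything."""
    for t in user_tokens:
        revoke(t)

def is_valid(token: str) -> bool:
    return token not in revoked_tokens

revoke_all_for_user(["tok-1", "tok-2"])
```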

Ensuring Token Integrity

Signature Mandatory: Always use Digital Signatures (via HMAC, RSA, or ECDSA) as a mandatory step when issuing tokens.

Tamper Prevention: The signature ensures the token’s contents have not been tampered with or modified by an attacker during transit.
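A minimal HS256 signing-and-verification round trip, built only on the standard library to show the mechanics (the secret is a placeholder; production code would use a vetted library such as PyJWT and load keys from a secret store):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # illustrative only; real keys come from a secret store

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict) -> str:
    """Minimal HS256 JWT: base64url(header).base64url(payload).signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str) -> bool:
    """Recompute the signature; any change to header or payload breaks it."""
    header, body, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, b64url(expected))  # constant-time compare

token = sign_jwt({"sub": "user-42", "role": "customer"})
```

Because the signature covers both the header and the payload, flipping even a single character in either part makes verify_jwt return False.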

Conclusion: The Future of Passwordless Authentication

Token Provisioning is more than just a setup process; it is the most crucial security checkpoint in modern authentication and the foundational pillar of a robust Zero Trust architecture. By mastering the core mechanisms, from the short lifespan of an Access Token to the secure lifecycle management via a Refresh Token, you can drastically reduce the risk of credential theft. Always remember to prioritize optimal TTL settings, avoid vulnerable Local Storage, and maintain a mandatory revocation mechanism. Deploying tokens securely using standards like JWT and OAuth 2.0 ensures not only seamless user experience but also guarantees digital trust essential for every secure transaction and application access.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.

Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser

DM our official X: @HerondBrowser

Technical support topic on https://community.herond.org

The post Token Provision Explained: The Ultimate Guide to Secure Token Issuance appeared first on Herond Blog.



How to Mine Litecoin in 2025: A Step-by-Step Guide for Beginners

This "How to Mine Litecoin in 2025: A Step-by-Step Guide for Beginners" is specifically designed for you. The post How to Mine Litecoin in 2025: A Step-by-Step Guide for Beginners appeared first on Herond Blog.

Heard about the long-term potential of Litecoin (LTC) and ready to enter the mining game? If you’re a beginner finding the process intimidating, you’re in the right place! Mining Litecoin in 2025 is still a profitable and viable venture, but it requires the correct strategy and setup. This “How to Mine Litecoin in 2025: A Step-by-Step Guide for Beginners” is specifically designed for you. We break down every step, from selecting the right ASIC hardware to joining an efficient mining pool. It’s time to turn your learning into passive income. Let’s start mining LTC effectively and confidently today!

How to Mine Litecoin – The Fundamentals: Hardware, Wallet, and Pool

Choosing Your Mining Hardware

The ASIC Mandate:

2025 Reality: GPU mining is no longer profitable for LTC due to extreme network difficulty.

The Solution: Only dedicated ASIC (Application-Specific Integrated Circuit) miners, built for the Scrypt algorithm, offer the necessary power for competition and profit.

Key Specifications to Look For:

Hashrate (MH/s): The raw speed of mining (higher is better).

Power Consumption (Watts): The electricity needed to run the unit (lower is better).

Efficiency (J/MH): The most crucial metric—Joules per Megahash—for determining long-term profitability.

Recommended ASIC Miners (for Small/Home Scale):

Current, highly efficient Scrypt ASIC models (e.g., the Antminer L7 or newer equivalents) offer the best returns at this scale.

Power & Cooling Considerations:

Electrical Load: Verify your home’s wiring and circuit breakers can safely handle the sustained power draw of ASIC units.

Thermal Management: Proper ventilation and cooling are essential to maintain equipment lifespan and prevent overheating.

Setting up Your Crypto Infrastructure

Litecoin Wallet Setup (Security First)

Wallet Types Recommended: Prioritize security by using reliable options like Hardware Wallets (e.g., Ledger, Trezor) for cold storage, or the official desktop wallet for maximum control over your private keys.

Security Focus: Use your chosen wallet to create a secure receiving address where all your mining rewards will be sent.

Essential Rule: Always keep your private keys backed up and offline.

Choosing and Understanding the Mining Pool

Why Pool Mining is Essential: For beginners, Pool Mining is mandatory because it combines your miner’s smaller hashrate with thousands of others to find blocks together.

Consistent Payouts: Pools ensure you receive small, consistent payouts frequently, making the process predictable.

Avoid Solo Mining: Solo Mining is extremely high-risk and is not recommended for beginners due to the low chance of finding an LTC block independently.

Reliable Pools: Choose reputable pools with low fees and high uptime (e.g., LitecoinPool, F2Pool, AntPool) to ensure consistent earnings.

Step-by-Step Configuration and Setup: How to Mine Litecoin

Step 1: Secure Your Wallet

Acquire Wallet: Download and install your chosen secure wallet (e.g., Herond Wallet).

Create Address: Generate a Litecoin receiving address specifically for your mining rewards.

Safety First: Ensure you back up your recovery phrase (seed phrase) immediately and store it securely offline – never on your computer or in the cloud.

Step 2: Register with the Mining Pool

Pool Account: Sign up and verify your account with your chosen, reputable Litecoin mining pool.

Create Worker: Establish a Worker Name (this acts as your miner’s unique identity on the pool).

Set Credentials: Set a simple password (this is typically used only for the worker, not your main pool login) to finalize the setup.

Goal: Secure your unique pool credentials needed for the ASIC configuration in the next step.

Step 3: Physical Setup and Connection

Power Up: Safely connect your ASIC miner to a suitable, stable power supply outlet.

Wired Internet (Critical): Connect the miner directly to your router or switch using a reliable Ethernet cable.

Avoid Wi-Fi: Wi-Fi is typically unreliable for mining stability and should be avoided to ensure consistent hashrate.

Goal: Establish stable power and network connection necessary for the configuration phase.

Step 4: Configuring the Miner Software

Access Interface: Locate and access the ASIC miner’s web interface using its specific IP address in your browser.

Input Pool Data: Enter the required pool information: Pool URL, Port number, your Worker Name, and the worker password.

Start Mining: Save and apply the settings to instantly initiate the mining process and begin submitting shares to the pool.

Goal: Successfully connect your hardware to the mining pool to start generating LTC rewards.

Profitability and Monitoring on How to Mine Litecoin

Calculating Your Potential Profit

Three Key Variables: Profitability depends on three core metrics: your miner’s Hashrate (power), your Electricity Cost (per kWh), and the Pool Fees charged.

Use the Calculator: Use an online Mining Profitability Calculator to estimate daily/monthly earnings.

Critical Input: Always input your accurate electricity cost (per kWh). This is the largest operational expense and determines whether your mining operation is profitable or not.

Goal: Determine your potential Return on Investment (ROI) and ensure your revenue exceeds your ongoing energy costs.
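The three variables combine as simple arithmetic. All figures below are made-up placeholders (loosely Antminer L7-class specs); use a live profitability calculator with your real hardware and electricity rate:

```python
# Back-of-the-envelope Litecoin mining profit sketch. Every number here is an
# illustrative placeholder: plug in your miner's real specs, your metered
# electricity rate, and a current estimate of LTC earned per MH/s per day.
def daily_profit(hashrate_mhs: float,
                 power_watts: float,
                 electricity_usd_per_kwh: float,
                 ltc_per_mhs_per_day: float,
                 ltc_price_usd: float,
                 pool_fee_pct: float = 1.0) -> float:
    revenue = hashrate_mhs * ltc_per_mhs_per_day * ltc_price_usd
    revenue *= 1 - pool_fee_pct / 100               # pool takes its cut first
    energy_cost = power_watts / 1000 * 24 * electricity_usd_per_kwh
    return revenue - energy_cost

# Example with made-up figures (roughly L7-class hashrate and power draw):
profit = daily_profit(hashrate_mhs=9_500, power_watts=3_425,
                      electricity_usd_per_kwh=0.10,
                      ltc_per_mhs_per_day=0.00002, ltc_price_usd=100.0)
```

A negative result means your electricity rate alone makes the operation unprofitable, regardless of hardware cost.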

Monitoring Your Performance

ASIC Health Check: Regularly check the miner’s web interface to monitor its Hashrate (output), Temperature (heat level), and Fan Speed (cooling efficiency).

Pool Performance: On the mining pool dashboard, closely track Accepted Shares versus Rejected Shares. Aim for low rejection rates to maximize profit.

Payout Management: Understand the pool’s Payment Threshold (minimum balance for withdrawal) and Payout Frequency to manage your cryptocurrency earnings effectively.

Goal: Ensure the hardware is running optimally and maximize the number of successful contributions to the pool.
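Tracking the rejection rate from the pool dashboard is simple arithmetic; a figure creeping above roughly 1-2% (the exact acceptable threshold varies by pool) usually signals network or hardware trouble:

```python
# Quick pool-side health check: what share of submitted work is being rejected?
def rejection_rate(accepted: int, rejected: int) -> float:
    """Return rejected shares as a percentage of all submitted shares."""
    total = accepted + rejected
    return 0.0 if total == 0 else rejected / total * 100

rate = rejection_rate(accepted=9_900, rejected=100)  # 1% of shares rejected
```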

Essential Security and Maintenance on How to Mine Litecoin

Security Protocols

Wallet Security (Cold Storage): Always store your private keys and recovery phrases offline (cold storage). Never keep them on a connected device to protect your mined LTC.

Miner Security (ASIC Password): Immediately change your ASIC miner’s default login password. Use a strong, unique password to prevent unauthorized access and control.

Goal: Protect both your hardware investment and your cryptocurrency earnings from hackers.

Longevity and Maintenance

Cooling is Key: Maintain proper Airflow and Dust Control to prevent overheating and ensure the long-term longevity and stable performance of your ASIC miner.

Troubleshooting Basics: Learn basic troubleshooting for common alerts, such as dealing with an offline worker (network issue) or resolving high temperature alerts (airflow issue).

Goal: Protect your hardware investment by ensuring the miner operates within safe thermal limits 24/7.

Conclusion

Successfully mining Litecoin in 2025 hinges entirely on a few key actions: you must invest in high-efficiency ASIC hardware built specifically for the Scrypt algorithm, as GPU mining is no longer viable. Once you have your ASIC, prioritize immediate setup by joining a reliable mining pool to ensure consistent payouts, and configure your miner’s software with your pool credentials and a secure wallet address. Crucially, long-term profitability relies on precise calculation, where your miner’s hashrate must consistently overcome your electricity cost per kWh; finally, safeguard your operation by using cold storage for your rewards and maintaining excellent airflow and cooling for hardware longevity.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.

Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser

DM our official X: @HerondBrowser

Technical support topic on https://community.herond.org

The post How to Mine Litecoin in 2025: A Step-by-Step Guide for Beginners appeared first on Herond Blog.



Aergo

BC 101 #7: DAO, Standardization, and Neutrality


A Decentralized Autonomous Organization (DAO) acts as a programmable coordination layer, recording proposals, votes, and outcomes through immutable or verifiable channels. This ensures that every decision can be audited and traced.

For blockchain systems spanning a broad spectrum of applications — from enterprise solutions and government infrastructure to consumer-facing services — this structure provides the transparency and accountability required by regulated entities while enabling decentralized control.

DAO governance delivers substantial value by providing a standardized, neutral framework for coordination that reduces operational and regulatory friction.

Third-Party and In-House DAO Infrastructures

In recent years, the infrastructure supporting DAOs has advanced significantly. A variety of third-party governance solutions now offer stable, enterprise-ready interfaces for managing proposals, conducting votes, and executing multi-signature transactions. Some noteworthy platforms include:

Snapshot: An off-chain, gasless voting platform widely used across leading protocols. It allows flexible voting strategies, quorum requirements, and verifiable results without introducing high transaction costs.

Tally: A fully on-chain governance dashboard built on Ethereum, designed for transparency and auditability of protocol votes, treasury management, and proposal lifecycle tracking.

These solutions form a growing middleware ecosystem that brings governance to the same level of technical maturity as enterprise resource planning systems.

At the same time, in-house DAO frameworks extend beyond generic governance tooling. They integrate DAO logic with the project’s native identity, treasury, and compliance layers, enabling seamless coordination between on-chain and organizational processes. This approach ensures that governance not only reflects community consensus but also aligns with operational and regulatory realities.

DAO Governance as a Mechanism for Neutrality

DAO governance reinforces network neutrality, a crucial characteristic for projects that operate across multiple jurisdictions or regulatory contexts. This structural neutrality diminishes the concentration of control that can lead to compliance issues and enables projects to remain resilient during regulatory or organizational changes.

For blockchain systems aimed at enterprises, DAO infrastructure provides three measurable benefits:

Regulatory Adaptability: Transparent proposal and voting systems create a verifiable governance record suitable for audits, disclosures, or compliance reviews.

Operational Continuity: Distributed governance logic allows decision-making to persist independently of any single corporate entity or leadership group.

Stakeholder Alignment: Token-weighted or role-based participation aligns validators, contributors, and investors under a unified, rule-based coordination framework.

Toward Structured and Resilient Governance

As blockchain networks evolve into critical data and financial infrastructure, governance must progress beyond mere symbolic decentralization. DAO systems offer a structured, compliant, and resilient approach to managing complex ecosystems.

DAOs are not merely voting or staking platforms. They serve as the operational core that defines how decentralized systems make, record, and enforce decisions. Only with a well-structured DAO model can projects establish the legal, operational, and procedural foundation required to function as sustainable organizations.

BC 101 #7: DAO, Standardization, and Neutrality was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

Why MFA downgrade attacks could be the next AI security crisis

Downgrade attacks expose MFA flaws, putting both human and AI identities at serious security risk.



Passkey use doubles year over year; Google, Amazon lead in authentications

Roblox, Microsoft and the Gemini crypto exchange are among the top 5 fastest-growing passkey domains.


Thursday, 30. October 2025

Indicio

Five reasons why AI needs decentralized identity

AI systems need decentralized identity. Why? Only decentralized identity provides the authentication, consent, delegated authority, structure, and governance needed for AI to deliver value.

By Trevor Butterworth

AI is going to be everywhere. From virtual assistants to digital twins and autonomous systems, it will reinvent how we do everything. But only if it can be trusted with high value data, only if it can access high quality data, only if there’s user consent to that data being shared, and only if it can be easily governed.

This is where decentralized identity comes in. It removes obstacles, solves problems, and does so in a way that delivers next-generation security. Here are the five ways decentralized identity and its key technology — Verifiable Credentials — puts AI agents and autonomous AI systems on the path to trust and monetization.

1. Authentication

We are going to need to authenticate AI agents. They are going to need to authenticate us. It’s an obvious trust issue when so much data is at stake.

“We” means everything that interacts with an agent — people, organizations, devices, robots, and other AI agents.

Traditional forms of identity authentication aren’t going to cut it (see this recent article by Hackernoon — “When OAuth Becomes a Weapon: AI Agents Authentication Crisis”).

And given the current volume of losses to identity fraud (the estimated global cost of digital fraud was $534 billion over the past 12 months, according to Infosecurity Magazine), the idea that we should now open up massive quantities of high-value data to the same security vulnerabilities is insane.

The first fake AI agent that scams a major financial customer will cause panic, burn trust, and trigger regulation.

Only decentralized, Verifiable Credentials can provide the seamless, secure, and AI-resistant authentication to identify both AI agents and their users. And they enable authentication to occur before any data is shared.

2. Consent

AI needs data to work — and that means a lot of personal data and user data. If you want AI solutions that require access to personal data to comply with GDPR and other data privacy regulations, the “data subject” needs to be able to consent to sharing their data. Otherwise, that data is going nowhere — or you’re headed toward compliance hell.

Verifiable Credentials are a privacy-by-design technology. Consent is built into how they work. This simplifies compliance issues and can be easily recorded for audit.

3. Delegated authority

AI agents are going to need to access multiple data sources. While Verifiable Credentials and digital wallets allow people and organizations to hold their own data, they are not necessarily going to hold all the data needed for a task.

For example, banks and financial institutions have multiple departments. An AI agent that is given permission to access an account holder’s information, will need to share that information across different departments either to access the customer’s data or connect it to other data. It might need to share the data with other agents or external organizations.

Verifiable Credentials make it easy for a person to delegate their authority to an AI agent to go where it needs to go to execute a task, radically simplifying compliance. Decentralized governance (more of which later) simplifies establishing trust between different organizations and systems.

4. Structured data

AI agents and systems need good quality data to do their job (and therefore earn their keep). Verifiable Credentials issued by trusted data sources contain information that’s tamper-proof, that can come from validated documents, and that is structured in a way that each data point can be selectively disclosed.

In other words by putting information into a Verifiable Credential, we minimize error while structuring it to be easy to consume. In the process, we enable data and purpose minimization to meet GDPR requirements.

5. Decentralized governance

Finally, we come to one of the lesser-known features of decentralized identity: decentralized ecosystem governance — or, as we call it, DEGov — which is based on the Decentralized Identity Foundation Credential Trust Establishment specification.

DEGov is a way for humans to structure interaction through trust. The governance authority for a particular use case publishes trust lists for credential issuers and credential verifiers in machine-readable form. This is downloaded by each participant in a credential ecosystem, and it enables a credential holder’s software to automatically recognize that an AI agent issued by a given organization is trustable. These files also contain rules for data presentation workflows.

DEGov enables you to easily orchestrate data sharing: for example, a Digital Travel Credential issued by an airline for a passenger identity can be used by a hotel to automate check-in because the hotel’s verifier software has downloaded a governance file containing the airline as trusted credential issuer (this also facilitates offline verification as governance rules are cached).

The value of decentralized governance really comes to the fore when you start building autonomous systems with multiple AI agents. You can easily program which agent can interact with which resource and what information needs to be presented. You can orchestrate interaction and authentication across different departments, domains, sectors.

As you can also enable devices, such as sensors, to generate Verifiable Credentials containing the data they record, you can rapidly share trusted data across domains for use by pre-permissioned AI agents.

In sum, decentralized identity is more than identity or identity authentication — it’s a way to authenticate and share any kind of data across any kind of environment, seamlessly and securely. It’s a way to create digital relationships between participants, even virtual ones.

Indicio ProvenAI

We designed Indicio ProvenAI to do all of the above. It’s the counterpart of the Proven technology we’re deploying to manage borders, KYC, travel and everything in between. It’s why we are now a member of the NVIDIA Inception program.

We see decentralized identity as the key to AI unlocking the right kind of data in the right way. It’s the path to trust, and trust means value.

Contact Indicio to learn how we’re building a world filled with intelligent authentication.

The post Five reasons why AI needs decentralized identity appeared first on Indicio.


SC Media - Identity and Access

Botnets driving attacks on PHP servers, IoT devices, cloud gateways

These automated attacks are exploiting CVEs from nearly a decade ago.



Credential-stealing npm packages fuel ongoing PhantomRaven campaign

More than 120 malicious npm packages, which have been downloaded over 86,000 times, have been launched to pilfer authentication tokens, GitHub credentials, and CI/CD secrets from developers as part of the PhantomRaven attack campaign that has been ongoing since August, BleepingComputer reports.



ComplyCube

The CryptoCubed Newsletter: October Edition

In this edition of CryptoCubed, we look at the top crypto cases worldwide. This includes Canada's record-breaking $177 million fine against Cryptomus, Dubai's ongoing enforcement sweep on virtual asset firms, and Trump's pardon. The post The CryptoCubed Newsletter: October Edition first appeared on ComplyCube.



How to Use a KYC AML Pricing Benchmark Effectively

Defining a pricing benchmark for KYC and AML is an important step in managing compliance expenses effectively. Understanding the factors that drive the costs of KYC and AML helps organizations make more informed pricing decisions. The post How to Use a KYC AML Pricing Benchmark Effectively first appeared on ComplyCube.



Elliptic

Hosted vs unhosted wallets: Compliance risks and practical solutions

Any institution engaging with digital assets faces a persistent compliance challenge: How should you handle transactions involving unhosted wallets when regulators have not yet provided clear guidance on specific obligations? As customer demand for crypto services intensifies, the question of hosted vs unhosted wallets has moved from theoretical to operationally urgent.



SC Media - Identity and Access

Lockpick chaos, CoPhish, Atlas, Turing, ForumTroll, PKD, Kilgore Trout, Aaran Leyland - SWN #524


Thales Group

AT&T and Thales collaborate to revolutionize IoT deployments with new eSIM solution

AT&T and Thales collaborate to revolutionize IoT deployments with new eSIM solution

30 Oct 2025

AT&T and Thales introduce a next generation eSIM solution, powered by the latest GSMA IoT specification (SGP.32), giving enterprises a consolidated platform to remotely and securely manage IoT subscriptions, while preserving device integrity on a highly secure and reliable network. Backed by Thales’ “secure by design” approach, this solution targets the highest level of cybersecurity for IoT devices and supports compliance with evolving global cybersecurity regulations. Optimized for large-scale IoT deployments, the new eSIM management platform simplifies operations, reduces costs, and delivers advanced automation beyond SGP.32 standards to support diverse industries and device types.

With over 5.8 billion IoT cellular connections expected globally by 2030 (GSMA Intelligence) — powering everything from smart meters to wearable health trackers — the need for secure, scalable, and easy-to-manage connectivity is greater than ever. AT&T, a leader in connectivity and IoT solutions, and Thales, a global leader in advanced Cyber & Digital technologies, announce the launch of a new eSIM solution designed to help businesses remotely activate and manage IoT devices. This eSIM solution, powered by Thales Adaptive Connect (TAC), becomes a key part of AT&T’s global IoT solution, AT&T Virtual Profile Management for IoT, and can support many industries worldwide including automotive, smart cities, healthcare and utilities.

Compliant with the GSMA SGP.32 standard1, the new solution enables customers to ship connected devices anywhere in the world with one single, pre-integrated eSIM from Thales, then seamlessly activate the correct local connectivity profile remotely, eliminating the need for any physical access to it. This results in faster launches and simpler logistics for global IoT deployments. It also enables AT&T and its customers to easily manage connectivity policies, diagnostics, and subscription changes entirely over the air, through a single unified industry-certified interface.

This solution also adds advanced automation to simplify the remote eSIM management of large numbers of devices. It automates complex tasks, such as switching subscriptions or updating fleet rules, so enterprises can spend less time on logistics and operations while bringing new products and services to market faster.

Thanks to these advanced features, Thales’ eSIM solution (TAC) gives companies the flexibility to localize within AT&T network partners or adjust devices’ subscriptions across large fleets without hardware changes, helping optimize costs, supply chains, coverage, and performance.

The service is now available for commercial use and supports customers worldwide.

“At AT&T, we deliver intelligent IoT solutions you can trust — highly secure, end-to-end, and built to scale,” said Cameron Coursey, VP of AT&T Connected Solutions. “Our state-of-the-art approach, paired with Thales’ solution, will help customers reduce friction and gain control of managing their own devices with reliable connectivity.”

“We are entering a new era for remote eSIM Provisioning, ready to power billions of IoT devices, and we are proud to collaborate with AT&T in delivering smarter and safer IoT connectivity around the world,” said Eva Rudin, EVP Mobile Connectivity Solutions at Thales. “With Thales Adaptive Connect, we’re ensuring that every connected device benefits from strong security, reliable service, and simplified management — from the first connection and throughout its lifetime.”

1 The GSMA SGP.32 standard is the latest specification from the GSM Association for eSIM (embedded SIM) Remote SIM Provisioning (RSP), covering the remote eSIM management of Internet of Things (IoT) devices and other types of mobile device deployments.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About AT&T

We help more than 100 million U.S. families, friends and neighbors, plus nearly 2.5 million businesses, connect to greater possibility. From the first phone call 140+ years ago to our 5G wireless and multi-gig internet offerings today, we @ATT innovate to improve lives. For more information about AT&T Inc. (NYSE:T), please visit us at about.att.com. Investors can learn more at investors.att.com.


IDnow

Breaking down biases in AI-powered facial verification.

How IDnow’s latest collaborative research project, MAMMOth, will make the connected world fairer for all – regardless of skin tone.

In a bid to break down the barriers of bias, IDnow has been collaborating with 12 European partners, including academic institutions, associations and private companies, as part of the MAMMOth project, for about a year.  

Funded by the European Research Executive Agency, the goal of the three-year long project is to study existing biases and offer a toolkit for AI engineers, developers and data scientists so that they may better identify and mitigate biases in datasets and algorithm outputs. 

Three use cases were identified:  

Face verification in identity verification processes.
Evaluation of academic work. In the academic world, the reputation of a researcher is often tied to the visibility of their scientific papers and how frequently they are cited. Studies have shown that on certain search engines, women and authors from less prestigious countries/universities tend to be less represented.
Assessment of loan applications.

IDnow predominantly focused on the face verification use case, with the aim of implementing methods to mitigate biases found in algorithms.

Data diversity and face verification bias.

Even the most state-of-the-art face verification models are typically trained on conventional public datasets, which feature an underrepresentation of minority demographics. A lack of diversity in data makes it difficult for models to perform well on underrepresented groups, leading to higher error rates for people with darker skin tones.  

To address this issue, IDnow proposed using a ‘style transfer’ method to generate new identity card photos that mimic the natural variation and inconsistencies found in real-world data. Augmenting the training dataset with these synthetic images improves model robustness by exposing the model to a wider range of variations, reduces bias against darker-skinned faces, significantly lowers error rates for darker-skinned users, and provides a better user experience for all.
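The augmentation idea can be illustrated with a toy sketch. Here a hypothetical `apply_style` function stands in for the learned style-transfer model, applying global color-calibration shifts to flat pixel lists; all function names, parameters, and ranges are illustrative assumptions, not IDnow’s actual method.

```python
import random

def apply_style(pixels, gain, bias):
    """Apply a simple global color-calibration shift (a hypothetical
    stand-in for a learned style-transfer model)."""
    return [max(0, min(255, round(p * gain + bias))) for p in pixels]

def augment_dataset(images, n_variants=3, seed=0):
    """Expand a training set with synthetic variants that mimic the
    color-calibration drift seen in real ID-card photos."""
    rng = random.Random(seed)
    augmented = list(images)  # keep the originals
    for img in images:
        for _ in range(n_variants):
            gain = rng.uniform(0.8, 1.2)   # contrast-like variation
            bias = rng.uniform(-20, 20)    # brightness-like variation
            augmented.append(apply_style(img, gain, bias))
    return augmented

original = [[10, 120, 200], [35, 90, 250]]  # toy "images" as pixel lists
training_set = augment_dataset(original)
print(len(training_set))  # 2 originals + 2*3 synthetic variants = 8
```

In the real pipeline the transformation is learned from data rather than drawn from fixed ranges, but the principle is the same: the model sees many plausible renderings of each identity photo instead of one.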

The MAMMOth project has equipped us with the tools to retrain our face verification systems to ensure fairness and accuracy – regardless of a user’s skin tone or gender. Here’s how IDnow Face Verification works.

When registering for a service or onboarding, IDnow runs the Capture and Liveness step, which detects the face and assesses image quality. We also run a liveness/anti-spoofing check to ensure that photos, screen replays, or paper masks are not used. 

The image is then cross-checked against a reference source, such as a passport or ID card. During this stage, faces from the capture step and the reference face are converted into compact facial templates, capturing distinctive features for matching. 

Finally, the two templates are compared to determine a “match” vs. “non‑match”, i.e. do the two faces belong to the same person or not? 
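The final matching step can be sketched as a similarity comparison between two embedding vectors. The metric, vector size, and threshold below are illustrative assumptions for exposition, not IDnow’s actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two facial-template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(selfie_template, id_template, threshold=0.8):
    """Decide "match" vs. "non-match": do the two templates likely
    belong to the same person?"""
    return cosine_similarity(selfie_template, id_template) >= threshold

# Toy 4-dimensional templates; production systems use far larger vectors.
selfie = [0.12, 0.80, 0.41, 0.33]
id_photo = [0.10, 0.78, 0.45, 0.30]
print(is_match(selfie, id_photo))  # True
```

Bias mitigation matters precisely here: if the embedding model produces less discriminative templates for some demographic groups, their similarity scores cluster closer to the threshold and error rates rise.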

Through hard work by IDnow and its partners, we developed the MAI-BIAS Toolkit to enable developers and researchers to detect, understand, and mitigate bias in datasets and AI models.

We are proud to have been a part of such an important collaborative research project. We have long recognized the need for trustworthy, unbiased facial verification algorithms. This is the challenge that IDnow and MAMMOth partners set out to overcome, and we are delighted to have succeeded.

Lara Younes, Engineering Team Lead and Biometrics Expert at IDnow.
What’s good for the user is good for the business.

While the MAI-BIAS Toolkit has demonstrated clear technical improvements in model fairness and performance, the ultimate validation, as is often the case, will lie in the ability to deliver tangible business benefits.  

IDnow has already begun to retrain its systems with learnings from the project to ensure our solutions are enhanced not only in terms of technical performance but also in terms of ethical and social responsibility.

Top 5 business benefits of IDnow’s unbiased face verification.

Fairer decisions: The MAI-BIAS Toolkit ensures all users, regardless of skin color or gender, are given equal opportunities to pass face verification checks, ensuring that no group is unfairly disadvantaged.
Reduced fraud risks: By addressing biases that may create security gaps for darker-skinned users, the MAI-BIAS Toolkit strengthens overall fraud prevention by offering a more harmonized fraud detection rate across all demographics.
Explainable AI: Knowledge is power, and the Toolkit provides actionable insights into the decision-making processes of AI-based identity verification systems. This enhances transparency and accountability by clarifying the reasons behind specific algorithmic determinations.
Bias monitoring: Continuous assessment and mitigation of biases are supported throughout all stages of AI development, ensuring that databases and models remain fair with each update to our solutions.
Reducing biases: By following the recommendations provided in the Toolkit, research methods developed within the MAMMOth project can be applied across industries and contribute to the delivery of more trustworthy AI solutions.

As the global adoption of biometric face verification systems continues to increase across industries, it’s crucial that any new technology remains accurate and fair for all individuals, regardless of skin tone, gender or age.

Montaser Awal, Director of AI & ML at IDnow.

“The legacy of the MAMMOth project will continue through its open-source tools, academic resources, and policy frameworks,” added Montaser. 

For a more technical deep dive into the project from one of our research scientists, read our blog ‘A synthetic solution? Facing up to identity verification bias.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Ontology

How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem

Introduction Zambia, like many African nations, is on a path toward digital transformation. With growing mobile penetration, fintech adoption, and government interest in digital services, the country needs reliable, secure, and scalable technologies to support inclusive growth. One of the most promising tools is Ontology Blockchain — a high-performance, open-source blockchain specializing in digi
Introduction

Zambia, like many African nations, is on a path toward digital transformation. With growing mobile penetration, fintech adoption, and government interest in digital services, the country needs reliable, secure, and scalable technologies to support inclusive growth. One of the most promising tools is Ontology Blockchain — a high-performance, open-source blockchain specializing in digital identity, data security, and decentralized trust.

Unlike general-purpose blockchains, Ontology focuses on building trust infrastructure for individuals, businesses, and governments. By leveraging Ontology’s features, Zambia can unlock innovation in financial inclusion, supply chain transparency, e-governance, and education.

1. Digital Identity for All Zambians

A key challenge in Zambia is limited access to official identification. Without proper IDs, many citizens struggle to open bank accounts, access healthcare, or register land. Ontology’s ONT ID (a decentralized digital identity solution) could:

Provide every citizen with a secure, self-sovereign digital ID stored on the blockchain.
Link identity with services such as mobile money, health records, and education certificates.
Reduce fraud in financial services, voting systems, and government benefit programs.

This supports Zambia’s push for universal access to identification while protecting privacy.

2. Financial Inclusion & Digital Payments

With a large unbanked population, Zambia’s fintech growth depends on trust and interoperability. Ontology offers:

Decentralized finance (DeFi) solutions for micro-loans, savings, and remittances without reliance on traditional banks.
Cross-chain compatibility to connect Zambian fintech startups with global crypto networks.
Reduced transaction fees compared to traditional remittance channels, making it cheaper for Zambians abroad to send money home.

3. Supply Chain Transparency (Agriculture & Mining)

Agriculture and mining are Zambia’s economic backbones, but inefficiencies and lack of transparency hinder growth. Ontology can:

Enable farm-to-market tracking of crops, ensuring farmers get fair prices and buyers trust product origins.
Provide traceability in copper and gemstone mining, reducing smuggling and boosting global market confidence.
Help cooperatives and SMEs access financing by proving their transaction history and supply chain credibility via blockchain records.

4. E-Government & Service Delivery

The Zambian government aims to digitize public services. Ontology Blockchain could:

Power secure land registries, reducing disputes and fraud.
Create tamper-proof records for civil registration (births, deaths, marriages).
Support digital voting systems that are transparent, verifiable, and resistant to manipulation.
Improve public procurement processes by reducing corruption through transparent contract tracking.

5. Education & Skills Development

Certificates and qualifications are often hard to verify in Zambia. Ontology offers:

Blockchain-based education records: universities and colleges can issue tamper-proof digital diplomas.
A verifiable skills database that employers and training institutions can trust.
Empowerment of youth in blockchain and Web3 development, opening new economic opportunities.

6. Data Security & Trust in the Digital Economy

Zambia’s growing reliance on mobile money and e-commerce requires strong data protection. Ontology brings:

User-controlled data sharing: individuals decide who can access their personal information.
Decentralized identity verification for businesses, preventing fraud in digital transactions.
Strong compliance frameworks to align with Zambia’s Data Protection Act of 2021.

Challenges to Overcome

Digital literacy gaps: Zambian citizens need training to use blockchain-based services.

Regulatory clarity: Zambia must craft clear policies around blockchain and cryptocurrencies.

Infrastructure: reliable internet and mobile access are essential for blockchain adoption.

Conclusion

Ontology Blockchain provides Zambia with more than just a digital ledger — it offers a trust framework for identity, finance, governance, and innovation. By integrating Ontology into key sectors like agriculture, health, mining, and public administration, Zambia can accelerate its journey toward a secure, inclusive, and transparent digital economy.

This is not just about technology: it’s about empowering citizens, building investor confidence, and positioning Zambia as a leader in blockchain innovation in Africa.

How Ontology Blockchain Can Strengthen Zambia’s Digital Ecosystem was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

Putting responsible AI into practice: IDnow’s work on bias mitigation

As part of the EU-funded MAMMOth project, IDnow shows how bias in AI systems can be detected and reduced – an important step toward trustworthy digital identity verification.

London, October 30, 2025 – After three years of intensive work, the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project has published key findings on reducing bias in artificial intelligence (AI) systems. Funded by the EU’s Horizon Europe program, the project brought together organizations from a consortium of leading universities, research centers, and private companies across Europe. 

IDnow, a leading identity verification platform provider in Europe, was directly involved in the implementation of the project as an industry partner. Through targeted research and testing, an optimized AI model was developed to significantly reduce bias in facial recognition, which is now integrated into IDnow’s solutions.

Combating algorithmic bias in practice

Facial recognition systems that leverage AI are increasingly used for digital identity verification, for example, when opening a bank account or registering for car sharing. Users take a digital image of their face, and AI compares it with their submitted ID photo. However, such systems can exhibit bias, leading to poorer results for certain demographic groups. This is due to the underrepresentation of minorities in public data sets, which can result in higher error rates for people with darker skin tones. 

A study by MIT Media Lab showed just how significant these discrepancies can be: while facial recognition systems had an error rate of only 0.8% for light-skinned men, the error rate for dark-skinned women was 34.7%. These figures clearly illustrate how unbalanced many AI systems are – and how urgent it is to rely on more diverse data. 

As part of MAMMOth, IDnow worked specifically to identify and minimize such biases in facial recognition – with the aim of increasing both fairness and reliability.

Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application. By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable.

Montaser Awal, Director of AI & ML at IDnow.
Technological progress with measurable impact

As part of the project, IDnow investigated possible biases in its facial recognition algorithm, developed its own approaches to reduce these biases, and additionally tested bias mitigation methods proposed by other project partners.

For example, as ID photos often undergo color adjustments by issuing authorities, skin tone can play a challenging role, especially if the calibration is not optimized for darker skin tones. Such miscalibration can lead to inconsistencies between a selfie image and the person’s appearance in an ID photo.  

To solve this problem, IDnow used a style transfer method to expand the training data, which allowed the model to become more resilient to different conditions and significantly reduced the bias toward darker skin tones.

Tests on public and company-owned data sets showed that the new training method achieved an 8% increase in verification accuracy – while using only 25% of the original training data volume. Even more significantly, the accuracy difference between people with lighter and darker skin tones was reduced by over 50% – an important step toward fairer identity verification without compromising security or user-friendliness. 

The resulting improved AI model was integrated into IDnow’s identity verification solutions in March 2025 and has been in use ever since.

Setting the standard for responsible AI

In addition to specific product improvements, IDnow plans to use the open-source toolkit MAI-BIAS developed in the project in internal development and evaluation processes. This will allow fairness to be comprehensively tested and documented before new AI models are released in the future – an important contribution to responsible AI development. 

“Addressing bias not only strengthens fairness and trust, but also makes our systems more robust and adoptable,” adds Montaser Awal. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”


Herond Browser

How do I Turn Off Ad Blocker in 3 Simple Steps


You’ve downloaded an ad blocker for a reason, but now that essential tool is causing friction, forcing you to figure out how to turn off your ad blocker just to bypass site paywalls, access crucial content, or stop mandatory video viewing. This guide offers the quick fix you need: a fast, universal, three-step method that works across all popular browsers and extensions. More importantly, while ad blockers are a great starting point, consider this: the Herond Browser is engineered with a smarter, more selective approach to ad filtering built in, meaning you might not even need a separate, disruptive extension at all.

Universal Guide: How do I turn off ad blocker

Step 1: Locate the Extension Icon

To begin, you need to find your ad blocker. This icon is almost always located in the top-right corner of your browser’s toolbar, typically represented by a shield. If you’re using a popular blocker like AdBlock Plus or uBlock Origin, look for their distinct logos there. This is the central control point for quickly managing its features.

Step 2: Select the Disabled Option

The most effective quick fix is selecting “disable on this site”. This is the recommended setting as it instantly turns the blocker off only for the specific website you are currently viewing. This allows you to access paywalled content or required videos immediately. The blocker remains active everywhere else, ensuring your general browsing stays ad-free.

Choosing “Pause Ad Blocker” offers a temporary fix by globally suspending the extension for a brief period, often 30 seconds or until you refresh the page. This option is best used when you are unsure if a site needs the blocker disabled, as it allows you to test content access without committing to a permanent site exclusion.

The option “Don’t run on pages in this domain” creates a permanent exception rule. Unlike the quick “disable on this site,” this actively adds the entire domain (e.g., herond.org) to your whitelist. This is useful for sites you frequently visit and trust, ensuring the blocker never activates on any page associated with that domain moving forward.
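Under the hood, a domain-wide exception typically amounts to a whitelist lookup against the page’s hostname. This minimal sketch uses a hypothetical `should_block` helper to show the idea; it is not the actual logic of any particular extension.

```python
from urllib.parse import urlparse

def should_block(url, whitelist):
    """Return False when the page's domain, or any parent domain,
    is on the user's whitelist — mirroring the effect of
    "Don't run on pages in this domain"."""
    host = urlparse(url).hostname or ""
    exempt = any(host == d or host.endswith("." + d) for d in whitelist)
    return not exempt

whitelist = {"herond.org"}
print(should_block("https://blog.herond.org/post", whitelist))  # False (whitelisted)
print(should_block("https://example.com/", whitelist))          # True  (still blocked)
```

Note how the subdomain `blog.herond.org` is also exempt: a domain rule covers every page associated with that domain, which is exactly why it suits sites you frequently visit and trust.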

Step 3: Refresh the Page

The final and crucial step to apply your change is to refresh the page (F5 or the refresh icon). Whether you choose to disable the blocker for the site or pause it temporarily, the browser needs to reload the page without the extension’s script running. This simple action immediately loads the content, allowing you to bypass the paywall or access the video without further delay.

Specific Instructions by Browser/Extension (Detailed Utility)

Chrome/Edge (Extension-Based)

For users on Chrome or Edge, managing an extension-based ad blocker requires navigating to the dedicated settings page. The fastest way to access this is by typing chrome://extensions (or edge://extensions for Edge) directly into your address bar. This grants you the detailed control panel necessary to completely disable the extension, manage its permissions, or remove it entirely from your browser.

Firefox

For Firefox users, managing your ad blocker requires navigating the Add-ons Manager. You can quickly access this by typing about:addons into your address bar, or by clicking the menu icon (three horizontal lines) and selecting “Add-ons and themes.” This centralized hub provides the detailed controls needed to fully disable, adjust specific permissions, or completely remove your ad blocking extension from the browser.

Safari

To manage ad blockers in Safari, the process is integrated directly into the browser’s preferences rather than relying on an extensions page. On a Mac, navigate to Safari -> Settings (or Preferences) and select the Websites tab. Here, you can find the Content Blockers section, allowing you to quickly disable the blocker for individual sites or adjust its general settings and permissions across the board.

Herond Browser

The Herond Browser eliminates the need for disruptive third-party extensions altogether. Its powerful ad and content blocker is managed directly within the main Settings menu, providing seamless, integrated control. This built-in approach offers a smarter, less intrusive defense, allowing you to easily adjust protections without juggling multiple add-ons.

The Smarter Way to Block Ads: Introducing Herond Browser

Why Separate Extensions Fail

Separate ad-blocking extensions often create more problems than they solve, frequently breaking website functionality and noticeably slowing down overall browser performance. Crucially, they require constant manual intervention, forcing the user to disable them repeatedly just to access content. This defeats the purpose of seamless browsing and highlights the limitation of relying on third-party add-ons for essential functionality.

Herond’s Integrated Ad Shield

Performance: Faster browsing because the blocker is native.

Selective blocking: Easily toggle blocking per site or choose to only block malicious/intrusive ads. This allows non-intrusive ads to support content creators, solving the user’s original problem more elegantly.

Security: Native integration offers deeper protection against malware and trackers.

Conclusion

You now have the simple, three-step method for managing your ad blocker and immediately regaining access to paywalled sites or crucial content. Whether you used the quick fix – selecting “disable on this site” – or adjusted the settings for a permanent exclusion, a simple page refresh is all it takes. Remember that while disabling blockers is sometimes necessary, consider shifting to the Herond Browser. Its built-in, selective ad filtering system means you avoid the need for external extensions entirely, offering a faster, smoother, and less intrusive way to browse while still controlling your online experience.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

Herond Shield: A robust adblocker and privacy protection suite.
Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

On Telegram: https://t.me/herond_browser
DM our official X: @HerondBrowser
Technical support topic on https://community.herond.org

The post How do I Turn Off Ad Blocker in 3 Simple Steps appeared first on Herond Blog.



Ockto

Fraud with forged documents is on the rise: source verification offers a solution


Naarden, 30 October 2025 – Over the past month, police arrested eight people in Amsterdam and Zaandam on suspicion of large-scale mortgage fraud, money laundering, and forgery.
According to the police, the case revolves around forged employer statements and fictitious employment contracts. It underscores once again how vulnerable processes are that rely on documents supplied by consumers.


auth0

Auth0 for Scaling Apps: Advanced Security and Authentication

Discover the three key signs that your app is outgrowing its user authentication setup. Learn to solve these challenges and scale with Auth0's advanced features.

FastID

Rewriting HTML with the Fastly JavaScript SDK

Boost web performance with Fastly’s JS SDK v3.35.0. Use the new streaming HTML rewriter to customize, cache, and transform pages faster and more efficiently.

Resilience by Design: Lessons in Multi-Cloud Readiness

Stay online when it matters most. Learn how Fastly's multi-cloud and edge strategies protect against outages, keeping your systems fast and reliable.

Thursday, 30. October 2025

SC Media - Identity and Access

New infostealer claims to extract 99% of credentials in 12 seconds

The logins.zip infostealer builder claims to exploit Chromium zero-days.



Attacker claims massive identity attack on PII at HSBC USA

Experts say an attack on PII is essentially an identity hack.



Credential-stealing npm packages hide beneath 4 layers of obfuscation

The 10 typosquatted packages imitate discord.js, TypeScript and other popular packages.



liminal (was OWI)

Redefining Age Assurance

The post Redefining Age Assurance appeared first on Liminal.co.



Elliptic

Crypto regulatory affairs: EU sanctions target A7A5 Ruble-backed stablecoin

In its latest round of sanctions on Russia, the European Union has taken aim at the A7A5 stablecoin - part of efforts to choke off Russia’s sanctions circumvention schemes. 



SC Media - Identity and Access

US absence from UN Cybercrime Treaty praised by groups over privacy abuse

Digital rights advocates and industry execs worry that treaty would lead to abuse by authoritarian regimes.



Ocean Protocol

Ocean Protocol: Q4 2025 Update

A look at what the Ocean core team has built, and what’s to come

· 1. Introduction
· 2. Ocean Nodes: from Foundation to Framework
· 3. Annotators Hub: Community-driven data annotations
· 4. Lunor: Crowdsourcing Intelligence for AI
· 5. Predictoor and DeFi Trading
· 6. bci/acc: accelerate brain-computer interfaces towards human superintelligence
· 7. Conclusion

1. Introduction

Back in June, we shared the Ocean Protocol Product Update half-year check-in for 2025 where we outlined the progress made across Ocean Nodes, Predictoor, and other Ocean ecosystem initiatives. This post is a follow-up, highlighting the major steps taken since then and what’s next as we close out 2025.

We’re heading into the final stretch of 2025, so it’s only fitting to have a look over what the core team has been working on and what is soon to be released. Ocean Protocol was built to level the playing field for AI and data. From day one, the vision has been to make data more accessible, AI more transparent, and infrastructure more open. The Ocean tech stack is built for that mission: to combine decentralized compute, smart contracts, and open data marketplaces to help developers, researchers, and companies tap into the true potential of AI.

This year has been about making that mission real. Here’s how:

2. Ocean Nodes: from Foundation to Framework

Since the launch of Ocean Nodes in August 2024, the Ocean community has shown what’s possible when decentralized infrastructure meets real-world ambition. With over 1.7 million nodes deployed across 70+ countries, the network has grown far beyond expectations.

Throughout 2025, the focus has been on reducing friction, boosting usability, and enabling practical workflows. A highlight: the release of the Ocean Nodes Visual Studio Code extension. It lets developers and data scientists run compute jobs directly from their editor — free (within defined parameters), fast, and frictionless. Whether they’re testing algorithms or prototyping dApps, it’s the quickest path to real utility. The extension is now available on the VS Code Marketplace, as well as in Cursor and other distributions, via the Open VSX registry.

We’ve also seen strong momentum from partners like NetMind and Aethir, who’ve helped push GPU-ready infrastructure into the Ocean stack. Their contribution has paved the way for Phase 2, a major upgrade that the core team is still actively working on and that’s set to move the product from PoC to real production-grade capabilities.

That means:

Compute jobs that actually pay, with a pay-as-you-go system in place
Benchmarking GPU nodes to shape a fair and scalable reward model
Real-world AI workflows: from model training to advanced evaluation

And while Phase 2 is still in active development, it’s now reached a stage where user feedback is needed. To get there, we’ve launched the Alpha GPU Testers program, for a small group of community members to help us validate performance, stability and reward mechanics across GPU nodes. Selected participants simply need to set their GPU node up and make it available for the core team to run benchmark tests. As a thank-you for their effort and uptime, each successfully tested node will receive a $100 reward.

Key information:

Node selection: Oct 24–31, 2025
Benchmark testing: Nov 3–17, 2025
Reward: $100 per successfully tested node
Total participants: up to 15, on a first-come, first-served basis. Only one node per owner is allowed.

With Phase 2 of Ocean Nodes, we will be laying the groundwork for something even bigger: the Ocean Network. Spoiler alert: it will be a peer-to-peer AI Compute-as-a-Service platform designed to make GPU infrastructure accessible, affordable, and censorship-resistant for anyone who needs it.

More details on the transition are coming soon. But if you’re running a node, building on Ocean, or following along, you’re already part of it.

What else have we launched?

3. Annotators Hub: Community-driven data annotations

Current challenge: CivicLens, ends on Oct 31, 2025

AI doesn’t work without quality data. And creating it is still a huge bottleneck. That’s why we’ve launched the Annotators Hub: a structured, community-driven initiative where contributors help evaluate and shape high-quality datasets through focused annotation challenges.

The goal is to improve AI performance by improving what it learns from: the data. High-quality annotations are the foundation for reliable, bias-aware, and domain-relevant models. And Ocean is building the tools and processes to make that easier, more consistent, and more inclusive.

Human annotations remain the single most effective way to improve AI performance, especially in complex domains like education and politics. By contributing to the Annotators Hub, Ocean community members directly help build better models that can power adaptive tutors, improve literacy tools, and even make political discourse more accessible.

For example, LiteracyForge, the first challenge, run in collaboration with Lunor.ai, focused on improving adaptive learning systems by collecting high-quality evaluations of reading comprehension material. The aim: to train AI that better understands question complexity and supports literacy tools. Here are a few highlights, as the challenge is currently being evaluated:

- 49,832 total annotations submitted
- 19,973 unique entries
- 147 annotators joined throughout the three weeks of the first challenge
- 17,581 double-reviewed annotations

The second challenge ends in just 2 days, on Friday, October 31. This time we’re analyzing speeches from the European Parliament to help researchers, civic organizations, and the general public better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible. There’s still time to jump in and become an annotator.

Yes, this initiative can be seen as a “launchpad” for a marketplace of ready-to-use, annotated data, designed to give everyone access to training-ready data that meets real-world quality standards. But more on that in an upcoming blog post.

As we get closer to the end of 2025, we’re doubling down on utility, usability, and adoption. The next phase is about scale and about creating tangible ways for Ocean’s community to contribute, earn, and build.

4. Lunor: Crowdsourcing Intelligence for AI

Lunor is building a crowdsourced intelligence ecosystem where anyone can co-create, co-own, and monetize Intelligent Systems. As one of the core projects within the Ocean Ecosystem, Lunor represents a new approach to AI, one where the community drives both innovation and ownership.

Lunor’s achievements so far, together with Ocean Protocol, include:

- Over $350,000 in rewards distributed from the designated Ocean community wallet
- More than 4,000 contributions submitted
- 38 structured data and AI quests completed

Assets from Lunor Quests are published on the Ocean stack, while future integration with Ocean nodes will bring private and compliant Compute-to-Data for secure model training.

Together with Ocean, Lunor has hosted quests like LiteracyForge, showcasing how open collaboration can unlock high-quality data and AI for education, sustainability, and beyond.

5. Predictoor and DeFi Trading

About Predictoor. In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. The “earn $” part is key, because it fosters usage.

Predictoor involves two groups of people:

- Predictoors: data scientists who use AI models to predict what the price of ETH, BTC, etc. will be 5 (or 60) minutes into the future. They run bots that submit these predictions on-chain every 5 minutes. Predictoors earn $ based on sales of the feeds, including sales from Ocean’s Data Farming incentives program.
- Traders: run bots that take predictoors’ aggregated predictions as input, to use as alpha in trading. It’s another edge for making $ while trading.
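The predict-and-submit loop described above can be sketched as follows. This is illustrative only: `predict_up_or_down()` and `submit_prediction()` are hypothetical stand-ins, not the actual pdr-backend API.

```python
# Illustrative predictoor-style flow: each 5-minute epoch, predict whether
# the price goes up or down, then (on the real network) submit on-chain.
# Both functions below are hypothetical stand-ins for the real tooling.
def predict_up_or_down(recent_prices: list) -> bool:
    """Toy momentum model: predict 'up' if the last move was up."""
    return recent_prices[-1] > recent_prices[-2]

def submit_prediction(feed: str, epoch: int, up: bool, stake: float) -> dict:
    """Stand-in for the on-chain submission a real bot would make."""
    return {"feed": feed, "epoch": epoch, "up": up, "stake": stake}

prices = [67250.0, 67310.0, 67295.0, 67340.0]  # made-up recent candles
tx = submit_prediction("BTC/USDT-5m", epoch=1042,
                       up=predict_up_or_down(prices), stake=10.0)
print(tx)
```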

Predictoor is built using the Ocean stack. And, it runs on Oasis Sapphire; we’ve partnered with the Oasis team.

Predictoor traction. Since mainnet launch in October 2023, Predictoor has accumulated about $2B total volume. [Source: DappRadar]. Furthermore, in spring 2025, our collaborators at Oasis launched WT3, a decentralized, verifiable trading agent that uses Predictoor feeds for its alpha.

Predictoor past, present, future. After the Predictoor product and rewards program launched in fall 2023, the next major goal was for traders to make serious $. If that goal is met, traders will spend $ to buy feeds, which in turn means serious $ for predictoors. The Predictoor team worked toward this primary goal throughout 2024, testing trading strategies with real $. Bonus side effects were improved analytics and tooling.

Obviously “make $ trading” is not an easy task. It’s a grind taking skill and perseverance. The team has ratcheted, inching ever-closer to making money. Starting in early 2025, the live trading algorithms started to bear fruit. The team’s 2025 plan was — and is — keep grinding, towards the goal “make serious $ trading”. It’s going well enough that there is work towards a spinoff. We can expect trading work to be the main progress in Predictoor throughout 2025. Everything else in Predictoor and related will follow.

6. bci/acc: accelerate brain-computer interfaces towards human superintelligence

Another stream in Ocean has been taking form: bci/acc. Ocean co-founder Trent McConaghy first gave a talk on bci/acc at NASA in Oct 2023, and published a seminal blog post on it a couple months later. Since then, he’s given 10+ invited talks and podcasts, including Consensus 2025 and Web3 Summit 2025.

bci/acc thesis. AI will likely reach superintelligence in the next 2–10 years. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We’ll need BCI killer apps like silent messaging to create market demand, which in turn drive BCI device evolution. The net result is human superintelligence.

Ocean bci/acc team. In January 2025, Ocean assembled a small research team to pursue bci/acc, with the goal to create BCI killer apps that it can take to market. The team has been building towards this ever since: working with state-of-the-art BCI devices, constructing AI-data pipelines, and running data-gathering experiments. Ocean-style decentralized access control will play a role, as neural signals are perhaps the most private data of all: “not your keys, not your thoughts”. In line with Ocean culture and practice, we look forward to sharing more details once the project has progressed to tangible utility for target users.

7. Conclusion

2025 has been a year of turning vision into practice. From Predictoor’s trading traction and Ocean Nodes’ push into a GPU-powered Phase 2 to the launch of the Annotators Hub, with ecosystem projects like Lunor driving community-led AI forward, the pieces of the Ocean vision are falling into place.

The focus is clear for the Ocean core team in Q4: scale, usability, and adoption. Thanks for being part of it. The best is yet to come.

Ocean Protocol: Q4 2025 Update was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

A New Chapter for ONG: Governance Vote on Tokenomics Adjustment

Ontology’s token economy has always been designed to evolve alongside the network. This week, that evolution takes another step forward. A new governance proposal has been initiated by an Ontology Consensus Node, calling on all Triones nodes to vote on an update to ONG tokenomics. The update aims to strengthen the foundation for long-term sustainability and fairer incentives across the ecosystem.


Voting will take place on OWallet from October 28, 2025 (00:00 UTC) through October 31, 2025 (00:00 UTC).

Understanding the Current Model

Let’s start with where things stand today.

- Total ONG Supply: 1 billion
- Total Released: ≈ 450 million (≈ 430 million circulating)
- Annual Release: ≈ 31.5 million ONG
- Release Curve: All ONG unlocked over 18 years. The remaining 11 years follow a mixed release pace: 1 ONG per second for 6 years, then 2, 2, 2, 3, and 3 ONG per second in the final 5 years.

Currently, both unlocked ONG and transaction fees flow back to ONT stakers as incentives, generating an annual percentage rate of roughly 23 percent at current prices.

What the Proposal Changes

The new proposal suggests several key adjustments to rebalance distribution and align long-term incentives:

- Cap the total ONG supply at 800 million.
- Lock ONT and ONG equivalent to 100 million ONG in value, effectively removing them from circulation.
- Strengthen staker rewards and ecosystem growth by making the release schedule steadier and liquidity more sustainable.

Implementation Plan

1. Adjust the ONG Release Curve

- Total supply capped at 800 million.
- Release period extended from 18 to 19 years.
- Maintain a 1 ONG per second release rate for the remaining years.

2. Allocation of Released ONG

- 80 percent directed to ONT staking incentives.
- 20 percent, plus transaction fees, contributed to ecosystem liquidity.

3. Swap Mechanism

- Use ONG to acquire ONT within a defined fluctuation range.
- Pair the two tokens to create liquidity and receive LP tokens.
- Burn the LP tokens to permanently lock both ONG and ONT, tightening circulating supply.

Community Q & A

Q1. How long will the ONT + ONG (worth 100 million ONG) be locked?

It’s a permanent lock.

Q2. Why does the total ONG supply decrease while the release period increases?

Under the current model, release speeds up in later years. This proposal keeps the rate fixed at 1 ONG per second, so fewer tokens are released overall but over a slightly longer span — about 19 years in total.
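A back-of-the-envelope check, using the approximate figures quoted earlier (≈450 million ONG already released; 1 ONG per second is roughly 31.5 million ONG per year), shows why a fixed rate yields fewer tokens over a longer span:

```python
# Rough sanity check of the two release curves (all figures approximate,
# taken from the proposal summary above).
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000
released = 450e6                     # ONG already released

# Current curve, remaining 11 years: 1 ONG/sec for 6 years, then 2,2,2,3,3.
rates = [1] * 6 + [2, 2, 2, 3, 3]
current_total = released + sum(rates) * SECONDS_PER_YEAR
print(f"current curve ends near {current_total / 1e9:.2f}B ONG")  # near the 1B cap

# Proposed curve: fixed 1 ONG/sec until the 800M cap is reached.
remaining_years = (800e6 - released) / SECONDS_PER_YEAR
print(f"about {remaining_years:.1f} more years at 1 ONG/sec")
```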

Q3. Will this affect ONT staking APY?

It may, but not necessarily negatively. While staking rewards in ONG drop 20 percent, APY depends on market prices of ONT and ONG. If ONG appreciates as expected, overall returns could remain steady or even rise.
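The trade-off can be illustrated with toy numbers (made up for illustration): a 20 percent cut in ONG-denominated rewards is offset if ONG appreciates about 25 percent.

```python
# Toy arithmetic for the APY point above; all numbers are illustrative.
rewards_before, rewards_after = 100.0, 80.0   # ONG per period, a 20% cut
price_before = 1.00                           # assumed ONG price
breakeven_price = rewards_before * price_before / rewards_after
print(breakeven_price)  # 1.25: a 25% appreciation keeps reward value flat
```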

Q4. How does this help the Ontology ecosystem?

Capping supply at 800 million and permanently locking 100 million ONG will make ONG scarcer. With part of the released ONG continuously swapped for ONT to support DEX liquidity, the effective circulating supply may fall to around 750 million. That scarcity, paired with new products consuming ONG, could strengthen price dynamics and promote sustainable network growth. More on-chain activity would also mean stronger rewards for stakers.

Q5. Who can vote, and how?

All Triones nodes have the right to vote through OWallet during the official voting window.

Why It Matters

This isn’t just a supply adjustment. It’s a structural change designed to balance reward distribution, liquidity, and governance in a way that benefits both the Ontology network and its long-term participants.

Every vote counts. By joining this governance round, Triones nodes have a direct hand in shaping how value flows through the Ontology ecosystem — not just for today’s staking cycle, but for the years of decentralized growth ahead.

A New Chapter for ONG: Governance Vote on Tokenomics Adjustment was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Herond Browser

Dogecoin Price Prediction: Experts Predict Next Target for DOGE

This article aggregates the most reliable Dogecoin Price Prediction and DOGE Target analysis. The post Dogecoin Price Prediction: Experts Predict Next Target for DOGE appeared first on Herond Blog.

Dogecoin (DOGE) remains the leading meme coin, consistently dominating charts for volume and volatility. As recent surges and consolidation continue to draw global attention, the investor community is hunting for the key question: What is DOGE’s next target? This article aggregates the most reliable Dogecoin Price Prediction and DOGE Target analysis, distilled from top technical analysts and crypto experts to give you clear insights and sharp trading strategies.

Dogecoin Fundamentals and Recent Performance

A. Dogecoin Price Prediction – DOGE’s Unconventional Legacy

To predict where Dogecoin (DOGE) is headed, you must first understand its unconventional legacy. What began as a mere internet joke has endured through its inflationary supply mechanic and a unique community factor powered by social media virality and endorsements. Unlike deflationary digital assets, DOGE’s defining traits of origin, supply, and culture give it a distinctive market behavior that sets it apart from other cryptocurrencies.

B. Dogecoin Price Prediction – Recent Market Action

The last 90 days have been crucial for Dogecoin, defining the current market structure. Our analysis dives deep into this period, identifying the critical Support and Resistance levels that have dictated DOGE’s trading range. Importantly, we dissect the undeniable influence of broader market trends, showing how both the movement of Bitcoin and general crypto sentiment continue to shape DOGE’s volatility and its immediate price trajectory.

Expert Price Targets: Where Is DOGE Headed Next?

Dogecoin Price Prediction – Short-Term Forecast (Q4 2025)

Analyst 1’s Technical View

Our first short-term forecast for Q4 2025 is grounded in detailed technical analysis. Analyst 1 focuses specifically on established trading patterns, such as the convergence of moving averages (MAs) or the resolution of recent consolidation triangles. This view provides a foundational, data-driven perspective on DOGE’s immediate price behavior and likely direction.

Target 1 (Conservative Price Floor)

The conservative forecast sets Target 1 as the most likely price floor for the quarter. This level represents the strongest accumulation zone, suggesting where sustained buying pressure and key support are expected to prevent further downside. It serves as a crucial, low-risk benchmark for cautious investors.

Target 2 (Breakout Level)

The breakout forecast establishes Target 2, the price level anticipated upon decisively breaking immediate resistance. Reaching this point would signal a significant shift in market momentum, driven by increased volume and positive sentiment. This target is essential for investors planning to capitalize on a major upward movement.

Dogecoin Price Prediction – Mid-Term Outlook (12-18 Months)

Analyst 2’s Event-Driven View

Our mid-term outlook relies on an event-driven view. Analyst 2 bypasses pure pattern analysis to focus on high-impact market catalysts, such as the predicted market cycle peak or potential major platform integrations. This approach provides a crucial forward-looking perspective, anticipating price swings based on verifiable external events rather than historical chart behavior.

The Crucial ATH Re-test Level

For long-term holders, the most critical number is the Crucial Level needed to re-test the previous All-Time High (ATH). Reaching this price point signals a powerful, structural shift in market sentiment and confirms a return to maximum bullish momentum. This target represents the ultimate goal for the current cycle.

Dogecoin Price Prediction – Long-Term Potential (2027 and Beyond)

Utility Adoption and Ecosystem Growth

Long-term price predictions are fundamentally rooted in the growth of utility adoption. Specifically, we analyze the influence of real-world use cases, such as increased usage on X/Twitter, to gauge DOGE’s functional value beyond mere speculation. This focus on utility is key to determining sustainable demand and its true price potential over the coming years.

Overall Crypto Market Maturity

The other critical factor is the overall crypto market maturity. As institutional money enters the space and regulatory frameworks become clearer, the market gains stability. This maturation creates a much stronger foundation, meaning DOGE’s long-term trajectory will increasingly be linked to the systemic strength and sustained development of the entire digital asset ecosystem.

Technical Indicators Driving DOGE’s Trajectory

Key Technical Signals

RSI (Relative Strength Index)

Our technical analysis begins with the RSI (Relative Strength Index) to gauge market momentum. This indicator reveals DOGE’s current state: whether it is Overbought, Oversold, or currently trading Neutral. Understanding the RSI is vital, as it flags potential short-term reversals and helps identify periods where price action may be due for a sharp correction or bounce.

MACD (Moving Average Convergence Divergence)

Next, we scrutinize the MACD (Moving Average Convergence Divergence), a key momentum indicator that forecasts future price direction. We look specifically for a potential bullish or bearish cross between the MACD line and the signal line. Confirmation of such a cross provides a powerful technical signal for traders, often preceding significant shifts in DOGE’s trend.
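The MACD cross described above can be sketched in a few lines. This is a minimal, self-contained illustration using the conventional 12/26/9 EMA parameters; the price series is made up.

```python
# Minimal MACD sketch: a bullish cross is when the MACD line moves above
# its signal line. Parameters 12/26/9 are the conventional defaults.
def ema(values, span):
    """Exponential moving average with smoothing 2/(span+1)."""
    alpha = 2 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    fast_e, slow_e = ema(prices, fast), ema(prices, slow)
    macd_line = [f - s for f, s in zip(fast_e, slow_e)]
    signal_line = ema(macd_line, signal)
    return macd_line, signal_line

prices = [0.10 + 0.001 * i for i in range(60)]  # steadily rising toy series
m, s = macd(prices)
print("bullish" if m[-1] > s[-1] else "bearish")
```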

Volume and Liquidity

Trading volume and liquidity are critical factors used to validate any significant price action. High trading volume is essential for confirming a trend breakout or reversal, as it demonstrates strong market conviction behind the move. Without substantial volume, a price change can easily be dismissed as noise, meaning liquidity acts as the necessary fuel to sustain any major shift in DOGE’s trajectory.

Major Catalysts and Risk Factors

Tailwinds (Factors That Could Send DOGE Higher)

The X/Twitter Effect

The influence of Elon Musk remains a primary external factor for Dogecoin’s volatility. Our forecast rigorously tracks for any potential further integrations or direct endorsements originating from him or the X/Twitter platform. These events historically trigger significant, sudden price movements, making any mention of DOGE’s utility or adoption on the social media giant a critical market signal.

Development Updates

Long-term viability depends heavily on technical progress. We provide essential updates on core development, specifically monitoring progress on Dogecoin Core upgrades and the implementation of layer-2 solutions. These updates are vital, as improved efficiency and functionality are necessary to enhance DOGE’s utility and attract broader ecosystem adoption.

Exchange Listings & Accessibility

Increased exchange listings and accessibility are key indicators of mainstream acceptance. We analyze any new major listings on top-tier global exchanges or significant adoption by financial institutions. Such developments dramatically increase liquidity and exposure, opening DOGE to wider investor pools and strengthening its perceived market legitimacy.

Headwinds (Risks That Could Drag DOGE Down)

Meme Coin Volatility

The primary risk in the DOGE market remains its inherent meme coin volatility. Driven heavily by social sentiment rather than tangible utility, DOGE is prone to extreme price swings, characterized by sudden pumps and unpredictable dumps. This high-risk profile means traders must maintain extreme caution and strictly manage capital to navigate the asset’s fundamentally unstable price behavior.

Regulatory Uncertainty

A persistent long-term challenge is the global specter of regulatory uncertainty. Stricter enforcement or new classifications of cryptocurrencies, especially those without clear utility, could significantly impact market sentiment and accessibility. The potential impacts from global crypto regulation pose a distinct systemic risk that must be factored into any long-term Dogecoin forecast.

Conclusion

Ultimately, predicting the next move for Dogecoin (DOGE) requires balancing its unconventional, community-driven legacy with hard technical data. While Support and Resistance levels provide crucial short-term benchmarks, the long-term trajectory hinges on increasing utility adoption (like on X/Twitter) and the overall crypto market maturity. Remember that while experts provide targets, DOGE’s volatility demands caution. Use the conservative price floor for risk management and watch the breakout levels for confirmation of a new cycle. Stay informed on technical signals and external catalysts to seize the opportunities ahead.

About Herond

Herond Browser is a cutting-edge Web 3.0 browser designed to prioritize user privacy and security. By blocking intrusive ads, harmful trackers, and profiling cookies, Herond creates a safer and faster browsing experience while minimizing data consumption.

To enhance user control over their digital presence, Herond offers two essential tools:

- Herond Shield: A robust adblocker and privacy protection suite.
- Herond Wallet: A secure, multi-chain, non-custodial social wallet.

As a pioneering Web 2.5 solution, Herond is paving the way for mass Web 3.0 adoption by providing a seamless transition for users while upholding the core principles of decentralization and user ownership.

Have any questions or suggestions? Contact us:

- On Telegram: https://t.me/herond_browser
- DM our official X: @HerondBrowser
- Technical support topic on https://community.herond.org

The post Dogecoin Price Prediction: Experts Predict Next Target for DOGE appeared first on Herond Blog.



Elliptic

Elliptic submits recommendations to US Treasury on ways to fight crypto crime

In August 2025, the US Department of the Treasury issued a request for comment on innovative methods to detect illicit activity involving digital assets. Treasury specifically sought input on four key technologies:



PingTalk

Ping YOUniverse 2025: Resilient Trust in Motion

Ping YOUniverse 2025 traveled to Sydney, Melbourne, Singapore, Jakarta, Austin, London, and Amsterdam. Read the highlights of our global conference, and see how identity, AI, and Resilient Trust took center stage.

Identity is moving fast: AI agents, new fraud patterns, and tightening regulations are reshaping the identity landscape under our feet. At Ping YOUniverse 2025, thousands of identity leaders, customers, and partners came together to confront this dramatic shift.


We compared notes on what matters now:


Stopping account takeover without killing conversion, so security doesn’t tax your revenue engine.

Orchestrating trust signals across apps and partners, so decisions get smarter everywhere.

Shrinking risk and cost with just‑in‑time access, so the right access appears—and disappears—on demand.


This recap distills the most useful takeaways for you: real-world use cases, technical demos within our very own Trust Lab, and deep-dive presentations from partners like Deloitte, AWS, ProofID, Midships, Versent, and more—plus guest keynotes from Former Secretary General of Interpol, Dr. Jürgen Stock and cybersecurity futurist, Heather Vescent. And it’s unified by a single theme: Resilient Trust isn’t a moment. It’s a mindset.



FastID

A Smarter ACME Challenge for a Multi-CDN World

Optimize your multi-CDN setup with Fastly's new dns-account-01 ACME challenge. Eliminate label collisions and enhance certificate management.
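The collision-avoidance idea behind dns-account-01 is that each ACME account derives its own DNS record name from a hash of the account URL, so multiple CDNs validating the same domain no longer contend for a single `_acme-challenge` label. A sketch of the label derivation, based on the IETF draft (exact details may differ from Fastly's implementation, and the account URL below is made up):

```python
# Sketch of the account-scoped label from the dns-account-01 ACME draft:
# label = "_" + lowercase base32 of the first 10 bytes of SHA-256(account URL).
# Illustrative only; consult the current IETF draft / Fastly docs for specifics.
import base64
import hashlib

def dns_account_label(account_url: str) -> str:
    digest = hashlib.sha256(account_url.encode()).digest()[:10]
    return "_" + base64.b32encode(digest).decode().lower()

label = dns_account_label("https://example.com/acme/acct/1234")  # hypothetical
print(f"{label}._acme-challenge.example.org")
```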

Tuesday, 28. October 2025

SC Media - Identity and Access

Water Saci SORVEPOTEL backdoor self-propagates through WhatsApp contacts

A novel email-based system is used to retrieve commands from the attacker’s C2 server.



From vaults to APIs: The new era of privileged access and identity security

API-led PAM is about embedding access control directly into workflows and applications, enabling Just-in-Time (JIT) access without compromising agility.



Spruce Systems

Digital Wallet Certification: The Foundation for Interoperable State Identity Systems

To build trust, protect privacy, and enable true interoperability, states must establish a certification program for digital wallets and issuers that enforces technical safeguards, statutory principles, and vendor accountability from the start.

As states move toward private, interoperable, and resident-controlled digital identity systems, certification of wallets and issuers becomes a cornerstone of trust. Certification doesn’t just validate technical conformance; it enforces privacy, supports procurement flexibility, and enables multiple vendors to participate under a consistent trust framework. This blog post outlines recommendations to meet these goals, with statutory guardrails and governance practices built in from the start.

The Case for Certification

We believe that states should require certification of Digital Wallets that are capable of holding a state digital identity. Certification provides assurance that wallet providers comply with key requirements such as privacy preservation, unlinkability, minimal disclosure, and security of key material, which are enforced by design.

SpruceID believes that additional legislation should be enacted to establish a formal certification program for wallets, issuers, and potentially verifiers participating in a state digital identity ecosystem. The legislation should specify that the designated regulating entity may conduct audits and certify providers directly, or delegate certification responsibilities to qualified external organizations, provided such delegation is formally approved by the appropriate higher authority.

Enforcing Privacy and Minimization

A certification program would mandate compliance with privacy-preserving technical standards, restrict verifiers from requesting or storing more information than is legally required, and require wallets to obtain clear user consent before transmitting credential data. Wallets would also need to provide plain-language explanations of how data is used in each transaction. By creating a statutory basis for certification and oversight, states can ensure that unlinkability and data minimization are not just principles, but enforceable requirements with technological and governance safeguards.

Pilot Programs to Support Innovation

We recommend that states enact a pilot program allowing provisional, limited, and expiring operating approvals of issuers, wallets, and verifiers, preceding the establishment and full operation of its formal certification programs, for the purpose of encouraging market solutions to operate in real-world environments and generate learnings. The appropriate oversight agency would be able to adapt the resulting learnings towards creating the formal certification programs. Today’s best practice in the software industry is an iterative, “agile” approach to implementation, and we believe the same approach suits certification programs: engage industry early and often in a limited operating capacity, rather than attempting to fully specify rules a priori, which risks producing rules that become irrelevant without perfect foreknowledge.

Clarifying Responsibilities Across the Ecosystem

Clear allocation of liability and responsibility is essential for the trust and sustainability of any state digital identity program. A state's role is to establish statutory guardrails, oversee governance, and authorize a certification framework that ensures all ecosystem participants meet consistent standards. This includes creating a certification program for both digital wallet providers and credential issuers, verifying that they comply with statutory principles for privacy, unlinkability, minimal disclosure, and security.

Wallet Provider Responsibilities

Digital wallet providers bear responsibility for ensuring acceptable security mechanisms, proper user consent, presenting clear and plain-language disclosures which meet accessibility requirements, and ensuring features like personal data licenses and privacy-protective technical standards are honored in practice. Certified wallets must also support recovery mechanisms for key loss or compromise, ensuring that holders are not permanently locked out of identity credentials due to technical failures. Digital wallet providers should coordinate with issuers, designing solutions which anticipate that wallets and keys will be lost, stolen, and compromised.

Issuer Responsibilities

Issuers are responsible for creating a strong operational environment that ensures the accuracy of the attributes they release, and for maintaining correct and untampered authoritative source records. They are also responsible for ensuring that state digital identity credentials are issued to the correct holders, and to any acceptable wallets, free of unreasonable delay, burden, or fees. They must provide accessibility to holders, such as providing workable paths for holders who lose their credentials, wallets, and/or keys. Their certification ensures that state digital identity credentials are issued only under audited processes that meet required levels of identity proofing and revocation safeguards.

Legislating Technical Safeguards and Liability

In addition, states should require certification of wallets against a published state digital identity protection profile and create clear liability rules. Legislation should establish that wallet providers are responsible for implementing technical safeguards, that Holders maintain control over disclosure decisions, and that verifiers may only request attributes that are legally permitted. By legislating these aspects, states will ensure that residents can trust any certified wallet to uphold their rights, while fostering a competitive ecosystem of providers who innovate on usability and design within a consistent regulatory baseline.

Enabling Interoperability and Competition

Certification also creates a mechanism for interoperability and trust across the ecosystem. By publishing a clear “state digital identity Wallet Protection Profile” and certifying wallets against it, states can ensure that wallets from different vendors operate consistently while still allowing for competition and innovation.

Building Public Confidence Through Transparency

Finally, certification helps build public confidence. Residents will know that any wallet bearing a certification mark has been independently tested and approved to uphold privacy and prevent surveillance, while verifiers will know they can safely interact with those wallets. At the same time, states should keep certification processes lightweight and transparent to avoid excluding smaller vendors, ensuring that certification supports security and privacy without stifling innovation.

Establishing the Guardrails of a Trusted Ecosystem

Certification is more than a checkbox: it's how we turn principles like unlinkability and minimal disclosure into an enforceable reality. By embedding privacy protections in wallet and issuer certification, states can foster innovation without compromising trust. The foundation for interoperable, people-first digital identity isn’t a single app or provider; it’s a standards-aligned ecosystem, governed responsibly and built to last.

SpruceID works with governments and standards bodies to build privacy-preserving, interoperable infrastructure designed for public trust from day one. Contact us to start the conversation.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.


Ontology

Identity in the Age of AI

What does this mean?

Identity, privacy, and AI are colliding fast. In this community conversation, builders and advocates examined who should own identity online, how to protect privacy, and how AI agents change the trust model for everything we do on the internet.

Read the full post

Featured speakers

Humpty — long-time contributor and advocate of decentralized identity and privacy
Geoff — veteran ecosystem builder and Head of Community at Ontology
Barnabas — grassroots organizer driving Web3 education and adoption across Africa

Five core takeaways

Ownership and agency come first
Web3 should let people own their identity and control what they share. Identity is not a wallet address. It is a richer record that reflects consent and context.

“You are in control of your data, and you get to choose what you want people to see.” — Barnabas

Privacy with portability
Identity must work across apps and chains while preserving privacy. Single-chain IDs limit users.

“Portable identity should not work only on one chain.” — Humpty

Design for everyone
Education and simple UX are essential so new users can participate without feeling overwhelmed.

“Removing barriers is essential to building community.” — Geoff

AI needs attribution and reputation
As AI agents multiply, we must evaluate outputs and the credibility of agents and their builders.

“We need attribution to know if a result is good, outdated, or hallucinated.” — Humpty

A builder’s opening
There is real opportunity to launch AI apps and agents with verifiable identity and reputation that users can trust.

“Start thinking about how you can develop those AI apps to launch in the marketplace.” — Geoff
Bigger picture

Identity is becoming shared infrastructure. It underpins privacy, enables reputation, and helps us decide which people or agents to trust. As AI agents begin to outnumber humans online, transparent identity and reputation will guide safe participation for everyone.

TL;DR

User-owned identity must be private and portable. Education and simple UX bring people in. AI raises the stakes for attribution and reputation, which is a clear opportunity for builders to ship trustworthy agents tied to real user intent.

Read the full post

Related reading

Explore ONT ID and decentralized reputation. ont.id
Who Really Owns Web3’s Data? 7 Questions for the Community. ont.io

Identity in the Age of AI was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Elliptic provides Circle's Arc testnet with blockchain analytics

Elliptic is excited to announce that it has joined Circle's Arc testnet as an infrastructure participant, expanding our long-standing partnership with Circle to their new blockchain network.



Thales Group

Thales unveils space surveillance radar AURORE - unique in Europe

28 Oct 2025

As part of the ARES program (Action and Space Resilience), Thales has been awarded a contract by the French Defence Procurement Agency (DGA) to develop, deliver and deploy a new ground-based space surveillance radar system. Called AURORE, it will monitor satellites and debris in low Earth orbit from the ground. This ground-breaking radar system provides continuous monitoring and simultaneous tracking of multiple space objects, and will strengthen French space situational awareness capabilities. AURORE will be the largest surveillance radar deployed in Europe.

In the context of the increasing militarization of space, this decision marks an important milestone for French and European sovereignty, providing unprecedented detection capabilities to enhance military surveillance of activities in low orbit. Designed and manufactured at Thales’ Limours site, the AURORE radar also benefits from the expertise gained through partnerships with several French SMEs.

AURORE, a new solution for space monitoring and situational assessment, chosen by France © Thales

As space operations are challenged by a substantial increase in threats, from military activity to space debris, the ability to identify and track multiple small objects in space in real time makes all the difference for space sovereignty and protecting the skies.

AURORE is a software-defined radar operating in the Ultra High Frequency (UHF) band. It will provide continuous surveillance and simultaneous tracking of numerous space objects, with rapid responsiveness for low Earth orbit, and generate a high-resolution picture of the space environment in real time.

The modularity of its architecture will form the backbone of a comprehensive roadmap aimed at expanding the product portfolio with a new family of UHF radars capable of meeting the needs of multiple critical missions, enabling protection against emerging ballistic and hypersonic threats.

“With AURORE, the only radar of its kind in Europe, Thales is contributing to French sovereignty by strengthening its capabilities for monitoring the space environment in low orbits. AURORE demonstrates, once again, Thales’ leadership in the field of air and space surveillance systems,” said Patrice Caine, Chairman and Chief Executive Officer, Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.


Dock

GSMA, Telefónica Tech, TMT ID and Dock Labs collaborate to reinvent call centre authentication


We’re excited to share that Dock Labs is collaborating with GSMA, Telefónica Tech, and TMT ID on a new initiative to reinvent call centre authentication.

Here's why:

Today’s customer authentication processes often rely on knowledge-based questions or one-time passwords (OTPs). These methods are time-consuming, typically taking between 30 and 90 seconds, and can be vulnerable to SIM swap attacks, phishing, Caller Line Identification (CLI) spoofing and data breaches. 

On top of that, they frequently require customers to disclose personal information to call center agents, creating privacy and compliance risks for organisations.

To address these issues, the group has initiated a Proof of Concept (PoC) to explore a new, privacy-preserving model of caller authentication that is faster, more secure, and user-friendly.
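One way such a model could work, sketched below with invented names and a symmetric HMAC standing in for the real cryptography (a production system would use public-key signatures and is not the PoC's actual design), is for the caller's device to present a short-lived signed assertion instead of the caller reciting personal details to an agent:

```python
# Illustrative only: the caller's device presents a signed, minimal assertion
# ("this account is the customer's"), which the call centre verifies without
# the agent ever seeing personal data.
import hmac, hashlib, json, time

ISSUER_KEY = b"demo-issuer-key"  # stands in for the operator's signing key

def issue_assertion(account_ref: str) -> dict:
    payload = {"account_ref": account_ref, "iat": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_assertion(assertion: dict, max_age_s: int = 120) -> bool:
    body = json.dumps(assertion["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    fresh = time.time() - assertion["payload"]["iat"] <= max_age_s
    return hmac.compare_digest(expected, assertion["sig"]) and fresh

a = issue_assertion("acct-123")
print(verify_assertion(a))  # True
a["payload"]["account_ref"] = "acct-999"  # tampering is detected
print(verify_assertion(a))  # False
```

Verification here takes milliseconds rather than the 30 to 90 seconds of knowledge-based questions, and a stolen or replayed assertion expires quickly because of the freshness check.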


Spherical Cow Consulting

Can Standards Survive Trade Wars and Sovereignty Battles?

For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all. The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.

“For decades, standards development has been anchored in the idea that the Internet is (and should be) one global network. If we could just get everyone in the room—vendors, governments, engineers, and civil society—we could hash out common rules that worked for all.”

That premise is a lovely ideal, but it no longer reflects reality. The Internet isn’t collapsing, but it is fragmenting: tariffs, digital sovereignty drives, export controls, and surveillance regimes all chip away at the illusion of universality. Standards bodies that still aim for global consensus risk paralysis. And yet, walking away from standards altogether isn’t an option.

The real question isn’t whether we still need standards. The question is how to rethink them for a world that is fractured by design.

This is the fourth of a four-part series on what the Internet will look like for the next generation of people on this planet.

First post: “The End of the Global Internet”
Second post: “Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet”
Third post: “The People Problem: How Demographics Decide the Future of the Internet”
Fourth post: [this one]


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Global internet, local rulebooks

If you look closely, today’s Internet is already less one global network than a patchwork quilt of overlapping, sometimes incompatible regimes.

Europe pushes digital sovereignty and data protection rules, with eIDAS2 and the AI Act setting global precedents.
The U.S. leans on export controls and sanctions, using access to chips and cloud services as levers of influence.
China has doubled down on domestic control, firewalling traffic and setting its own technical specs.
Africa and Latin America are building data centers and digital ID schemes to reduce dependence on foreign providers, while still trying to keep doors open for trade and investment.

Standards development bodies now live in this reality. The old model where universality was the goal and compromise was the method is harder to sustain. If everyone insists on their priorities, consensus stalls. But splintering into incompatible systems isn’t viable either. Global supply chains, cross-border research, and the resilience of communications all require at least a shared baseline.

The challenge is to define what “interoperable enough” looks like.

The cost side is getting heavier

The incentives for participation in global standards bodies used to be relatively clear: access to markets, influence over technical direction, and reputational benefits. Today, the costs of cross-border participation have gone up dramatically.

Trade wars have re-entered the picture. The U.S. has imposed sweeping tariffs on imports from China and other countries, hitting semiconductors and electronics with rates ranging from 10% to 41%. These costs ripple across supply chains. On top of tariffs, the U.S. has restricted exports of advanced chips and AI-related hardware to China. The uncertainty of licensing adds compliance overhead and forces firms to hedge.

Meanwhile, the “China + 1” strategy—where companies diversify sourcing away from over-reliance on China—comes with a hefty price tag. Logistics get more complex, shipping delays grow, and firms often hold more inventory to buffer against shocks. A 2025 study estimated these frictions alone cut industrial output by over 7% and added nearly half a percent to inflation.

And beyond tariffs or logistics, transparency and compliance laws add their own burden. The U.S. Corporate Transparency Act requires firms to disclose beneficial ownership. Germany’s Transparency Register and Norway’s Transparency Act impose similar obligations, with Norway’s rules extending to human-rights due diligence.

The result is that companies are paying more just to maintain cross-border operations. In that climate, the calculus for standards shifts. “Do we need this standard?” becomes “Is the payoff from this standard enough to justify the added cost of playing internationally?”

When standards tip the scales

The good news is that standards can offset some of these costs when they come with the right incentives.

One audit, many markets. Standards that are recognized across borders save money. If a product tested in one region is automatically accepted in another, firms avoid duplicative testing fees and time-to-market shrinks.

Case study: the European Digital Identity Wallet (EUDI). In 2024, the EU adopted a reform of the eIDAS regulation that requires all Member States to issue a European Digital Identity Wallet and mandates cross-border recognition of wallets issued by other states. The premise here is that if you can prove your identity using a wallet in France, that same credential should be accepted in Germany, Spain, or Italy without new audits or registrations.

The incentives are potentially powerful. Citizens gain convenience by using one credential for many services. Businesses reduce onboarding friction across borders, from banking to telecoms. Governments get harmonized assurance frameworks while retaining the ability to add national extensions. Yes, the implementation costs are steep—wallet rollouts, legal alignment, security reviews—but the payoff is smoother digital trade and service delivery across a whole bloc.
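A minimal sketch of that premise, with an invented trusted list and identifiers, might look like this: a relying party in any member state accepts a wallet credential if its issuer appears on the shared EU list, with no new audits or registrations per country:

```python
# Toy model of cross-border recognition. The list contents and identifier
# format are invented for illustration; the real EUDI trusted lists are
# published and maintained under the eIDAS framework.

EU_TRUSTED_ISSUERS = {
    "FR": {"did:example:fr-wallet-issuer"},
    "DE": {"did:example:de-wallet-issuer"},
}

def accepted_anywhere_in_eu(issuer_id: str) -> bool:
    """A credential from any listed issuer is valid in every member state."""
    return any(issuer_id in issuers for issuers in EU_TRUSTED_ISSUERS.values())

print(accepted_anywhere_in_eu("did:example:fr-wallet-issuer"))  # True
print(accepted_anywhere_in_eu("did:example:unknown"))           # False
```

The economic point is in the data structure: one shared list replaces twenty-seven separate bilateral recognition processes, which is exactly the "one audit, many markets" saving described above.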

Regulatory fast lanes. Governments can offer “presumption of conformity” when products follow recognized standards. That reduces legal risk and accelerates procurement cycles.

Procurement carrots. Large buyers, both public and private, increasingly bake interoperability and security standards into tenders. Compliance isn’t optional; it’s the ticket to compete.

Risk transfer. Demonstrating that you followed a recognized standard can reduce penalties after a breach or compliance failure. In practice, standards act as a form of liability insurance.

Flexibility in a fractured market. A layered approach—global minimums with regional overlays—lets companies avoid maintaining entirely separate product lines. They can ship one base product, then configure for sovereignty requirements at the edges.

When incentives aren’t enough

Of course, there are limits to how far incentives can stretch. Sometimes the costs simply outweigh the benefits.

Consider a market that imposes steep tariffs on imports while also requiring its own unique technical standards, with no recognition of external certifications. In such a case, the incentive of “one audit, many markets” collapses. Firms face a choice between duplicating compliance efforts, forking product lines, or withdrawing from the market entirely.

Similarly, rules of origin can blunt the value of global standards. Even if a product complies technically, it may still fail to qualify for preferential access if its components are sourced from disfavored regions. Political volatility adds another layer of uncertainty. The back-and-forth implementation of the U.S. Corporate Transparency Act illustrates how compliance obligations can change rapidly, leaving firms unable to plan long-term around standards incentives.

These realities underline a hard truth: incentives alone cannot overcome every cost. Standards must be paired with trade policies, recognition agreements, and regulatory stability if they are to deliver meaningful relief. Technology is not enough.

How standards bodies must adapt

It’s easy enough to say “standards still matter.” What’s harder is figuring out how the institutions that make those standards need to change. The pressures of a fractured Internet aren’t just technical. They’re geopolitical, economic, and regulatory. That means standards bodies can’t keep doing business as usual. They need to adapt on two fronts: process and scope.

Process: speed, modularity, and incentives

The traditional model of consensus-driven standards development assumes time and patience are plentiful. Groups grind away until they’ve achieved broad agreement. In today’s climate, that often translates to deadlock. Standards bodies need to recalibrate toward a “minimum viable consensus” that offers enough agreement to set a global baseline, even if some regions add overlays later.

Speed also matters. When tariffs or export controls can be announced on a Friday and reshape supply chains by Monday, five-year standards cycles are untenable. Bodies need mechanisms for lighter-weight deliverables: profiles, living documents, and updates that track closer to regulatory timelines.

And then there’s participation. Costs to attend international meetings are rising, both financially and politically. Without intervention, only the biggest vendors and wealthiest governments will show up. That’s why initiatives like the U.S. Enduring Security Framework explicitly recommend funding travel, streamlining visa access, and rotating meetings to more accessible locations. If the goal is to keep global baselines legitimate, the doors have to stay open to more than a handful of actors.

Scope: from universality to layering

Just as important as process is deciding what actually belongs in a global standard. The instinct to solve every problem universally is no longer realistic. Instead, standards bodies need to embrace layering. At the global level, focus on the minimums: secure routing, baseline cryptography, credential formats. At the regional level, let overlays handle sovereignty concerns like privacy, lawful access, or labor requirements.

This shift also means expanding scope beyond “pure technology.” Standards aren’t just about APIs and message formats anymore; they’re tied directly to procurement, liability, and compliance. If a standard can’t be mapped to how companies get through audits or how governments accept certifications, it won’t lower costs enough to be worth the trouble.

Finally, standards bodies must move closer to deployment. A glossy PDF isn’t sufficient if it doesn’t include reference implementations, test suites, and certification paths. Companies need ways to prove compliance that regulators and markets will accept. Otherwise, the promise of “interoperability” remains theoretical while costs keep mounting.

The balance

So is it process or scope? The answer is both. Process has to get faster, more modular, and more inclusive. Scope has to narrow to what can truly be global while expanding to reflect regulatory and economic realities. Miss one side of the equation, and the other can’t carry the weight. Get them both right, and standards bodies can still provide the bridges we desperately need in a fractured world.

A layered model for fractured times

So what might a sustainable approach look like? I expect the future will feature layered models rather than a universal one.

At the bottom of this new stack are the baseline standards for secure software development, routing, and digital credential formats. These don’t attempt to satisfy every national priority, but they keep the infrastructure interoperable enough to enable trade, communication, and research.

On top of that baseline are regional overlays. These extensions allow regions to encode sovereignty priorities, such as privacy protections in Europe, lawful access in the U.S., or data localization requirements in parts of Asia. The overlays are where politics and local control find their expression.

This design isn’t neat or elegant. But it’s pragmatic. The key is ensuring that overlays don’t erode the global baseline. The European Digital Identity Wallet is a good example: the baseline is cross-border recognition across EU states, while national governments can still add extensions that reflect their specific needs. The balance isn’t perfect, but it shows how interoperability and sovereignty can coexist if the model is layered thoughtfully.
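The baseline-plus-overlay idea can be sketched as a toy profile merge (field names invented), where overlays may add requirements or tighten numeric minimums but never weaken the global floor:

```python
# Toy model of "global baseline + regional overlay". Overlays can add
# region-specific fields or raise numeric minimums, but never relax them.

GLOBAL_BASELINE = {"min_key_bits": 2048, "tls": "1.2", "audit_log": True}

def apply_overlay(baseline: dict, overlay: dict) -> dict:
    profile = dict(baseline)
    for key, value in overlay.items():
        if key in baseline and isinstance(value, (int, float)):
            # Numeric requirements may only get stricter than the baseline.
            profile[key] = max(baseline[key], value)
        else:
            profile[key] = value  # new, region-specific requirement
    return profile

eu_overlay = {"min_key_bits": 3072, "data_residency": "EU"}
print(apply_overlay(GLOBAL_BASELINE, eu_overlay))
# {'min_key_bits': 3072, 'tls': '1.2', 'audit_log': True, 'data_residency': 'EU'}
```

The merge rule is the governance question in miniature: who decides which direction counts as "stricter," and how do you stop an overlay from quietly eroding the baseline it sits on.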

What happens if standards fail

It’s tempting to imagine that if standards bodies stall, the market will simply route around them. But the reality of a fractured Internet is far messier. Without viable global baselines, companies retreat into regional silos, and the costs of compliance multiply. This section is the stick to go with the carrots of incentives.

If standards fail, cross-border trade slows as every shipment of software or hardware has to be retested for each jurisdiction. Innovation fragments as developers build for narrow markets instead of global ones, losing economies of scale. Security weakens as incompatible implementations open new cracks for attackers. And perhaps most damaging, trust erodes: governments stop believing that interoperable solutions can respect sovereignty, while enterprises stop believing that global participation is worth the cost.

The likely outcome is not resilience, but duplication and waste. Firms will maintain redundant product lines, governments will fund overlapping infrastructures, and users will pay the bill in the form of higher prices and poorer services. The Internet won’t collapse, but it will harden into a collection of barely connected islands.

That’s why standards bodies cannot afford to drift. The choice isn’t between universal consensus and nothing. The choice is between layered, adaptable standards that keep the floor intact or a slow grind into fragmentation that makes everyone poorer and less secure.

Closing thought

The incentives versus cost tradeoff is not a side issue in standards development. It is the issue. The technical community must accept that tariffs, sovereignty, and compliance aren’t temporary distractions but structural realities.

The key question to ask about any standard today is simple: Does this make it cheaper, faster, or less risky to operate across borders? If the answer is yes, the standard has a future. If not, it risks becoming another paper artifact, while fragmentation accelerates.

Now I have a question for you: in your market, do the incentives for adopting bridge standards outweigh the mounting costs of tariffs, export controls, and compliance regimes? Or are we headed for a world where regional overlays dominate and the global floor is paper-thin?

If you’d rather get a notification when a new blog is published than hope to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

[00:00:29] Welcome back to A Digital Identity Digest.

Today, I’m asking a big question that’s especially relevant to those of us working in technical standards development:

Can standards survive trade wars and sovereignty battles?

For decades, the story of Internet standards seemed fairly simple — though never easy:
get the right people in the room, hammer out details, and eventually end up with rules that worked for everyone.

[00:00:58] The Internet was one global network, and standards reflected that vision.

[00:01:09] That story, however, is starting to fall apart.
We’re not watching the Internet collapse, but we are watching it fragment — and that fragmentation carries real consequences for how standards are made, adopted, and enforced.

[00:01:21] In this episode, we’ll explore:

Why the cost of participating in global standards has gone up
How incentives can still make standards development worthwhile
What happens when those incentives fall short
And how standards bodies need to adapt to stay relevant

[00:01:36] So, let’s dive in.

The Fragmenting Internet

[00:01:39] When the Internet first spread globally, it seemed like one big network — or at least, one big concept.

[00:01:55] But that’s not quite true anymore.

Let’s take a few regional examples.

Europe has leaned heavily into digital sovereignty, with rules like GDPR, the AI Act, and the updated eIDAS regulation. Their focus is clear: privacy and sovereignty come first.
The United States takes a different tack, using export controls and sanctions as tools of influence — with access to semiconductors and cloud services as leverage in its geopolitical strategy.
China has gone further, building its own technical standards and asserting domestic control over traffic and infrastructure.
Africa and Latin America are investing in local data centers and digital identity schemes, aiming to reduce dependency while keeping doors open for global trade and investment.

[00:02:46] When every region brings its own rulebook, global consensus doesn’t come easily.
Bodies like ISO, ITU, IETF, or W3C risk stalling out.

Yet splintering into incompatible systems is also costly:

It disrupts supply chains
Slows research collaborations
And fractures global communications

[00:03:31] So let’s start by looking at what all of this really costs.

The Rising Cost of Participation

[00:03:35] Historically, incentives for joining standards efforts were clear:

Influence technology direction
Ensure interoperability
Build goodwill as a responsible actor

[00:03:52] But that equation is changing.

Take tariffs, for example.

U.S. tariffs on imports from China and others now range from 10% to 41% on semiconductors and electronics.
Export controls restrict the flow of advanced chips, reshaping entire markets.
Companies face new costs: redesigning products, applying for licenses, and managing uncertainty.

[00:04:33] Add in supply chain rerouting — the so-called “China Plus One” strategy — and you get:

More complex logistics
Longer delays
Higher inventory buffers

Recent studies show these frictions cut industrial output by over 7% and add 0.5% to inflation.

[00:04:58] It’s not just the U.S. — tariffs are now a global trend.

Then there are transparency laws, like:

The U.S. Corporate Transparency Act
Germany’s Transparency Register
Norway’s Transparency Act, which even mandates human rights due diligence

[00:05:33] The result?
The baseline cost of cross-border operations is rising — forcing companies to ask if global standards participation is still worth it.

Why Standards Still Matter

[00:05:50] So, why bother with standards at all?

Because well-designed standards can offset many of these costs.

[00:05:56] Consider the power of recognition.
If one region accepts a product tested in another, companies save on duplicate testing and reach markets faster.

[00:06:07] A clear example is the European Digital Identity Wallet (EUDI Wallet).

In 2024, the EU updated eIDAS to:

Require each member state to issue a European Digital Identity Wallet
Mandate mutual recognition between member states

This means:

A wallet verified in France also works in Germany or Spain
Citizens gain convenience
Businesses reduce onboarding friction
Governments maintain a harmonized baseline with room for local adaptation

[00:06:56] Though rollout costs are high — covering legal alignment, wallet development, and security testing — the payoff is smoother digital trade.

Beyond recognition, strong standards also offer:

Regulatory fast lanes: Reduced legal risk when products follow recognized standards
Procurement advantages: Interoperability requirements in public tenders
Risk transfer: Accepted standards can serve as a partial defense after incidents

[00:07:34] In effect, standards can act as liability insurance.

[00:07:41] But not all incentives outweigh the costs.
When countries insist on unique local standards without mutual recognition, “one audit, many markets” collapses.

[00:08:05] Companies duplicate compliance, fork product lines, or leave markets.
Rules of origin and political volatility add further uncertainty.

[00:08:44] So yes — standards can tip the scales, but they can’t overcome every barrier.

The Changing Role of Standards Bodies

[00:08:54] Saying “standards still matter” is one thing — ensuring their institutions adapt is another.

[00:09:02] The pressures shaping today’s Internet are not just technical but geopolitical, economic, and regulatory.

That means standards bodies must evolve in two key ways:

Process adaptation
Scope adaptation

[00:09:19] The old “everyone must agree” consensus model now risks deadlock.
Bodies need to move toward a minimum viable consensus — enough agreement to set a baseline, even if regional overlays come later.

[00:09:39] Increasingly, both state and corporate actors exploit the process to delay progress.
Meanwhile, when trade policies change in months, a five-year standards cycle is useless.

[00:10:16] Standards organizations must embrace:

Lighter deliverables
Living documents
Faster updates aligned with regulatory change

[00:10:32] Participation costs are another barrier.
If only the richest governments and companies can attend meetings, legitimacy suffers.

Efforts like the U.S. Enduring Security Framework, which supports broader participation, are essential.

[00:11:10] Remote participation helps — but it’s not enough.
In-person collaboration still matters because trust is built across tables, not screens.

Rethinking Scope and Relevance

[00:11:31] Scope matters too.

Standards bodies should embrace layering:

Global level: focus on secure routing, baseline cryptography, credential formats
Regional level: handle sovereignty overlays — privacy, lawful access, labor rules

[00:11:55] Moreover, the scope must expand beyond technology to include:

Procurement
Liability
Compliance

If standards don’t reduce costs in these areas, they won’t gain traction — no matter how elegant they look in PDF form.

[00:12:12] Standards also need to move closer to deployment:

Include reference implementations
Provide test suites
Define certification paths that regulators will accept

Without these, interoperability remains theoretical while costs keep rising.

[00:12:53] Ultimately, this is both a process problem and a scope problem.
Processes must be faster and more inclusive.
Scopes must be realistic and economically relevant.

The Risk of Fragmentation

[00:13:11] Some argue that if standards bodies stall, the market will route around them.
But a fractured Internet is messy:

Cross-border trade slows under multiple testing regimes
Innovation fragments into narrow regional silos
Security weakens as incompatible implementations open new vulnerabilities

[00:13:45] And perhaps worst of all, trust erodes.
Governments lose faith in interoperability; companies question the value of participation.

[00:13:55] The outcome isn’t resilience — it’s duplication, waste, and higher costs.

[00:14:07] The Internet won’t disappear, but it risks hardening into isolated digital islands.
That’s why standards bodies can’t afford drift.

[00:14:26] The real choice is between:

Layered, adaptable standards that maintain a shared baseline
Or a slow grind into fragmentation that makes everyone poorer and less secure

Wrapping Up

[00:14:38] The incentives-versus-cost trade-off is no longer a side note in standards work — it’s the core issue.

Tariffs, sovereignty, and compliance regimes aren’t temporary distractions.
They’re structural realities shaping the future of interoperability.

[00:14:52] The key question for any new standard is:

Does this make it cheaper, faster, or less risky to operate across borders?

If yes — that standard has a future.
If no — it risks becoming another PDF gathering dust while fragmentation accelerates.

[00:15:03] With that thought — thank you for listening.

I’d love to hear your perspective:

Do incentives for adopting bridge standards outweigh the rising costs of sovereignty battles?
Or are we headed toward a world of purely regional overlays?

[00:15:37] Share your thoughts, and let’s keep this conversation going.

[00:15:48] That’s it for this week’s Digital Identity Digest.

If this episode helped clarify or inspire your thinking, please:

Share it with a friend or colleague
Connect with me on LinkedIn @hlflanagan
Subscribe and leave a rating on Apple Podcasts or wherever you listen

[00:16:00] You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep the dialogue alive.

The post Can Standards Survive Trade Wars and Sovereignty Battles? appeared first on Spherical Cow Consulting.


Ocean Protocol

Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right”

Part of Series : Dismantling False Allegations, One Claim at a Time By: Bruce Pon

Sheikh has claimed that the splitting of the Ocean community treasury across 30 wallets was somehow wrongful. He said this despite knowing that the act of splitting was entirely legitimate, as I explain below.

Source: X Spaces — Oct 9, 2025

@BubbleMaps has made this very helpful diagram to identify the flows of $FET from the Ocean community wallet (give them a follow):

https://x.com/bubblemaps/status/1980601840388723064

So, what’s the truth behind the distribution of $FET out of a single wallet and into 30 wallets?

Was it, as Sheikh claims, an ill-intentioned action to obfuscate the token flows and “dump” on the ASI community? Absolutely not.

First, it was done out of prudence. Given that a significant number of tokens were held in a single wallet, it was to reduce the risk of having the community treasury tokens hacked or otherwise vulnerable to bad actors. Clearly, spreading the tokens across 30 wallets greatly reduces the risk of their being hacked or forcefully taken compared to tokens being held in a single wallet.

Second, the spreading of the community treasury tokens across many wallets was something that Fetch and Singularity had themselves requested we do, to avoid causing problems with ETF deals which they had decided to enter into using $FET.

As presented in the previous “ASI Alliance from Ocean Perspective” blogpost, on Aug 13, 2025, Casiraghi, SingularityNET’s CFO, wrote an email to Ocean Directors, cc’ing Dr. Goertzel and Lake:

In it, he references 8 ETF deals in progress that were underway with institutional investors and the concerns that “the window — is open now” to close these deals.

Immediately after this email, Casiraghi reached out to a member of the Ocean community, explaining that such a large sum of $FET in the Ocean community wallet, which is not controlled by either Fetch or SingularityNET, would raise difficult questions from ETF issuers. Recall that Ocean did not participate in these side deals promoted by Fetch, and was often kept out of the loop, e.g. the TRNR deal.

Casiraghi requested (on behalf of Fetch and SingularityNET) that, if the $FET in the Ocean community wallet could not be frozen, arrangements be made to split the $FET tokens across multiple wallets.

Casiraghi explained that if this could be done with the $FET in the Ocean community wallet, Fetch and SingularityNET could plausibly deny the existence of a very large token holder which they had no control over. They could sweep it under the rug and avoid uncomfortable due diligence questions.

On Aug 16, 2025, David Levy of Fetch called me with the same arguments, reasoning, and plea: could Ocean obfuscate the tokens by splitting them across more wallets?

Incidentally, in this call Levy also for the first time, shared with me the details of the TRNR deal which alarmed me once I understood the implications (“TRNR” Section §12).

At this juncture, it should be recalled that the Ocean community wallet is under the control of Ocean Expeditions. The Ocean community member who spoke with Casiraghi, as well as myself, informed the Ocean Expeditions trustees of this request and reasoning. Thereafter, the Ocean Expeditions trustees decided, as an act of goodwill, to distribute the $FET across 30 wallets as requested by Fetch and SingularityNET.

Turning back to the bigger picture, as a pioneer in the blockchain space, I am obviously well aware that all token movements are absolutely transparent to the world. Any transfers are recorded immutably forever and can be traced easily by anyone with a modicum of basic knowledge. I build blockchains for a living. It is ridiculous to suggest that I or anyone in Ocean could have hoped to “conceal” tokens in this public manner.

A simple act of goodwill and cooperation that was requested by both Fetch and SingularityNET has instead been deliberately blown up by Sheikh, and painted as a malicious act to harm the ASI community.

Sheikh has now used the wallet distribution to launch an all-out assault on Ocean Expeditions and start a manhunt to identify the trustees of the Ocean Expeditions wallet.

Sheikh has wantonly spread lies, libel and misinformation to muddy the waters, construct a false narrative accusing Ocean and its founders of misappropriation, and to incite community sentiment against us.

Sheikh’s accusations and his twisting of the facts to mislead the community are so absurd that they would be laughable, if they were not so dangerous and harmful to the whole community.

Claim 1: The movement of $FET to 30 Different Wallets was allegedly “not right” was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Increasing the accessibility of managed security services

Make world-class protection accessible. Fastly’s new Managed Security Professional delivers 24/7 expert defense for your most critical apps and APIs.

Monday, 27. October 2025

KILT

KILT Liquidity Incentive Program


We are launching a Liquidity Incentive Program (LIP) to reward Liquidity Providers (LPs) in the KILT:ETH Uniswap pool on Base.

The portal can be accessed here: liq.kilt.io

For the best experience, desktop/browser use is recommended.

Key Features

The LIP offers rewards in KILT for contributing to the pool.
Rewards are calculated according to the size of your LP position and the time for which you have been part of the program.
Your liquidity is not locked in any way; you can add or remove liquidity at any time.
The portal does not take custody of your KILT or ETH; positions remain on Uniswap under your direct control.
Rewards can be claimed after 24hrs, and then at any time of your choosing.

You will need:

KILT (0x5D0DD05bB095fdD6Af4865A1AdF97c39C85ad2d8) on Base
ETH or wETH on Base
An EVM wallet (e.g. MetaMask)

Joining the LIP

Overview

There are two steps to joining the LIP:

Add KILT and ETH/wETH to the Uniswap pool in a full-range position. The correct pool is v3 with 0.3% fees. Note that whilst part of the LIP you will continue to earn the usual Uniswap pool fees as well.
Register this position on the Liquidity Portal. Your rewards will start automatically.

1) Adding Liquidity

Positions may be created either on Uniswap in the usual way, or directly via the portal. If you choose to create positions on Uniswap then return to the portal afterwards to register them.
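For readers creating positions programmatically rather than through the UI, “full-range” corresponds to specific tick bounds in Uniswap v3. The sketch below is illustrative and not part of the official portal; the tick limits and the tick spacing of 60 for the 0.3% fee tier come from the Uniswap v3 core contracts.

```python
# Sketch: computing full-range tick bounds for a Uniswap v3 position.
# Uniswap v3 constrains ticks to [-887272, 887272] and requires tickLower
# and tickUpper to be multiples of the pool's tick spacing.
MIN_TICK, MAX_TICK = -887272, 887272

def full_range_ticks(tick_spacing: int) -> tuple[int, int]:
    """Widest usable (tickLower, tickUpper) for a given tick spacing."""
    lower = -((-MIN_TICK) // tick_spacing) * tick_spacing
    upper = (MAX_TICK // tick_spacing) * tick_spacing
    return lower, upper

# The 0.3% fee tier uses tick spacing 60:
print(full_range_ticks(60))  # (-887220, 887220)
```

A position minted with these bounds stays in range at every price, which is what the program requires.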

To create a position via the portal:

Go to liq.kilt.io and connect your wallet.
Under the Overview tab, you may use the Quick Add Liquidity function.
For more features, go to the Add Liquidity tab where you can choose how much KILT and ETH to contribute.

2) Registering Positions

Once you have created a position, either on Uniswap or via the portal, return to the Overview tab.

Your KILT:ETH positions will be displayed under Eligible Positions.
Select your positions and Register them to enroll in the LIP.

Monitoring your Positions and Rewards

Once registered, you can find your positions in the Positions tab. The Analytics tab provides more information, for example your time bonuses and details about each position’s contribution towards your rewards.

Claiming Rewards

Your rewards start accumulating from the moment you register, but the portal may not reflect this immediately. Go to the Rewards tab to view and claim your rewards. Rewards are locked for the first 24hrs, after which you may claim at any time.

Removing Liquidity

Your LP remains 100% under your control; there are no locks or other restrictions and you may remove liquidity at any time. This can be done in the usual way directly on Uniswap. Removing LP will not in any way affect rewards accumulated up to that time, but if you later re-join the program then any time bonuses will have been reset.

How are my Rewards Calculated?

Rewards are based on:

The value of your KILT/ETH position(s).
The total combined value of the pool as a whole.
The number of days your position(s) have been registered.

Rewards are calculated from the moment you register a position, but the portal may not reflect them right away.
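The official reward formula is not spelled out here, but a simple illustrative model consistent with the three factors above might look like the following. The daily budget, the per-day bonus, and its cap are invented for the example; only the pro-rata-by-value and time-bonus structure reflects the text.

```python
# Illustrative reward model (NOT the official KILT formula): each day, a
# hypothetical KILT budget is split pro rata by position value, then scaled
# by a time bonus that grows with days registered.
def daily_reward(position_value: float, pool_value: float,
                 days_registered: int, daily_budget: float = 1000.0) -> float:
    share = position_value / pool_value                  # pro-rata share of pool
    time_bonus = 1.0 + min(days_registered, 30) * 0.01   # hypothetical +1%/day, capped
    return daily_budget * share * time_bonus

# A position worth 5% of the pool, registered for 10 days:
print(round(daily_reward(5_000, 100_000, 10), 2))  # 55.0
```

Note how removing and re-adding liquidity resets `days_registered` in this model, mirroring the time-bonus reset described above.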

Need Help?

Support is available in our Telegram group: https://t.me/KILTProtocolChat

-The KILT Foundation

KILT Liquidity Incentive Program was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

Phishing emails target LastPass users

BleepingComputer reports that the threat group CryptoChameleon is targeting LastPass users with phishing emails claiming that access to their password vaults has been requested via an uploaded death certificate.



Sophos launches identity threat detection tool

Sophos has introduced Identity Threat Detection and Response for its XDR and MDR platforms, expanding its defense against the growing wave of identity-based attacks, Techzine reports.



Yubico unveils post-quantum security prototypes

Yubico has unveiled new Post-Quantum Cryptography prototypes and expanded digital identity features, showcasing advancements that go beyond password replacement, reports Security Brief Australia.



Ploy raises £2.5M to tackle identity breaches

British cybersecurity startup Ploy has raised £2.5 million in seed funding to combat the growing threat of identity-related breaches caused by fragmented access systems and unmanaged applications, according to Biometric Update.



JumpCloud acquires Breez for identity threat defense

CRN reports that JumpCloud has acquired Breez, a startup specializing in identity threat detection and response, to strengthen its unified identity security platform.



Delinea integrates identity tools with Microsoft

Delinea has joined the Microsoft Security Store Partner Ecosystem, deepening its 15-year partnership with Microsoft to enhance identity and access security for enterprises, Security Brief Australia reports.



Elliptic

What government agencies need to fight fraud

 

 


auth0

MS Agent Framework and Python: Use the Auth0 Token Vault to Call Third-Party APIs

Build a secure Python AI Agent with Microsoft Agent Framework and FastAPI and learn to use Auth0 Token Vault to securely connect to the Gmail API.

Recognito Vision

Why Businesses Are Investing in Deepfake Detection Tools to Stop AI-Generated Fraud


Remember when “seeing is believing” used to be the rule? Not anymore. The world is now facing an identity crisis, digital identity that is. As artificial intelligence advances, so do the fraudsters who use it. Deepfakes have gone from internet curiosities to boardroom threats, putting reputations, finances, and trust at risk.

Businesses worldwide are waking up to the danger of manipulated media and turning toward deepfake detection tools as a line of defense. These systems are becoming the business equivalent of a truth serum, helping companies verify authenticity before deception costs them dearly.

 

What Makes Deepfakes So Dangerous

A deepfake is an AI-generated video, image, or audio clip that convincingly mimics a real person. Using neural networks, these fakes can replicate facial movements, voice tones, and gestures so accurately that even experts struggle to tell them apart.

The technology itself isn’t inherently bad. In entertainment, it helps de-age actors or create realistic video games. The problem arises when it’s used for fraud, misinformation, or identity theft. A 2024 report by cybersecurity analysts revealed that over 40% of businesses had encountered at least one deepfake-related fraud attempt in the last year.

Common use cases that keep executives awake at night include:

Fake video calls where “executives” instruct employees to transfer money
Synthetic job interviews where fraudsters impersonate real candidates
False political or corporate statements circulated to damage reputations

 

How Deepfake Detection Technology Works

The idea behind deepfake detection technology is simple: spot what looks real but isn’t. The execution, however, is complex. Detection systems use advanced machine learning and biometrics to analyze videos, images, and audio clips at a microscopic level.

Here’s a breakdown of common detection methods:

Technique | What It Detects | Purpose
Pixel Analysis | Lighting, shadows, unnatural edges | Identifies visual manipulation
Audio-Visual Sync | Lip and speech mismatches | Flags voice-over imposters
Facial Geometry Mapping | Eye movement, micro-expressions | Validates natural human patterns
Metadata Forensics | Hidden file data | Detects tampering or file regeneration

These methods form the core of most deepfake detection software. They look for details invisible to the human eye, like the way light reflects in a person’s eyes or how facial muscles move during speech. Even the slightest irregularity can trigger a red flag.
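As a toy illustration of the audio-visual sync idea, one can score how well detected mouth-movement events line up with speech onsets. Real detectors operate on raw audio and video with deep models; the event timestamps and tolerance below are invented for the sketch.

```python
# Toy audio-visual sync check (illustrative only): compare timestamps of
# detected mouth-opening events against speech onsets and flag large drift.
def sync_score(mouth_events, speech_onsets, tolerance=0.15):
    """Fraction of speech onsets with a mouth event within `tolerance` seconds."""
    if not speech_onsets:
        return 1.0
    matched = sum(
        1 for s in speech_onsets
        if any(abs(s - m) <= tolerance for m in mouth_events)
    )
    return matched / len(speech_onsets)

speech = [0.50, 1.20, 2.00, 2.80]        # when speech sounds begin (seconds)
lips_real = [0.52, 1.18, 2.05, 2.79]     # closely aligned mouth movements
lips_fake = [0.90, 1.60, 2.40, 3.20]     # consistently lagging voice-over

print(sync_score(lips_real, speech))  # 1.0
print(sync_score(lips_fake, speech))  # 0.0
```

A low score would be one signal among many; production systems fuse it with the other techniques in the table above.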

 

Deepfake Detection in Corporate Security

For organizations, adopting a deepfake detector isn’t just a security upgrade, it’s a necessity. Financial institutions, identity verification providers, and digital platforms are integrating these solutions to prevent fraud in real time.

A growing number of companies have fallen prey to AI-generated fraud, with criminals using fabricated voices or videos to trick employees into approving transactions. One European company reportedly lost 25 million dollars after a convincing fake video call with their “CFO.” That’s not a Hollywood plot; it’s a real-world case.

Businesses now use deepfake facial recognition and deepfake image detection tools to verify faces during high-risk transactions, onboarding, and identity verification. By combining biometric data with behavioral analytics, these tools make it nearly impossible for fakes to pass undetected.

 

Real-World Examples of Deepfake Fraud

 

Finance: A multinational bank used a deepfake detection tool to validate executive communications. Within six months, it blocked three fraudulent video call attempts that mimicked senior leaders.
Recruitment: HR departments now use deepfake detection software to confirm job candidates are who they claim to be. AI-generated interviews have become a growing issue in remote hiring.
Social Media: Platforms like Facebook and TikTok rely on deepfake face recognition systems to automatically flag and remove fake celebrity or political videos before they go viral.

Each case reinforces a key truth: deepfakes aren’t just a cybersecurity issue, they’re a trust issue.

 

 

Challenges in Detecting Deepfakes

Even with cutting-edge tools, detecting deepfakes remains a technological tug-of-war. Every time detection systems advance, generative AI models evolve to bypass them, creating an ongoing race between innovation and deception. Businesses face several persistent challenges in this fight.

One major issue is evolving algorithms, as AI models constantly learn new tricks that make fake content appear more authentic. Another key challenge is data bias, where systems trained on limited datasets may struggle to perform accurately across different ethnicities or under varied lighting conditions.

Additionally, high processing costs remain a concern, as real-time deepfake detection requires powerful hardware and highly optimized algorithms. On top of that, privacy concerns also play a role, since collecting facial data for analysis must align with global data protection laws such as the GDPR.

To address these challenges, open-source initiatives like Recognito Vision GitHub are fostering transparency and collaboration in AI-based identity verification research, helping bridge the gap between innovation and ethical implementation.

 

 

Integrating Deepfake Detection Into Identity Verification

Deepfakes pose the greatest risk to identity verification systems. Fraudsters use synthetic faces and voice clips to bypass onboarding checks and exploit weak verification processes.

To counter this, many companies integrate deepfake detection models with liveness detection: systems that determine whether a face belongs to a live human being or a static image. By tracking subtle movements like blinking, breathing, or pupil dilation, these systems make it much harder for fake identities to pass.
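The blink heuristic can be sketched in a few lines. Real liveness systems use far richer signals (3D depth, texture, challenge-response); the eye-openness threshold and the plausible blink-rate band here are invented for illustration.

```python
# Toy liveness heuristic (illustrative): count blinks from a per-frame
# eye-openness signal and check the rate falls in a human-plausible band.
def blink_count(eye_openness, closed_threshold=0.2):
    """Count open-to-closed transitions in a 0..1 eye-openness series."""
    blinks, was_closed = 0, False
    for value in eye_openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_live(eye_openness, duration_s, min_bpm=5, max_bpm=40):
    """Humans typically blink a few dozen times per minute; a photo blinks 0."""
    bpm = blink_count(eye_openness) * 60 / duration_s
    return min_bpm <= bpm <= max_bpm

signal = [0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.1, 0.05, 0.9, 0.9]  # two blinks
print(blink_count(signal))                # 2
print(looks_live(signal, duration_s=10))  # True
```

A static photo replayed to the camera would score zero blinks and fail the check, which is exactly the failure mode this heuristic targets.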

If you’re interested in testing how liveness verification works, explore Recognito’s face liveness detection SDK and face recognition SDK. Both provide tools to identify fraud attempts during digital onboarding or biometric verification.

 

The Business Case for Deepfake Detection Tools

So why are companies investing heavily in this technology? Because it directly protects their money, reputation, and compliance status.

 

1. Fraud Prevention

Deepfakes enable social engineering attacks that traditional security systems can’t catch. Detection tools provide a safeguard against voice and video scams that target executives or employees.

2. Compliance with Data Regulations

Laws like GDPR and other digital identity regulations require companies to verify authenticity. Using deepfake detection technology supports compliance by ensuring every identity is legitimate.

3. Brand Integrity

One fake video can cause irreversible PR damage. Detection systems help safeguard brand image by filtering manipulated media before it spreads.

4. Consumer Confidence

Customers feel safer when they know your brand can identify real users from digital imposters. Trust is the new currency of business.

 

 

Popular Deepfake Detection Solutions in 2025

Tool Name | Main Feature | Ideal Use Case
Reality Defender | Multi-layer AI detection | Financial institutions
Deepware Scanner | Video and image verification | Cybersecurity firms
Sensity AI | Online content monitoring | Social platforms
Microsoft Video Authenticator | Frame-by-frame confidence scoring | Government and enterprise use

For businesses that want to experiment with AI-based face authentication, the Face biometric playground provides an interactive environment to test and understand how facial recognition and deepfake facial recognition systems perform under real-world conditions.

 

What’s Next for Deepfake Detection

The war between creation and detection is far from over. As generative AI improves, the line between real and fake will blur further. However, one thing remains certain: businesses that invest early in deepfake detection tools will be better prepared.

Future systems will likely combine blockchain validation, biometric encryption, and AI-powered forensics to ensure content authenticity. Collaboration between regulators, researchers, and businesses will be crucial to staying ahead of fraudsters.

 

Staying Real in a World of Fakes

The rise of deepfakes is rewriting the rules of digital trust. Businesses can no longer rely on human judgment alone. They need technology that looks beneath the surface, into the data itself.

Recognito is one of the pioneers helping organizations build that trust through reliable and ethical deepfake detection solutions, ensuring businesses stay one step ahead in an AI-powered world where reality itself can be rewritten.

 

Frequently Asked Questions

 

1. How can deepfake detection protect businesses from fraud?

Deepfake detection identifies fake videos or audio before they cause financial or reputational damage, protecting companies from scams and impersonation attempts.

 

2. What is the most accurate deepfake detection technology?

The most accurate systems combine biometric analysis, facial geometry mapping, and liveness detection to verify real human behavior.

 

3. Can deepfake detection software identify audio fakes too?

Yes, modern tools analyze pitch, tone, and rhythm to detect audio deepfakes along with visual ones.

 

4. Is deepfake detection compliant with data protection laws like GDPR?

Yes, when implemented responsibly. Businesses must process biometric data securely and follow data protection regulations.

 

5. How can companies start using deepfake detection tools?

Organizations can integrate off-the-shelf detection and liveness solutions into their existing identity verification systems to enhance security and prevent fraud.


Ocean Protocol

Ocean Community Tokens are the Property of Ocean Expeditions

oceanDAO (now Ocean Expeditions) is not a party to the ASI Alliance Token Merger Agreement and Ocean community tokens are not the property of the ASI Alliance By: Ocean Protocol Foundation

On Oct 9, 2025 in an X Space in response to the withdrawal of the Ocean Protocol Foundation from the ASI Alliance, Sheikh said:

“You don’t try and steal from the community and get away with it that quickly, because we’re not going to just let it go, right? In the sense that, if you didn’t want to be part of the community, why did you then go into the token which belonged to the community, or which belonged to the alliance?”

This statement is false, misleading, and libelous, and this blogpost will demonstrate why.

The only three parties to the ASI Alliance are Fetch.ai Foundation (Singapore), Ocean Protocol Foundation (Singapore) and SingularityNET Foundation (Switzerland).

Neither the oceanDAO, nor Ocean Expeditions, are a party to the ASI Alliance Token Merger Agreement.

This fact, that oceanDAO (now Ocean Expeditions) is a wholly independent 3rd party from Ocean, was disclosed (Section §6) to Fetch and SingularityNET in May 2024 as part of the merger discussions.

Sheikh appears to deliberately conflate the Ocean Protocol Foundation with oceanDAO, as a tactic to mislead the community. To be clear, oceanDAO is a separate organisation that was formed in 2021 and then incorporated as Ocean Expeditions in June 2025. The reasons for this incorporation have been set out in an earlier blog post here: (https://blog.oceanprotocol.com/the-asi-alliance-from-oceans-perspective-f7848b2ad61f)

The Ocean community treasury remains in the custodianship of Ocean Expeditions guardians via a duly established, wholly legal trust in the Cayman Islands.

Every $FET token holder has sovereign property rights over its own tokens and is not answerable to the ASI Alliance as to what it does with its tokens.

Ocean Expeditions has no legal obligations to the ASI Alliance. Rather, the ASI Alliance has a clear obligation towards Ocean Expeditions as a token holder.

As a reminder relating to Fetch.ai obligations under the Token Merger Agreement, Fetch.ai is under a legally binding obligation to inject the remaining 110.9 million $FET into the $OCEAN:$FET token bridge and migration contract, and keep them available for any $OCEAN token holder who wishes to exercise their right to convert to $FET. To date, this obligation remains unmet. Fetch.ai must immediately execute this legally mandated action.

Any published information regarding this matter, unless confirmed officially by Ocean Protocol Foundation, should be assumed false.

We also request that Fetch.ai, Sheikh and all other ASI Alliance spokesmen refrain from confusing the public with false, misleading and libelous allegations that any tokens have been in any way “stolen”.

The $FET tokens Sheikh refers to are safely with Ocean Expeditions, for the sole benefit of the Ocean community.

Q&A

Q: There has recently been talk of Ocean “returning” tokens to ASI Alliance, through negotiated agreement. What’s that about?

A: This is complete nonsense. There are no tokens to return because no tokens were “stolen” or “taken”. Accordingly, it would make no sense to “return” any such tokens.

Ocean Community Tokens are the Property of Ocean Expeditions was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Herond Browser

Best Cute Halloween Wallpaper: Free Downloads for Mobile & PC

This ultimate guide solves that problem by providing the best, high-quality, free cute Halloween wallpaper for 2025. The post Best Cute Halloween Wallpaper: Free Downloads for Mobile & PC appeared first on Herond Blog.

Finding the perfect cute Halloween wallpaper that is high-quality, free, and correctly sized for both your Mobile & PC can be a frustrating hunt through low-resolution images and confusing paywalls. Stop wasting time resizing! This ultimate guide solves that problem by providing the best, high-quality, free cute Halloween wallpaper for 2025, perfectly categorized and curated for instant use on any device.

Categorized Wallpaper Collections (Content Depth & Keyword Clusters)

A. Cute Halloween Wallpaper on Mobile (Vertical Focus)

Kawaii Pumpkins: Minimalist, Chibi-style Jack-O’-Lantern designs.
Spooky-Cute Critters: Adorable ghosts, bats, and friendly black cats.
Pastel & Cozy: Soft color palettes perfect for cozy autumn and Halloween phone scenes.

B. Desktop & PC Backgrounds (Horizontal Focus)

Animated & Cartoon Style: 3D rendered or fully illustrated cute characters.
Cozy Scenes: Autumn leaves, candles, and cute treats filling the wide screen.
High Resolution (4K): Emphasis on crystal-clear resolution for large monitors.

C. Niche & Trending Themes

Gamer/Pop Culture: Cute versions of popular game or movie characters (e.g., Chibi Zombies).
Foodie Halloween: Wallpapers featuring cute candies, cookies, and seasonal drinks.

Herond-Optimized Download & Setup Guide (Utility & Brand Integration)

A. The Best Way to Download (Using Herond Browser)

Step 1: Find and Click the File

Once you’ve found the perfect aesthetic in our curated lists, simply click the dedicated direct download link provided below the image. Unlike sites that force you through multiple redirects or pop-ups, our links are straightforward, ensuring you find the file you need without frustration.

Step 2: Secure Downloading with Herond

This is where Herond Browser takes over. Herond’s built-in download manager begins the transfer instantly, ensuring file integrity and prioritizing speed. You don’t have to worry about malicious file names or corrupted data; Herond handles large wallpaper files efficiently and securely in the background, making the entire process reliable.

Step 3: Accessing Your Files Immediately

After the download finishes, locating your new wallpaper is effortless. Thanks to Herond’s file system integration, you can access the downloaded image directly from the browser’s download tray or the dedicated downloads folder on your device. This seamless connection means you can go from download to desktop background in seconds.

B. Quick Setup Instructions Guide 1: Setting the Wallpaper on iOS/Android Mobile Devices

Locate Image: Open your device’s Photos or Gallery app and find the newly downloaded wallpaper.

Access Menu: Tap the Share or Options icon (usually three dots or an arrow).

Set as Wallpaper: Select “Use as Wallpaper” or “Set as Background.”

Confirm: Adjust positioning if needed, then confirm for the Lock Screen or Home Screen.

Guide 2: Setting the Background on Windows/Mac Desktop.

Locate File: Find the downloaded image in your Downloads folder (or access it instantly via Herond’s downloads tray).

Right-Click: Right-click the image file.

Set Background: On Windows, select “Set as desktop background.” On Mac, select “Set Desktop Picture.”

Enjoy: The background will update instantly!

Conclusion

You’ve successfully found, downloaded, and set up your perfect cute Halloween wallpaper for both mobile and desktop, showcasing your festive spirit without dealing with low quality or tricky downloads. For a truly seamless experience, from secure browsing to fast downloading, Herond Browser is the ultimate choice. Stop struggling with slow downloads and unsafe file transfers; upgrade your digital life and unlock safe access to the best free Halloween wallpapers and beyond.

About Herond

Herond Browser is a Web browser that prioritizes users’ privacy by blocking ads and cookie trackers, while offering fast browsing speed and low bandwidth consumption. Herond Browser features two built-in key products:

Herond Shield: an adblock and privacy protection tool; Herond Wallet: a multi-chain, non-custodial social wallet.

Herond aims to become the ultimate Web 2.5 solution, laying the groundwork to further accelerate the growth of Web 3.0 and heading towards the future of mass adoption.

Join our Community!

The post Best Cute Halloween Wallpaper: Free Downloads for Mobile & PC appeared first on Herond Blog.



auth0

Securing AI Agents: Mitigate Excessive Agency with Zero Trust Security

Learn how to secure your AI agents to prevent Excessive Agency, a top OWASP LLM vulnerability, by implementing a Zero Trust model.

Friday, 24. October 2025

Spruce Systems

A Practical Checklist to Future-Proof Your State’s Digital Infrastructure

From vendor lock-in to privacy compliance, the path to digital modernization is full of trade-offs. This checklist gives state decision-makers a practical framework for evaluating emerging identity technologies and aligning with open-standards best practices.

State IT modernization is a perpetual challenge. For new technologies like verifiable digital credentials (secure, digital versions of physical IDs), this presents a classic "chicken and egg" problem: widespread adoption by residents and businesses is necessary to justify the investment, but that adoption won't happen without a robust ecosystem of places to use them. How can states ensure the significant investments they make today will build a foundation for a resilient and trusted digital future?

State IT leaders face increasing pressure to modernize aging infrastructure, combat rising security threats, and overcome stubborn data silos. These challenges are magnified by tight budgets and the pervasive risk of vendor lock-in. With a complex landscape of competing standards, making the right strategic decision is more difficult than ever. This uncertainty stifles the growth needed for a thriving digital identity ecosystem. The drive for modernization is clear: according to industry research, over 65% of state and local governments are on a digital transformation journey.

Here, we'll offer a clear, actionable framework for state technology decision-makers: a practical checklist to evaluate technologies on their adherence to open standards. By embracing these principles, states can make informed choices that foster sustainable innovation and avoid costly pitfalls, aligning with a broader vision for open, secure, and interoperable digital systems that empower citizens and governments alike.

The Risks of Niche Technology

Choosing proprietary or niche technologies can seem like a shortcut, but it often leads to a dead end. These systems create hidden costs that drain resources and limit a state's ability to adapt. The financial drain extends beyond initial procurement to include escalating licensing fees, expensive custom integrations, and unpredictable upgrade paths that leave little room for innovation.

Operationally, these systems create digital islands. When a new platform doesn't speak the same language as existing infrastructure, it reinforces the data silos that effective government aims to eliminate. This lack of interoperability complicates everything from inter-agency collaboration to delivering seamless services to residents. For digital identity credentials, the consequences are even more direct. If a citizen's new digital ID isn't supported across jurisdictions or by key private sector partners, its utility plummets, undermining the entire rationale for the program.

Perhaps the greatest risk is vendor lock-in. Dependence on a single provider for maintenance, upgrades, and support strips a state of its negotiating power and agility. As a key driver for government IT leaders, avoiding vendor lock-in is a strategic priority. Niche systems also lack the broad, transparent community review that strengthens security. Unsupported or obscure software can harbor unaddressed vulnerabilities, a risk highlighted by data showing organizations running end-of-life systems are three times more likely to fail a compliance audit.

Embracing the Power of Open Standards for State IT

The most effective way to mitigate these risks is to build on a foundation of open standards. In the context of IT, an open standard is a publicly accessible specification developed and maintained through a collaborative and consensus-driven process. It ensures non-discriminatory usage rights, community-driven governance, and long-term viability. For verifiable digital credentials, this includes critical specifications like the ISO mDL standard for mobile driver's licenses (ISO 18013-5 and 18013-7), W3C Verifiable Credentials, and IETF SD-JWTs. The principles of open standards, however, extend far beyond digital credentials to all critical IT infrastructure decisions.
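Among the standards named above, the W3C Verifiable Credentials data model is easy to illustrate concretely. The sketch below is a minimal Python illustration: the credential type, DIDs, and field values are hypothetical, and real verification also involves cryptographic proofs and status checks, which are out of scope here.

```python
# Minimal structural check for a W3C Verifiable Credential (Data Model 1.1).
REQUIRED_FIELDS = ("@context", "type", "issuer", "issuanceDate", "credentialSubject")

def looks_like_vc(doc: dict) -> bool:
    if not all(field in doc for field in REQUIRED_FIELDS):
        return False
    # The base context URL and the base type are fixed by the specification.
    return (doc["@context"][0] == "https://www.w3.org/2018/credentials/v1"
            and "VerifiableCredential" in doc["type"])

# Hypothetical credential: the type name and DIDs are made up for illustration.
example_vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DriversLicenseCredential"],
    "issuer": "did:example:state-dmv",
    "issuanceDate": "2025-10-24T00:00:00Z",
    "credentialSubject": {"id": "did:example:holder", "licenseClass": "C"},
}

print(looks_like_vc(example_vc))  # True
```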

Adopting this approach delivers many core benefits for State government. First is enhanced interoperability, which allows disparate systems to communicate seamlessly. This breaks down data silos and improves service delivery, a principle demonstrated by the U.S. Department of State's Open Data Plan, which prioritizes open formats to ensure portability. Second, open standards foster robust security. The transparent development process allows for broad community review, which leads to faster identification of vulnerabilities and more secure, vetted protocols.

Third, they provide exceptional adaptability and future-proofing. By reducing vendor lock-in, open standards enable states to easily upgrade systems and integrate new technologies without costly overhauls. This was the goal of Massachusetts' pioneering 2003 initiative to ensure long-term control over its public records. Fourth is significant cost-effectiveness. Open standards foster competitive markets, reducing reliance on expensive proprietary licenses and enabling the reuse of components. For government agencies, cost reduction is a primary driver for adoption.

Finally, this approach accelerates innovation. With 96% of organizations maintaining or increasing their use of open-source software, it is clear that shared, stable foundations create a fertile ground for a broader ecosystem of tools and expertise.

The State IT Open Standards Checklist

This actionable checklist provides clear criteria for state IT leaders, procurement officers, and policymakers to evaluate any new digital identity technology or system. Use this framework to ensure technology investments are resilient, secure, and future-proof.

1. Ability to Support Privacy Controls: Does the technology inherently support all state privacy controls, or can a suitable privacy profile be readily created and enforced? Technologies that enable privacy-preserving techniques like selective disclosure and zero-knowledge proofs are critical for building public trust.

2. Alignment with Use Cases: Does the standard enable real-world transactions that are critical to residents and relying parties? This includes everything from proof-of-age for controlled purchases and access to government benefits to streamlined Know Your Customer (KYC) checks that support Bank Secrecy Act modernization.

3. Ecosystem Size and Maturity: Does the standard have a healthy base of adopters? Look for active participation from multiple vendors and demonstrated investment from both public and private sectors. A mature ecosystem includes support from major platforms like Apple Wallet and Google Wallet, indicating broad market acceptance.

4. Number of Vendors: Are there multiple independent vendors supporting the standard? A competitive marketplace fosters innovation, drives down costs, and is a powerful defense against vendor lock-in.

5. Level of Investment: Is there clear evidence of sustained investment in tools, reference implementations, and commercial deployments? This indicates long-term viability and a commitment from the community to support and evolve the standard. A strong identity governance framework depends on this long-term stability.

6. Standards Body Support: Is the standard governed by a credible and recognized standards development organization? Bodies like ISO, W3C, IETF, and the OpenID Foundation ensure a neutral, globally-vetted process that builds consensus and promotes stability.

7. Interoperability Implementations: Has the standard demonstrated successful cross-vendor and cross-jurisdiction implementations? Look for evidence of conformance testing or a digital ID certification program that validates wallet interoperability and ensures a consistent user experience.

8. Account/Credential Compromise and Recovery: How does the technology handle worst-case scenarios like stolen private keys or lost devices? Prioritize standards that support a robust VDC lifecycle, including credential revocation. A clear process for credential revocation, such as using credential status lists, is essential for maintaining trust.

9. Scalability: Has the technology been proven in scaled, production use cases? Assess whether scaling requires custom infrastructure, which increases operational risk, or if it relies on standard, well-understood techniques. Technologies that align with established standards like NIST SP 800-63A digital identity at IAL2 or IAL3, and leverage proven cloud architectures, offer a more reliable path to large-scale deployment.

Building for tomorrow, today

The strategic shift towards globally supported open standards is not just a technological choice; it is a critical imperative for states committed to modernizing responsibly and sustainably. It is the difference between building disposable applications and investing in durable digital infrastructure.

By adopting this forward-thinking mindset and leveraging the provided checklist, state IT leaders can confidently navigate the complexities of digital identity procurement. This approach empowers states to build resilient, secure, and adaptable IT infrastructure that truly future-proofs public services.
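To make the checklist operational, a procurement team could turn it into a simple weighted rubric. The sketch below is purely illustrative: the criterion keys, weights, and 0-5 scoring scale are our assumptions, not SpruceID guidance.

```python
# Hypothetical weighted rubric for the open-standards checklist. The
# criterion keys, weights, and 0-5 scoring scale are illustrative only.
CRITERIA = {
    "privacy_controls": 3,
    "use_case_alignment": 3,
    "ecosystem_maturity": 2,
    "vendor_count": 2,
    "investment_level": 1,
    "standards_body": 3,
    "interoperability": 3,
    "recovery_and_revocation": 2,
    "scalability": 2,
}

def score(candidate: dict) -> float:
    """Weighted average of 0-5 scores across all checklist criteria."""
    total_weight = sum(CRITERIA.values())
    weighted = sum(w * candidate.get(name, 0) for name, w in CRITERIA.items())
    return round(weighted / total_weight, 2)

# A candidate scoring 4 on every criterion averages 4.0 regardless of weights.
print(score({name: 4 for name in CRITERIA}))  # 4.0
```

A rubric like this is no substitute for the qualitative questions in the checklist, but it makes trade-offs between candidate technologies explicit and comparable.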

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. We build privacy-preserving digital identity infrastructure that empowers people and organizations to control their data. Governments, financial institutions, and enterprises use SpruceID’s technology to issue, verify, and manage digital credentials based on open standards.


Innopay

INNOPAY Sponsors GLEIF’s Global Forum on Digital Organisational Identity & vLEI Hackathon 2025

INNOPAY Sponsors GLEIF’s Global Forum on Digital Organisational Identity & vLEI Hackathon 2025, 02 Dec 2025. Trudy Zomer, 24 October 2025 - 16:11, Frankfurt, Germany

On 2 December 2025, the Global Legal Entity Identifier Foundation (GLEIF) will host its Global Forum on Digital Organizational Identity in Frankfurt, featuring the Grand Finale of the vLEI Hackathon. INNOPAY is proud to sponsor this initiative.

The Global Legal Entity Identifier Foundation (GLEIF) invites developers, entrepreneurs, and innovators worldwide to harness the power of digital organizational identity and redefine how trust is established in the digital economy.

As industries accelerate their digital transformation, the demand for digital organisational identity is greater than ever. Spanning machine-to-machine interactions and next-generation business wallets, the verifiable Legal Entity Identifier (vLEI) unlocks new opportunities for transparency, automation, and compliance.

On the event day, the finalists of the second theme of the vLEI Hackathon, which focuses on Industry 4.0, will present their technical solutions. At the end of the event, the winner and runner-up will be officially announced.

More information about the vLEI Hackathon can be found on the GLEIF website: https://www.gleif.org/en/newsroom/events/gleif-vlei-hackathon-2025


uquodo

Enhancing Risk Insights by Integrating KYC Data with Transaction Monitoring

The post Enhancing Risk Insights by Integrating KYC Data with Transaction Monitoring appeared first on uqudo.

SC Media - Identity and Access

Defend by design: Eliminating identity-based attacks at the root

This article introduces a defend-by-design approach that binds credentials to hardware, validates device posture continuously, and closes off all identity attack vectors. When shared credentials don’t exist, there’s nothing to steal, hijack, or replay.



Securing third‑party access to disrupt the supply chain attack path


This article summarizes a recent SC webcast with host Adrian Sanabria, David Gwizdala, Senior Sales Engineer at Ping Identity, and Mark Wilson, B2B IAM Go‑To‑Market lead at Ping Identity. They discussed how mismanaged identities, insufficient access policies, and weak verification controls expose organizations to downstream threats -- and how to apply end-to-end Identity Lifecycle Protection as a solution.


IDnow

The true face of fraud #1: The masterminds behind the $1 trillion crime industry.

Fraud today is the work of gangs that operate across borders. Worth more than $1 trillion, this industry doesn’t just steal money – it destroys lives. In the first part of our fraud series, we explore who is behind today’s fastest-growing schemes, where the hubs of so-called scam compounds are located, and what financial organizations must understand about their opponent.

For decades, pop culture painted fraudsters as solitary figures hunched over laptops in darkened rooms. That stereotype is not only wrong, it is dangerously outdated. Today’s most damaging scams are orchestrated by global crime syndicates spanning every continent. These networks build operations tens of thousands strong, traffic people, train their “staff” and essentially operate like Fortune 100 companies, but their product is deception, and the victims pay the cost.

Their scale is staggering: global fraud reached over $1 trillion in 2024. The numbers, however, tell only part of the story. The fastest-growing schemes today are app-based and social engineering scams, which are also the most common types of fraud affecting banks and financial institutions, causing record losses from reimbursements and compliance costs. 

These attacks not only target systems, but exploit people, undermining trust in financial institutions, regulators, and courts, while also supporting human trafficking and forced labor. Behind every fake investment ad or romance scam lies a darker reality: compounds where people are held captive and forced to defraud strangers across the world. 

Global scam centres: Where to find them 

When most people think of criminal gangs, they imagine shadowy figures operating from jungles or remote hideouts. But the criminals behind the world’s largest fraud rings work very differently. These aren’t small-time operations running in the dark; they’re industrial-scale enterprises operating in plain sight.

Their structure closely mirrors that of legitimate businesses, with executives overseeing operations, middle managers coaching employees and tracking KPIs, and frontline workers executing scams via phone, social media or messaging apps.

Their facilities are not hidden in basements. They are large, purpose-built sites, often converted from former hotels, casinos or business parks. Located primarily in Southeast Asia – in Cambodia, Myanmar, Vietnam, and the Philippines – but increasingly also in Africa and Eastern Europe, these complexes can be vast. Investigators have uncovered huge compounds where hundreds of people work in rotating shifts, day and night. Some sites are so large they have been described as “villages,” covering dozens of acres, with syndicates often running multiple locations across regions. At scale, this means a single network can control thousands of people. 

However, not all of the people who work on site for these syndicates are there voluntarily. In fact, most of the front-line workers and call centre agents are victims of human trafficking. Lured by the promise of big money and an escape from poverty, they travel across borders, only to find themselves held captive and coerced into deceiving others.

Life inside scam compounds: A prison disguised as an office 

The on-site facilities are designed to sustain a captive workforce. They include dormitories, shops, entertainment rooms, kitchens and even small clinics. On the surface, these amenities might resemble employee perks, and for vulnerable recruits from poorer backgrounds they can even sound appealing, but the reality is dark: rows of desks, bunkrooms stacked with beds, CCTV cameras monitoring every corner, kitchens feeding hundreds. With razor-wire fences and armed guards at the gates, these compounds look more like prisons than offices. And in many ways, that is exactly what they are.

The “masterminds” of the crime ecosystem 

Behind the compounds lies a web of transnational operators and a shadow service economy. The organisers of these operations come in many forms – from criminal entrepreneurs diversifying from drugs to online scams, to networks linked with regional crime groups such as Southeast Asian gangs, Chinese or Eastern European syndicates, and illicit operators tied to South American cartels. In some places, politically connected actors or local elites profit from – and even protect – these operations, ensuring they continue with little interference. 

Another layer consists of companies that appear legitimate on paper but in reality supply the infrastructure that keeps the fraud industry running: phone numbers, fake identity documents, shell firms and payment processors willing to handle high-risk transactions. Investigations have uncovered how underground service providers and proxy accounts help scammers move victims’ money through banks and into crypto, using fake invoices and front companies as cover.

It’s an industrial-scale business model: acquisition channels built on fake ads, call centres with scripts and a laundering pipeline powered by mules, shell companies and crypto gateways. The setup is remarkably resilient – shut down one centre or payment route, and the network simply reroutes through another provider or jurisdiction. 

How fraud hurts banks and other financial companies  

For banks and financial firms, the impact is severe. Direct fraud losses are significant and rising: nearly 60% of banks, fintechs and credit unions report losing over $500k in a 12-month period, and a large share report losses over $1m. These trends force firms to divert budget from growth into loss-prevention and remediation.

Payment fraud at scale also increases operational and compliance costs. For example, in 2022, payment fraud was reported at €4.3 billion in the European Economic Area, and consumer-reported losses in other jurisdictions show multi-billion-dollar annual impacts that increase every year – all of which ripples into higher Suspicious Activity Report (SAR) volumes, Anti-Money Laundering (AML) investigations and strained dispute and reimbursement processes for banks. These costs are both direct (reimbursed losses) and indirect (investigation time, compliance staffing, fines, customer churn and reputational damage).

Banks face a daily balancing act: tighten controls and risk frustrating customers or loosen them and risk becoming a target. Either way, regulators demand ever-stronger safeguards. And even though stronger authentication and checks can increase drop-offs during onboarding or transactions, failure to comply risks exposure to legal and regulatory trouble (recent cases tied to payment rails illustrate how banks can face large remediation obligations and lawsuits if controls are perceived as inadequate). 

The long-term consequences, however, go beyond operational complexity. Fraud undermines customer trust, which is the foundation of finance. It increases costs, slows innovation and forces financial institutions to redesign products with restrictions that customers feel but rarely understand. And this can lead to a long-term loss of market share. 

What financial institutions must understand about the opponent 

Banks are not fighting individual perpetrators. They are facing industrialized criminal organizations. To defeat them, defensive measures must also be organized accordingly. 

This means moving beyond isolated controls toward systemic resilience: robust fraud checks, stronger identity verification, continuous monitoring, transaction orchestration and faster coordination with law enforcement. But technology alone is not enough. Collaboration across institutions and industries is crucial to disrupt fraud networks that operate globally. 

How financial organizations can protect themselves against financial crime 

Financial firms should invest in multi-layered identity checks combining document, liveness and behavioral signals (like the ones offered by IDnow); integrate real-time AML orchestration to flag mule activity early (like the soon-to-be-launched IDnow Trust Platform); and participate in intelligence-sharing networks that connect patterns across borders.  
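As a concrete illustration of the layered approach described above, the sketch below combines per-signal scores into a coarse onboarding decision. It is a hypothetical example: the signal names, thresholds, and weakest-link rule are our assumptions, not IDnow’s actual scoring.

```python
def onboarding_risk(signals: dict) -> str:
    """Combine document, liveness and behavioral scores (0.0 worst, 1.0 best)
    into a coarse decision. Signal names and thresholds are hypothetical."""
    # A weakest-link rule: fraud rings routinely pass one layer (e.g. a
    # genuine stolen document) while failing another, so no single strong
    # signal should override a weak one.
    weakest = min(signals.get("document", 0.0),
                  signals.get("liveness", 0.0),
                  signals.get("behavior", 0.0))
    if weakest < 0.3:
        return "reject"
    if weakest < 0.7:
        return "manual_review"
    return "approve"

print(onboarding_risk({"document": 0.9, "liveness": 0.8, "behavior": 0.5}))
# -> manual_review
```

Production systems weigh far richer signals in real time, but the weakest-link intuition is why combining independent layers is harder to defeat than any single check.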

Fraud is no longer a fringe crime. It’s a billion-dollar corporate machine. To dismantle it, financial institutions must shift from investigating fraud after it happens to preventing it before it strikes, stopping both criminals and socially engineered victims before any loss occurs. 

By

Nikita Rybová
Customer & Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


Thales Group

Thales Alenia Space strengthens Spanish space industry leadership through its participation in SpainSat NG II satellite

Thales Alenia Space strengthens Spanish space industry leadership through its participation in SpainSat NG II satellite
Fri, 10/24/2025 - 08:42, Space

24 Oct 2025

The Spanish secure communications satellite SpainSat NG II, successfully launched from Cape Canaveral, Florida, will provide services to the Armed Forces and international organizations such as the European Commission and NATO, as well as to the governments of allied countries. Thales Alenia Space, together with Airbus Defence and Space, has led the development and construction of the SpainSat NG satellites. The company, involved in different areas of the project, has been responsible for the integration of the Communication Module for both satellites along with Airbus in a dedicated clean room built for this purpose at its Tres Cantos facilities in Madrid, making it the largest satellite system ever assembled in Spain to date.


Madrid, October 24, 2025 - The secure communications satellite SpainSat NG II has successfully been launched by a SpaceX Falcon 9 rocket from Cape Canaveral, Florida. The SPAINSAT NG system will ensure secure communications for the Spanish Armed Forces and allied countries for decades to come.

SPAINSAT NG, a program led, owned and operated by Hisdesat Servicios Estratégicos S.A., is considered to be the most ambitious space project in Spain’s history, both due to its performance and the outstanding involvement of the national industry. The SpainSat NG satellites are among the most advanced telecommunications satellites in the world. They operate from geostationary orbit in X, military Ka and UHF frequency bands, used for high-throughput secure communications, enabling them to provide dual, secure and resilient services to the Spanish Armed Forces, as well as to international organizations such as the European Commission, NATO, and allied countries.


© Airbus

Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%), together with Airbus Defence and Space, has led the execution and construction of the satellites. In Spain, the company has been responsible for the UHF and military Ka-band payloads and, together with Airbus, for the integration of the Communication Modules, which carry the communication payloads and form the core of the satellites’ advanced technological capabilities.


Ismael López, CEO of Thales Alenia Space in Spain, said: “This launch marks the culmination of a transformative project for the Spanish space industry. We thank Hisdesat and the Ministry of Defense for the trust placed in our company to lead, for the first time in Spain, the development of the communications payloads for the SPAINSAT NG geostationary satellites. For our teams in Madrid, successfully overcoming a challenge of this magnitude places the national space industry on a higher level.”


State-of-the-art satellite technology in Madrid

To carry out this mission, the company built a satellite assembly and integration clean room at its Tres Cantos site in Madrid, inaugurated in 2021 and specifically designed to integrate the communication modules of both satellites. These cutting-edge facilities make it possible to integrate and test large-scale, highly complex satellite systems, capabilities that until now were only within the reach of a few space powers worldwide.

For the first time in Spain, these facilities have enabled the integration of a module weighing more than 2 tons and measuring 6 meters in height, fully equipped with cutting-edge space communications technology and comprising hundreds of sophisticated electronic units.

Additionally, Thales Alenia Space has designed and manufactured in Spain, France, Italy, and Belgium over 200 electronic and radiofrequency units that are an integral part of the communications payloads and the satellite's telecommand and telemetry system. Among them are the UHF processor, the core of the UHF payload; the Digital Transparent Processor (DTP) that interconnects the payloads in the X and military Ka bands; and the Hilink unit, responsible for providing a high-speed service link that will facilitate a quick reconfiguration of the payloads.

About Thales Alenia Space

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 14 sites in Europe.


Thales named Frost & Sullivan’s 2025 company of the year in Automated Border Control

Thales named Frost & Sullivan’s 2025 company of the year in Automated Border Control
Fri, 10/24/2025 - 07:00, Public Security, Civil identity

24 Oct 2025

Thales, a global leader in cybersecurity and identity management, has been recognized by Frost & Sullivan as Company of the Year 2025 in the Automated Border Control (ABC) eGates industry. This award underscores Thales’s ability to consistently innovate, deliver scalable and sustainable solutions, and set new standards for border security and passenger experience worldwide. Frost & Sullivan emphasized Thales’s proven ability to deliver at scale, its close collaboration with ministries of interior and airport operators, and its forward-looking strategy that integrates regulatory compliance, digital identity, and sustainability.

As governments and airports face rising passenger volumes, new regulatory requirements such as the European Entry Exit System (EES), and evolving security threats, Thales’s solutions help strike the right balance between strong security and traveller convenience. Frost & Sullivan praised Thales’s end-to-end expertise, customer focus, and ability to design systems that are modular, eco-conscious, and powered by advanced biometrics and artificial intelligence.

At the heart of Thales’s innovation is the traveller experience. A border crossing that once meant long queues and stressful procedures can now be completed in less than 15 seconds. A passenger simply presents their passport, looks briefly at a camera, and passes through an automated gate thanks to biometric verification and real-time checks.

This simplicity is supported by advanced technology:

Cybersecurity by design, embedding data protection, privacy by default, and role-based access control to ensure secure, compliant and resilient identity verification.

Multi-modal biometrics (face, fingerprint, iris) with AI-driven accuracy and liveness detection.

Flexible, modular eGates adaptable to any border or airport environment.

Digital identity frameworks aligned with international standards.

Sustainable engineering, using lightweight, recycled materials and designs that extend product life.

From a deployment perspective, Thales strengthens its leadership in border security with a diversified global footprint, operating across numerous border points worldwide. The company’s expansive presence spans Europe, the Middle East, Latin America, Africa, and North America, with landmark projects including the deployment of hundreds of eGates and self-service kiosks in countries such as France, Spain, and Belgium.

These deployments highlight the positive impact on citizens: smoother journeys, less time waiting, and greater trust in border security. For governments and border agencies, the result is higher throughput, enhanced resilience, and full regulatory compliance. For airports, it means more efficient operations and stronger passenger satisfaction.

“We are honored to receive this recognition from Frost & Sullivan. At Thales, we believe that security and passenger experience must go hand in hand. From France to India, our border control solutions allow millions of travellers to cross borders every day with greater speed, trust, and confidence. This award reflects the dedication of our teams and our commitment to helping governments and airports around the world shape the future of secure, seamless travel,” commented Emmanuel Wang, Border & Travel Director at Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.


Aergo

HPP Update: Technology Ready, Market Expansion Underway


After months of preparation, the HPP mainnet is live, our core technologies are stable, and we are now entering the most exciting phase of our journey — growth and adoption.

1. Technical Milestones Achieved

All major technical roadmaps have been met and are now production-ready. This includes the mainnet launch and multiple project integrations across the HPP ecosystem. The network is built for scale, equipped for cross-chain connectivity, and ready for full activation.

2. Migration and Market Readiness

The migration infrastructure, which includes the official bridge and migration portal, is complete and fully tested. Legacy Aergo and AQT token holders will be able to transition seamlessly into HPP through a secure, verifiable process designed to ensure accuracy and transparency across chains.

With the full network framework in place, HPP is now entering the growth and liquidity phase. We are in coordination with several major exchanges to align token listings, update technical integrations, and synchronize branding across trading platforms. These efforts aim to create a strong, sustainable market structure that supports institutional participation, community accessibility, and long-term ecosystem stability.

3. Building a Real-World Breakthrough

We are developing one of the most significant blockchain real-world use cases to date. This initiative combines a large user base, mission-critical data, and enterprise-grade requirements. It will demonstrate how our L2 infrastructure can power high-value, data-driven applications that go beyond typical blockchain use cases.

At the same time, we are working with enterprise partners, including early Aergo collaborators, to adopt HPP’s advanced features through the Noosphere layer.

4. Keeping You Updated

To ensure full transparency, we are continuously updating our HPP Living Roadmap, a real-time tracker that shows ongoing technical progress, upcoming milestones, and partner developments as they happen.

The technology is ready, the ecosystem is forming, and the next phase is set to begin. HPP is moving from readiness to execution, and the wait is almost over.

HPP Update: Technology Ready, Market Expansion Underway was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 23. October 2025

SC Media - Identity and Access

RedTiger infostealer targeting gamers and Discord accounts

Malware’s rich feature set expected to have broader appeal to threat actors.



Spruce Systems

The Technology Powering Digital Identity

This article is the third installment of our series: The Future of Digital Identity in America.

Read the first installment in our series on The Future of Digital Identity in America here and the second installment here.

If policy sets the rules of the road, technology lays the pavement. Without strong technical foundations, decentralized identity would remain an inspiring vision but little more. What makes it real are the advances in cryptography, open standards, and system design that let people carry credentials in their own wallets, present them securely, and protect their privacy along the way. These technologies aren’t abstract: they are already running in production, powering mobile driver’s licenses, digital immigration pilots, and cross-border banking use cases.

Why Technology Matters for Identity

Identity is the trust layer of the digital world. Every interaction - logging into a platform, applying for a loan, proving eligibility for benefits - depends on it. Yet today, that trust layer is fractured. We scatter our identity across countless accounts and passwords. We rely on federated logins controlled by Big Tech platforms. Businesses pour money into fraud prevention while governments struggle to verify citizens securely.

The costs of this fragmentation are staggering. In 2024 alone, Americans reported record losses of $16.6 billion to internet crime (FBI IC3) and $12.5 billion to consumer fraud (FTC). At the institutional level, the average cost of a U.S. data breach hit $10.22 million in 2025 (IBM). And the risks are accelerating: synthetic identity fraud drained an estimated $35 billion in 2023 (Federal Reserve), while FinCEN has warned that criminals are now using deepfakes, synthetic documents, and AI-generated audio to bypass traditional checks at scale.

Decentralized identity offers a way forward, but only if the technology can make it reliable, usable, and interoperable. That’s where verifiable credentials, decentralized identifiers, cryptography, and open standards come in.

The Standards that Make it Work

Every successful infrastructure layer in technology—whether it was TCP/IP for the internet or HTTPS for secure web traffic—has been built on standards. Decentralized identity is no different. Standards ensure that issuers, holders, and verifiers can interact without building one-off integrations or relying on proprietary systems.

Here are the key ones shaping today’s decentralized identity landscape:

W3C Verifiable Credentials (VCs): This is the universal data model for digital credentials. A VC is essentially a cryptographically signed digital version of something like a driver’s license, diploma, or membership card. It defines how the credential is structured (with attributes, metadata, and signatures) so that anyone who receives it knows how to parse and verify it.

Decentralized Identifiers (DIDs): DIDs are globally unique identifiers that are cryptographically verifiable and not tied to any single registry. Unlike email addresses or usernames, which depend on central providers, a DID is self-sovereign. For example, a university might issue a credential to did:example:university12345. The DID resolves to metadata (such as public keys) that allows verifiers to check signatures and authenticity.

OID4VCI and OID4VP (OpenID for Verifiable Credential Issuance and Presentation): These protocols define how credentials move between systems. They extend OAuth2 and OpenID Connect, the same standards that handle billions of secure logins each day. With OID4VCI, you can request and receive a credential securely from an issuer. With OID4VP, you can present that credential to a verifier. This reuse of familiar login plumbing makes adoption easier for developers and enterprises.

SD-JWT (Selective Disclosure JWTs): A new extension of JSON Web Tokens that enables selective disclosure directly within a familiar JWT format. Instead of revealing all fields in a token, SD-JWTs let the holder decide which claims to disclose, while still allowing the verifier to check the issuer’s signature. This bridges modern privacy-preserving features with the widespread JWT ecosystem already in use across industries.

ISO/IEC 18013-5 and 18013-7: These international standards define how mobile driver’s licenses (mDLs) are presented both in person and online. For example, 18013-5 specifies the NFC and QR code mechanisms for proving your identity at a checkpoint without handing over your phone. 18013-7 expands these definitions to online use cases—critical for remote verification scenarios.

ISO/IEC 23220-4 (mdocs): A broader framework for mobile documents (mdocs), extending beyond driver’s licenses to other government-issued credentials like passports, resident permits, or voter IDs. This standard provides a consistent way to issue and verify digital documents across multiple contexts, supporting both offline and online verification.

NIST SP 800-63-4: The National Institute of Standards and Technology publishes the “Digital Identity Guidelines,” setting out levels of assurance (LOAs) for identity proofing and authentication. The latest revision reflects the shift toward verifiable credentials and modern assurance methods. U.S. federal agencies and financial institutions often rely on NIST guidance as their baseline for compliance.
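Stripped to its essentials, a W3C Verifiable Credential is just a signed JSON document with a predictable shape. The sketch below is illustrative only: the DIDs, dates, credential type, and proof value are made-up placeholders, not real identifiers.

```python
import json

# Hypothetical W3C Verifiable Credential. Field names follow the VC Data Model;
# the issuer DID, subject DID, dates, and proof value are placeholders.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "DriverLicenseCredential"],
    "issuer": "did:example:dmv",
    "validFrom": "2025-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder123",
        "birthDate": "1990-06-01",
    },
    # The proof cryptographically binds all of the fields above to the issuer's key.
    "proof": {
        "type": "DataIntegrityProof",
        "verificationMethod": "did:example:dmv#key-1",
        "proofValue": "z3FXQ...placeholder",
    },
}

# It is plain JSON: any receiving system knows where to find the claims
# (credentialSubject) and the signature (proof) without a custom integration.
assert json.loads(json.dumps(credential)) == credential
```

The point of the fixed shape is exactly the parse-and-verify guarantee described above: a verifier that has never met this issuer still knows which field holds the claims and which holds the signature.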

Reading the list above, you may realize that one challenge in following this space is the sheer number of credential formats in play—W3C Verifiable Credentials, ISO mDLs, ISO 23220 mdocs, and SD-JWTs, among others. Each has its strengths: VCs offer flexibility across industries, ISO standards are backed by governments and transportation regulators, and SD-JWTs connect privacy-preserving features with the massive JWT ecosystem already used in enterprise systems. The key recommendation for anyone trying to make sense of “what’s best” is not to pick a single winner, but to look for interoperability.

Wallets, issuers, and verifiers should be designed to support multiple formats, since different industries and jurisdictions will inevitably favor different standards. In practice, the safest bet is to align with open standards bodies (W3C, ISO, IETF, OpenID Foundation) and ensure your implementation can bridge formats rather than being locked into just one.

The following sections detail (in a vastly oversimplified way, some may argue) the strengths, weaknesses, and best fit by credential format type.

W3C Verifiable Credentials (VCs)

A flexible, standards-based data model for any kind of digital credential, maintained by the World Wide Web Consortium (W3C).

Strengths: Broadly applicable across industries, highly extensible, and supports advanced privacy techniques like selective disclosure and zero-knowledge proofs.
Limitations: Still maturing; ecosystem flexibility can lead to fragmentation without a specific implementation profile; certification programs are less mature than ISO-based approaches; requires investment in verifier readiness.
Best fit: Used by universities, employers, financial institutions, and governments experimenting with general-purpose digital identity.

ISO/IEC 18013-5 & 18013-7 (Mobile Driver’s Licenses, or mDLs)

International standards defining how mobile driver’s licenses are issued, stored, and verified.

Strengths: Mature international standards already deployed in U.S. state pilots; supported by TSA TSIF testing for federal checkpoint acceptance; backed by significant TSA investment in CAT-2 readers nationwide; privacy-preserving offline verification.
Limitations: Narrow scope (focused on driver’s licenses); complex implementation; limited support outside government and DMV contexts.
Best fit: State DMVs, airports, traffic enforcement, and retail environments handling age-restricted sales.

ISO/IEC 23220-4 (“Mobile Documents,” or mdocs)

A broader ISO definition expanding mDL principles to other official credentials such as passports, residence permits, and social security cards.

Strengths: Extends interoperability to a broader range of credentials; supports both offline and online presentation; aligned with existing ISO frameworks.
Limitations: Still early in deployment; adoption and vendor support are limited compared to mDLs.
Best fit: Immigration, cross-border travel, and civil registry systems.

SD-JWT (Selective Disclosure JSON Web Tokens)

A privacy-preserving evolution of JSON Web Tokens (JWTs), adding selective disclosure capabilities to an already widely used web and enterprise identity format.

Strengths: Easy to adopt within existing JWT ecosystems; enables selective disclosure without requiring new infrastructure or wallets.
Limitations: Less flexible than VCs; focused on direct issuer-to-verifier interactions; limited for long-term portability or offline use.
Best fit: Enterprise identity, healthcare, and fintech environments already built around JWT-based authentication and access systems.

Together, these standards create the backbone of interoperability. They ensure that a credential issued by the California DMV can be recognized at TSA, or that a diploma issued by a European university can be trusted by a U.S. employer. Without them, decentralized identity would splinter into silos. With them, it has the potential to scale globally.

How Trust Flows Between Issuers, Holders, and Verifiers

Decentralized identity works through a triangular relationship between issuers, holders, and verifiers. Issuers (such as DMVs, universities, or employers) create credentials. Holders (the individuals) store them in their wallets. Verifiers (such as banks, retailers, or government agencies) request proofs.

What makes this model revolutionary is that issuers and verifiers don’t need to know each other directly. Trust doesn’t come from an integration between the DMV and the bank, for example. It comes from the credential itself. The DMV signs a driver’s license credential. You carry it. When you present it to a bank, the bank simply checks the DMV’s digital signature.

Think about going to a bar. Today, you hand over a plastic driver’s license with far more information than the bartender needs. With decentralized identity, you would simply present a cryptographic proof that says, “I am over 21,” without revealing your name or address. The bartender’s system verifies the DMV’s signature and that’s it - proof without oversharing.

Cryptography at Work

To make this work, at the core of decentralized identity lies one deceptively simple but immensely powerful concept: the digital signature.

A digital signature is created when an issuer (say, a DMV or a university) uses its private key to sign a credential. This cryptographic signature is attached to the credential itself. When a holder later presents the credential to a verifier, the verifier checks the signature using the issuer’s public key.

If the credential has been altered in any way—even by a single character—the signature will no longer match. If the credential is valid, the verifier has instant assurance that it really came from the claimed issuer.

This creates trust without intermediaries.

Imagine a university issues a digital diploma as a verifiable credential. Ten years later, you apply for a job. The employer asks for proof of your degree. Instead of calling the university registrar or requesting a PDF, you simply send the credential from your wallet. The employer’s system checks the digital signature against the university’s public key. Within seconds, it knows the credential is genuine.
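That flow can be sketched in a few lines. Production systems use asymmetric signatures (e.g., Ed25519 or ECDSA), so the verifier needs only the issuer's public key; to keep this illustration runnable with the Python standard library alone, an HMAC stands in for the signature primitive, and all names are hypothetical.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"university-signing-key"  # stands in for the issuer's private key

def sign_credential(claims: dict) -> dict:
    """Issuer: serialize the claims canonically and attach a signature."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(credential: dict, key: bytes) -> bool:
    """Verifier: recompute the signature; any altered byte makes it fail."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

diploma = sign_credential({"holder": "did:example:alice", "degree": "BSc Computer Science"})
assert verify_credential(diploma, ISSUER_KEY)        # genuine credential verifies

tampered = {"claims": {**diploma["claims"], "degree": "PhD"}, "signature": diploma["signature"]}
assert not verify_credential(tampered, ISSUER_KEY)   # one changed field breaks it
```

Note what is absent: the employer never contacts the university. The mathematics of the signature check replaces the registrar phone call.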

This removes bottlenecks and the need for centralized verification databases. It also shifts the trust anchor from phone calls or PDFs—which can be forged—to mathematics. Digital signatures are unforgeable without the private key, and the public key can be widely distributed to anyone who needs to verify.

Digital signatures also make revocation possible. If a credential is suspended or withdrawn, the issuer can publish a revocation list. When a verifier checks the credential, it not only validates the signature but also checks whether it’s still active.
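One common pattern for such a revocation list (the approach taken by the W3C Bitstring Status List work) publishes status as a compressed bitstring, one bit per issued credential; the verifier fetches the published list and tests the bit at the credential's status index. A simplified, hypothetical sketch:

```python
import base64
import gzip

def make_status_list(revoked_indices, size=16):
    """Issuer: one bit per credential; a set bit marks a revoked credential."""
    bits = bytearray(max(size // 8, 1))
    for i in revoked_indices:
        bits[i // 8] |= 0x80 >> (i % 8)
    # Compress and encode the bitstring for publication at a well-known URL.
    return base64.urlsafe_b64encode(gzip.compress(bytes(bits))).decode()

def is_revoked(encoded_list: str, index: int) -> bool:
    """Verifier: decode the published list and test the credential's bit."""
    bits = gzip.decompress(base64.urlsafe_b64decode(encoded_list))
    return bool(bits[index // 8] & (0x80 >> (index % 8)))

status_list = make_status_list(revoked_indices={3})
assert is_revoked(status_list, 3)       # credential at index 3 was revoked
assert not is_revoked(status_list, 7)   # index 7 is still active
```

Because the verifier only downloads an anonymous bitstring, the issuer never learns which credential was checked, preserving the no-phone-home property of the signature check itself.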

Without digital signatures, decentralized identity wouldn’t work. With them, credentials become tamper-proof, portable, and verifiable anywhere.

Selective Disclosure: Sharing Just Enough

One of the major problems with physical IDs is oversharing. As we detailed in the scenario earlier, you only want to show a bartender that you are over 21, without revealing your name, home address, or exact date of birth. That information is far more than the bartender needs—and far more than you should have to give.

Selective disclosure, one of the other major features underpinning decentralized identity, fixes this. It allows a credential holder to reveal only the specific attributes needed for a transaction, while keeping everything else hidden.

Example in Practice: Proving Age

A DMV issues you a credential with multiple attributes: name, address, date of birth, license number. At a bar, a bartender verifies that you are over 21 by scanning your digital credential QR code. The verifier checks the DMV’s signature on the proof and confirms it matches the original credential. The bartender sees only a confirmation that you are over 21. They never see your name, address, or full birthdate.

Example in Practice: Proving Residency

A city issues residents a digital credential for municipal benefits. A service provider asks for proof of residency. You present your digital credential and the service provider verifies that your zip code is within city limits, without exposing your full street address.

Selective disclosure enforces the principle of data minimization. Verifiers get what they need, nothing more. Holders retain privacy. And because the cryptography ensures the disclosed attribute is tied to the original issuer’s signature, verifiers can trust the result without seeing the full credential.

This flips the identity model from “all or nothing” to “just enough.”
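SD-JWT-style selective disclosure can be approximated in a few lines: the issuer salts and hashes each claim and signs only the digests; the holder later reveals just the salts and values they choose, and the verifier recomputes the digests. This standard-library sketch omits the signature over the digest list and uses hypothetical claim names.

```python
import hashlib
import json
import secrets

def issue(claims: dict):
    """Issuer: salt and hash each claim; the (signed) token carries only digests."""
    disclosures = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
    digests = {
        k: hashlib.sha256(json.dumps([salt, k, v]).encode()).hexdigest()
        for k, (salt, v) in disclosures.items()
    }
    return digests, disclosures  # digests go in the signed token; disclosures stay with the holder

def present(disclosures, reveal):
    """Holder: hand over only the chosen claims, each with its salt."""
    return {k: disclosures[k] for k in reveal}

def verify(digests, presented) -> bool:
    """Verifier: recompute each disclosed digest against the signed token."""
    return all(
        hashlib.sha256(json.dumps([salt, k, v]).encode()).hexdigest() == digests[k]
        for k, (salt, v) in presented.items()
    )

digests, disclosures = issue({"name": "Alice", "address": "1 Main St", "over_21": True})
shown = present(disclosures, reveal=["over_21"])   # the bartender sees only this claim
assert verify(digests, shown)
assert "name" not in shown and "address" not in shown
```

The random salt is what keeps the hidden claims hidden: without it, a verifier could guess-and-hash common values (birthdates, zip codes) against the digest list.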

Example in Practice: Sanctions Compliance

Under the Bank Secrecy Act (BSA) and OFAC requirements, financial institutions must verify that customers are not on the Specially Designated Nationals (SDN) list before opening or maintaining accounts. Today, this process often involves collecting and storing excessive personal data—full identity documents, addresses, and transaction histories—simply to prove a negative.

In our U.S. Treasury RFC response, we outlined how verifiable credentials and zero-knowledge proofs (ZKPs) can modernize this process. Instead of transmitting complete personal data, a customer could present a cryptographically signed credential from a trusted issuer attesting that they have already been screened against the SDN list. A ZKP allows the verifier (e.g., a bank) to confirm that the check was performed and that the customer is not on the list—without ever seeing or storing the underlying personal details. This approach satisfies regulatory intent, strengthens auditability, and dramatically reduces the risks of overcollection, breaches, and identity theft.

ZKPs are particularly important for compliance-heavy industries like finance, healthcare, and government services. They allow institutions to meet regulatory requirements without creating data honeypots vulnerable to breaches.

They also open the door to new forms of digital interaction. Imagine a voting system where you can prove you’re eligible to vote without revealing your identity, or a cross-border trade platform where businesses prove compliance with customs requirements without exposing their full supply chain data.

ZKPs represent the cutting edge of privacy-preserving technology. They transform the old equation, “to prove something, you must reveal everything,” into one where trust is established without unnecessary exposure.

Challenges and the Path Forward

Decentralized identity isn’t just a lofty principle about autonomy and privacy. At its core, it is a set of technologies that make those values real.

Standards ensure interoperability across issuers, wallets, and verifiers.
Digital signatures anchor credentials in cryptographic trust.
Selective disclosure prevents oversharing, giving people control of what they reveal.
Zero-knowledge proofs allow compliance and verification without sacrificing privacy.

These aren’t abstract concepts. They are already protecting millions of people from fraud, reducing compliance costs, and embedding privacy into everyday transactions.

However, there are still hurdles. Interoperability across borders and industries is not guaranteed. Wallets must become as easy to use as a boarding pass on your phone. Verifiers need incentives to integrate credential checks into their systems. And standards need governance frameworks that help verifiers decide which issuers to trust.

None of these challenges are insurmountable, but they require careful collaboration between policymakers, technologists, and businesses. Without alignment, decentralized identity risks becoming fragmented—ironically recreating the silos it aims to replace.

SpruceID’s Role

SpruceID works at this intersection, building the tooling and standards that make decentralized identity practical. Our SDKs help developers issue and verify credentials. Our projects with states, like California and Utah, have proven that privacy and usability can go hand in hand. And our contributions to W3C, ISO, and the OpenID Foundation help ensure that the ecosystem remains open and interoperable.

Our objective is to make identity something you own—not something you rent from a platform. The technology is here. The challenge now is scaling it responsibly, with privacy and democracy at the center.

The trajectory is clear. Decentralized identity is evolving from a promising technology into the infrastructure of trust for the digital age. Like HTTPS, it will become invisible. Unlike many systems that came before it, it is being designed with people at the center from the very start.

This article is part of SpruceID’s series on the future of digital identity in America. Read more in the series:

SpruceID Digital Identity in America Series

Foundations of Decentralized Identity
Digital Identity Policy Momentum
The Technology of Digital Identity (this article)
Privacy and User Control (coming soon)
Practical Digital Identity in America (coming soon)
Enabling U.S. Identity Issuers (coming soon)
Verifiers at the Point of Use (coming soon)
Holders and the User Experience (coming soon)

1Kosmos BlockID

Key Lessons in Digital Identity Verification

The post Key Lessons in Digital Identity Verification appeared first on 1Kosmos.

Thales Group

Thales strengthens airspace surveillance with Aeronáutica Civil de Colombia with advanced ATC system

23 Oct 2025 – Thales supplies a co-mounted Air Traffic Control radar station, featuring its next generation primary and secondary approach radars, STAR NG and RSM NG, located at the Picacho station, to improve air surveillance of the Bucaramanga terminal manoeuvring area (TMA) in Colombia. This contract is part of a countrywide modernisation project initiated by Aeronáutica Civil de Colombia, to improve airspace management and reinforce collaboration with the Colombian Air Force. The unique design and optimised performance of the two radars will enable increased civil and military collaboration in the country.

STAR NG primary surveillance radar & RSM NG Secondary radar © Thales

Thales is modernizing the current Picacho radar station, operated by Colombia’s civil aviation authority, Aeronáutica Civil de Colombia, in partnership with local company GyC to provide new capabilities for airspace surveillance. With its combined set of STAR NG and RSM NG radars, there will now be six Thales radars in operation in the country, allowing air traffic controllers to continuously track the position of aircraft, regardless of the conditions.

The 16-month project, which is already well underway, sees Thales manufacture, deliver and install the co-mounted radars STAR NG (approach Primary Surveillance Radar) and RSM NG (Mode S Secondary Surveillance Radar), while its local partner, GyC, renews the existing infrastructure. This technologically advanced radar station, supported by four additional stand-alone ADS-B ground stations, will strengthen continuous air surveillance in the approach area, in particular the North-Eastern part of Bucaramanga, Colombia.

To date, Thales has successfully completed the Factory Acceptance Tests (FAT) in coordination with Aeronáutica Civil. The required structural reinforcement studies for the tower have been approved, and work is proceeding on schedule to ensure the radar becomes operational within the timeline. The new system will bring many benefits, including:

Enhanced airspace surveillance and sovereignty: The STAR NG will deliver real-time information on both cooperative and non-cooperative aircraft to strengthen Colombia’s ability to monitor and protect its national airspace.
Greater reliability and resilience in aircraft identification: The RSM NG meta-sensor provides more accurate identification and tracking, ensuring continuity of information even in cases of jamming or spoofing attempts.
Robust cybersecurity protection: Both radars are equipped with the latest cybersecurity updates, safeguarding critical surveillance data against evolving digital threats.

With over 50 years’ experience in Air Traffic Control and Surveillance, and more than 1,200 radars operating around the globe, Thales is the trusted leader in this domain worldwide. In Colombia, Thales radars already equip sites in Flandes, Cerro Verde, Santa Ana, Villavicencio and Carimagua, and soon in Picacho. Thales also supplied the APP control centre in San Andrés, and the ATC and Tower simulator in Bogota to support Air Traffic Controller training. The Group has also provided navigation aids in various key sites all over the country.

“Thales is proud to strengthen its 25-year partnership with the Civil Aviation Authority of Colombia. This new contract will enhance the country’s airspace surveillance capabilities by combining the strengths of its primary and secondary radars. It highlights the versatility of Thales’ ATC radars in meeting the needs of both civilian and military operators, and demonstrates our long-term commitment to ensuring excellence in surveillance and air safety systems,” said Lionel de Castellane, Vice President Air Traffic Radars, Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies.

Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Thales in Latin America

With six decades of presence in Latin America, Thales is a global tech leader for the Defence, Aerospace, and Cyber & Digital sectors. The Group is investing in digital and “deep tech” innovations – Big Data, artificial intelligence, connectivity, cybersecurity and quantum technology – to build a future we can all trust.

The company has 2,500 employees in the region, across 7 countries - Argentina, Bolivia, Brazil, Chile, Colombia, Mexico and Panama - with ten offices, five manufacturing plants, and engineering and service centres in all the sectors in which it operates.

Through strategic partnerships and innovative projects, Thales in Latin America drives sustainable growth and strengthens its ties with governments, public and private institutions, as well as airports, airlines, banks, telecommunications and technology companies.


Indicio

Indicio joins NVIDIA Inception Program to bring Verifiable Credentials to AI systems

The post Indicio joins NVIDIA Inception Program to bring Verifiable Credentials to AI systems appeared first on Indicio.
Indicio was the first to recognize the importance of secure authentication for AI agents, launching Indicio ProvenAI, recently recognized by Gartner. With NVIDIA Inception, we’re going to take this to a new level — a secure authentication and decentralized governance solution for autonomous systems and the internet of AI.

By Trevor Butterworth

Indicio has officially joined the NVIDIA Inception Program, a global initiative that supports startups advancing artificial intelligence and high-performance computing. Indicio will focus on applying decentralized identity and Verifiable Credential technology — in the form of Indicio ProvenAI — to AI systems.

ProvenAI enables AI agents and their users to authenticate each other using decentralized identifiers and Verifiable Credentials. This means an AI agent can cryptographically prove its identity to the entity it is interacting with, and that entity can do the same, all before any data is shared.

Once identified, a person or organization can give permission to the AI agent to access their data and can delegate authority to the agent to act on behalf of the person or organization.

To monetize AI, agents and users need to be able to trust each other

Agentic AI and AI agents cannot fulfill their mission without accessing data. The more data they can access, the easier it is to execute a task. But this exposes the companies that use them to significant risk.

How can they be sure their agent is interacting with a real person, an authentic user or customer? And how can that user, similarly, verify the authenticity of the agent?

The simplest way is to issue each with a Verifiable Credential, a cryptographic way to authenticate not only an identity but the data that is being shared. Importantly, this cryptography is AI-resistant, meaning it can’t be reengineered by people using AI to alter the underlying information.

The critical benefit of using Verifiable Credentials for this task is that there is no need for either party to phone home to crosscheck a database during authentication or authorization. Because a Verifiable Credential is digitally signed, the original credential issuer can be verified without having to contact the issuer. The information in the credential can also be cryptographically checked to see if it has been altered. As a result, if you know the identity of the credential issuer and you trust that issuer, you can trust the contents of the credential and act on them instantly.

With Verifiable Credentials, AI’s GDPR nightmare goes away

For AI agents to be useful, they must be able to access personal data — lots of it. For this to be compliant with data privacy regulations such as GDPR, a person must be able to consent to share their data. There’s just no way of getting around this.

Verifiable Credentials make consent easy: a person or organization holds their data in a credential, or can provide a party with a credential containing permission to access data. Once a user consents to share their data, you’ve met a critical requirement of GDPR, and that decision can be recorded for audit.

But Verifiable Credentials — or at least some credential formats — also allow for selective disclosure or zero-knowledge proofs, which means that the data and purpose for which it is being used can be minimized, thereby fulfilling other GDPR requirements.
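Selective disclosure is commonly implemented with salted claim hashes, the approach used by formats such as SD-JWT: the issuer signs commitments to every claim, and the holder reveals only the salts and values they choose. A minimal sketch, with illustrative claim names:

```python
import hashlib
import os

def salted_hash(name: str, value: str, salt: str) -> str:
    """Commitment to a single claim; the salt prevents guessing attacks."""
    return hashlib.sha256(f"{salt}:{name}:{value}".encode()).hexdigest()

# Issuer: commit to every claim, publishing only salted hashes
# (in a real credential, this commitment set is what gets signed).
claims = {"name": "Ada", "birth_year": "1990", "country": "DE"}
salts = {k: os.urandom(8).hex() for k in claims}
committed = {k: salted_hash(k, v, salts[k]) for k, v in claims.items()}

# Holder: disclose only 'country' by revealing that claim's salt and value.
disclosure = {"country": {"value": claims["country"], "salt": salts["country"]}}

# Verifier: checks the disclosed claim against the commitments
# without ever learning 'name' or 'birth_year'.
for name, d in disclosure.items():
    assert committed[name] == salted_hash(name, d["value"], d["salt"])
```

This is how data minimization falls out of the credential format itself: the verifier gets cryptographic assurance about exactly one claim and nothing else.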

As AI agents will also need to access large amounts of data belonging to people and organizations that is held elsewhere, a Verifiable Credential can be used by a person or organization to delegate authority to access that data, with rock-solid assurance that this permission has been given by the legitimate data subject.
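A delegation grant of this kind can be sketched as a scoped, time-limited record that the data holder checks before releasing anything. The structure and field names below are hypothetical; in practice the grant would itself be a signed Verifiable Credential issued by the data subject:

```python
from datetime import datetime, timezone, timedelta

def make_delegation(subject_did: str, agent_did: str,
                    scope: list, days_valid: int = 7) -> dict:
    """Data subject grants an AI agent scoped, expiring access."""
    expires = datetime.now(timezone.utc) + timedelta(days=days_valid)
    return {"issuer": subject_did, "agent": agent_did,
            "scope": scope, "expires": expires.isoformat()}

def authorize(delegation: dict, agent_did: str, requested_scope: str) -> bool:
    """Data holder checks agent identity, scope, and expiry before release."""
    if delegation["agent"] != agent_did:
        return False
    if requested_scope not in delegation["scope"]:
        return False
    return datetime.fromisoformat(delegation["expires"]) > datetime.now(timezone.utc)

grant = make_delegation("did:example:alice", "did:example:agent42",
                        scope=["read:travel-history"])
assert authorize(grant, "did:example:agent42", "read:travel-history")
assert not authorize(grant, "did:example:agent42", "read:medical-records")
```

Because the grant names both the subject and the agent, the decision is auditable: the data holder can record exactly who authorized what, for whom, and until when.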

Decentralized governance, the engine for autonomous systems

These features create a seamless way for AI agents to operate. But things get even more exciting when we look at the way Verifiable Credentials are governed.

With Indicio Proven and ProvenAI, a network is governed by machine-readable files sent to the software of each participant in the network (i.e., the credential issuers, holders, and verifiers). These files tell each participant who is a trusted issuer, who is a trusted verifier, and which information needs to be presented for which use case, in what order.

Indicio DEGov enables the natural authority for a network or use case to orchestrate interaction by publishing a machine-readable governance file. And this orchestration can be configured to respect hierarchical levels of authority. The result is seamless interaction driven by automated trust.
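To illustrate the idea, here is a minimal sketch of a machine-readable governance file being applied locally by a participant's software. The field names and DIDs are invented for illustration; the actual Indicio DEGov format is not described in this post:

```python
import json

# Hypothetical governance file published by the network authority.
GOVERNANCE_JSON = """
{
  "trusted_issuers": ["did:example:airline"],
  "trusted_verifiers": ["did:example:border-agency"],
  "workflows": {
    "arrival-check": ["passport-credential", "visa-credential"]
  }
}
"""

gov = json.loads(GOVERNANCE_JSON)

def may_verify(verifier_did: str, workflow: str) -> bool:
    """Each participant's agent applies the published rules locally,
    with no call back to the authority at interaction time."""
    return (verifier_did in gov["trusted_verifiers"]
            and workflow in gov["workflows"])

assert may_verify("did:example:border-agency", "arrival-check")
# The workflow entry also fixes which credentials are required, in order:
assert gov["workflows"]["arrival-check"] == ["passport-credential",
                                             "visa-credential"]
```

Updating the published file is how the authority re-orchestrates the whole network: every participant's software picks up the new rules and behaves accordingly.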

Now think about autonomous systems where each connected element has a Verifiable Identity that can be orchestrated to interact with an AI agent. You have a very powerful way to apply authentication and information sharing to very complex systems in a highly secure way. Every element of this system can be known, can authenticate another element, and can share data in complex workflows. Each interaction can be made secure and element-to-element.

Indicio is making a safe, secure, trusted AI future possible

Secure and trustworthy authentication is foundational to unlocking the market benefits of AI and enabling AI networks to interoperate and scale. This is why we were the first decentralized identity company to connect Verifiable Credentials to AI and the first to offer a Verifiable Credential AI solution — Indicio ProvenAI — recognized by Gartner in its latest report on decentralized identity.

We’re tremendously excited to be a part of the NVIDIA Inception Program. We see decentralized identity as a catalytic technology for AI, one that can quickly unlock market opportunities and support AI agents and agentic AI.

Learn how Indicio ProvenAI can help your organization build secure, verifiable AI systems. Contact Indicio to schedule a demo or explore integration options for your enterprise.

The post Indicio joins NVIDIA Inception Program to bring Verifiable Credentials to AI systems appeared first on Indicio.


Ontology

Decentralized Messaging Just Forked Four Ways

Messaging isn’t broken because of encryption. It’s broken because no one can prove who’s on the other side.

Spam, scams, and impersonation persist because every “encrypted” app still depends on centralized identity. Phone numbers, emails, usernames. The same systems that made spam a trillion-dollar industry now anchor your private chats.

A new wave of decentralized messengers wants to fix that. But it’s already fragmenting.

Right now, four different philosophies are battling to define “trust” in the next generation of messaging.

DID-based agents: trust built in

A serious effort is happening around DIDComm, a messaging protocol from the Decentralized Identity Foundation. It’s not a chat app. It’s a framework where messages carry cryptographic proof of identity, not a phone number or email.

DIDComm v2.1 formalizes this idea: secure, encrypted, transport-agnostic messages exchanged between agents running in wallets, servers, or IoT devices, each anchored to a Decentralized Identifier (DID).
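To make this concrete, here is a sketch of a DIDComm v2 plaintext message before it is signed or encrypted into an envelope. The header names follow the DIDComm Messaging spec; the DIDs, message type, and body content are illustrative:

```python
import json
import time
import uuid

def make_message(sender_did: str, recipient_did: str, body: dict) -> dict:
    """Build a DIDComm v2 plaintext message; in transit it would be
    signed and/or encrypted rather than sent as bare JSON."""
    return {
        "typ": "application/didcomm-plain+json",
        "id": str(uuid.uuid4()),          # unique message id
        "type": "https://didcomm.org/basicmessage/2.0/message",
        "from": sender_did,               # identity travels with the message
        "to": [recipient_did],
        "created_time": int(time.time()),
        "body": body,
    }

msg = make_message("did:example:alice", "did:example:bob",
                   {"content": "shipment certificate attached"})
assert msg["typ"] == "application/didcomm-plain+json"
print(json.dumps(msg, indent=2))
```

Note what is absent: no phone number, no email, no username. The `from` field is a DID, so the recipient's agent can resolve it and check cryptographic proof of who sent the message.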

It’s programmable trust. Machines and humans can exchange verified credentials in real time. A supplier can prove a shipment is certified. A user can verify a customer support agent without revealing their ID card.

The downside: UX. DIDComm is infrastructure. Most people won’t see it; they’ll feel it… if developers get it right.

Wallet messaging: your address is your inbox

The second camp lives inside wallets. XMTP is doing great work here. Your crypto address doubles as a messaging endpoint. Coinbase Wallet now supports XMTP chats, pushing the idea of “wallet = identity = communication channel.”

Wallet-based messaging fits perfectly for Web3 commerce. You can receive DAO votes, marketplace updates, or direct messages tied to verified wallets. You know the sender because their address and on-chain record prove it.

It’s not free from issues. Spam and Sybil attacks follow open systems. But the framework makes portable identity possible. You can leave one app and take your chat history and reputation with you.

P2P relays: censorship-proof, messy, alive

Waku, Nostr, and SimpleX occupy the third camp: decentralized relays and gossip networks. These protocols trade convenience for censorship resistance.

Waku v2 is a cleaned-up successor to Whisper. It stores, forwards, and routes messages across peers with no central servers and no central discovery. Nostr’s new NIP-17 spec adds encrypted DMs to its social relay system. SimpleX leans even harder on privacy, routing messages through one-time relays and Tor channels.
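Nostr illustrates how these networks stay tamper-evident without trusted servers: per NIP-01, an event's id is the SHA-256 hash of a canonical serialization of its fields, so any client can recompute it and reject anything a relay has altered. A sketch (the pubkey and values are illustrative):

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: SHA-256 of the canonical
    JSON array [0, pubkey, created_at, kind, tags, content]."""
    payload = json.dumps([0, pubkey, created_at, kind, tags, content],
                         separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(payload.encode()).hexdigest()

eid = nostr_event_id("a" * 64, 1730000000, 1, [], "hello relays")
assert len(eid) == 64          # 32-byte hex digest
# Deterministic: every client computes the same id for the same event.
assert eid == nostr_event_id("a" * 64, 1730000000, 1, [], "hello relays")
```

Because the author then signs this id with their key, identity and integrity travel with the event itself rather than with any particular relay.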

It’s the purest form of decentralization, and as such the hardest to scale. No account recovery, no global discovery, and little spam control. These networks will matter most where freedom matters most.

MLS and the RCS landgrab

While the crypto world argues about decentralization, the telecom giants are quietly shipping end-to-end encryption to billions.

The Messaging Layer Security (MLS) protocol, an IETF standard, will soon power RCS, the default chat layer across Android and iOS. Apple finally joined Google and the GSMA alliance to support it.

This isn’t decentralized, but it’s historic. MLS brings group E2EE to carrier messaging for the first time. Billions of phones will suddenly be encrypted by default.

What it doesn’t solve: identity and portability. Your phone number remains your passport. You can’t take your reputation with you when you switch apps.

The X factor

Then there’s X.

After killing off its previous encryption scheme, Elon Musk’s platform is rebuilding DMs as “XChat.” The marketing talks about “Bitcoin-style encryption.” The code doesn’t.

It’s not decentralized. It’s still a closed, centralized system with an uncertain cryptographic base. The value lies in the user base, not the trust model.

X proves a point: big platforms can bolt on crypto, but they can’t decentralize identity. The DNA doesn’t match.

The next layer: identity, reputation, portability

Every messaging system now faces the same three questions:

1. Who are you? Centralized apps rely on phone numbers. Decentralized systems use DIDs and verifiable credentials.
2. Can I trust you? That’s the spam problem. Without portable reputation, decentralized networks drown in noise. Systems like Orange Protocol can assign on-chain reputation scores that follow you across apps. Good behavior and bad.
3. Can I leave? Real decentralization means you can take your messages, graph, and identity elsewhere. Wallet-based and DID-based protocols are getting close.

How Ontology fits

Ontology has spent years building the rails for these questions.

- ONT ID gives users a verifiable, self-controlled digital identity.
- Orange Protocol builds portable reputation and trust scoring across platforms.
- ONTO Wallet connects both: a messenger, identity agent, and asset hub in one.
- Ontology Network provides the reliable, low-cost infrastructure, designed for identity-centric use.

Reality check

Decentralized messaging isn’t utopia.

Matrix, the biggest federated network, recently broke compatibility to fix protocol-level flaws. Privacy-first tools like SimpleX attract both activists and abuse. Wallet UX remains fragile.

Still, the direction is clear. The next messaging war won’t be fought over stickers or features. It’ll be fought over trust without dependency.

The takeaway

Encryption is table stakes.

Identity and reputation are the new moat.

The protocol that nails both, without locking you in, wins.

And when that happens, messaging stops being an app.

It becomes an ecosystem of verifiable relationships.

That’s the real revolution.

Decentralized Messaging Just Forked Four Ways was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


ComplyCube

Canada Bank Fined $601,139.80 for Five Major AML Breaches

FNBC has been fined $601,139.80 for five AML violations. FINTRAC discovered that the firm did not meet AML standards, including submitting suspicious activity reports, updating client information, and performing due diligence. The post Canada Bank Fined $601,139.80 for Five Major AML Breaches first appeared on ComplyCube.



Ocean Protocol

The ASI Alliance from Ocean’s Perspective

By: Bruce Pon

People are rightly angry and frustrated. No one is a winner in this current state of unease, lack of information and transparency, and mudslinging. Ocean doesn’t see the benefit of throwing around unfounded and false allegations or the attempts to sully the reputations of the projects and people — it just damages both the ASI and Ocean communities unnecessarily.

Ocean has chosen to remain silent until now, out of respect for the ongoing legal processes. But given so many flagrant violations of decency, Ocean would like to take an opportunity to rebut many publicly voiced false allegations, libels, and baseless claims being irresponsibly directed towards the Ocean Protocol project. The false and misleading statements serve only to further inflame our community, while inciting anger and causing even more harm to the ASI and Ocean communities than is necessary.

There are former $OCEAN token holders who converted to $FET and who now face the dilemma of whether to stay with $FET, return to $OCEAN, or to liquidate and be completely done with the drama.

Rather than throw unsubstantiated jabs, I would like to provide a full context with supporting evidence and links, to address many of the questions around the ASI Alliance, Ocean’s participation, and the many incorrect allegations thrown out to muddy the waters and sow confusion among our community.

This blogpost will be followed up with a claim-by-claim rebuttal of all the allegations that have been directed towards Ocean since October 9, 2025 but for now, this blog gives the context and Ocean’s perspective.

I encourage you to read it all, as it reflects months of conversations that reveal the context and progression of events, so that you can best understand why Ocean took steps to chart a separate course from the ASI Alliance. We hope the ASI Alliance can continue its work and we wish them well. Meanwhile, Ocean will go its own way, as we have every right to do.

These are the core principles of decentralization — non-coercion, non-compulsion, individual agency, sovereign property ownership and the power of you, the individual, to own and control your life.

Table of Contents

∘ 1. The Builders
∘ 2. June 2014 — Audacious Goals
∘ 3. January 2024 — AI Revolution in Full Swing
∘ 4. March 2024 — ASI Alliance
∘ 5. April 2024 — A Very Short Honeymoon
∘ 6. May 2024 — Legal Dispute Delays the ASI Launch
∘ 7. June 2024 — Re-Cap Contractual Obligations of the ASI Alliance
∘ 8. August 2024 — Cudos Admittance into ASI Alliance
∘ 9. December 2024 — SingularityNET’s Spending, Declining $FET Token Price and the Ocean community treasury
∘ 10. January 2025 — oceanDAO Shifts from a Passive to an Active Token Holder
∘ 11. May 2025 — oceanDAO Establishes in Cayman
∘ 12. June 2025 — Fetch’s TRNR “ISI” Deal
∘ 13. June 2025 — oceanDAO becomes Ocean Expeditions
∘ 14. June 2025 — ASI Alliance Financials
∘ 15. July 2025 — Ocean Expeditions Sets Out to Diversify the Ocean community Treasury
∘ 16. August 2025 — Ocean Requests for a Refill of the $OCEAN/$FET Token Migration Contract
∘ 17. August 2025 — A Conspiracy To Force Ocean to Submit
∘ 18. October 2025 — Ocean Exits the ASI Alliance
∘ 19. Summary

The Builders

Trent and I are dreamers with a pragmatic builder ethos. We have done multiple startups together and what unifies us is an unquenchable belief in human potential and technological progress.

To live our beliefs, we’ve started multiple companies between us. One of the most rewarding things I’ve done in my life is to join forces with, and have the honor to work with Trent.

Builders create an inordinate amount of value for society. Look at any free and open society where capital is allowed to be deployed to launch new ideas — they thrive by leveraging the imagination, brainpower and hard work needed to bring about technological progress. These builders attract an ecosystem of supporters and services, but also as is natural, those who seek to earn easy money.

Builders also start projects with good faith, forthrightness and a respect for the truth, since everyone who has played this game knows that the easiest person to lie to is yourself. So, it’s best to constantly check assumptions and stand grounded on truth, even if wildly uncomfortable. Truth is always the best policy, sometimes because it is the hardest path. It also means that one doesn’t need to live a web of lies, in a toxic environment and constantly wondering when the lie catches up with you.

Builders focus on Win-Win outcomes, seeking to maximize value for everyone in the game, and make the best of bad situations by building our way through it. No one wants to waste time, with what limited time one has on Earth, least of all, to leave the world worse off for being in it. We all want to have a positive impact, however small, so that our existence has meaning in the void of the cosmic whole.

June 2014 — Set Audacious Goals

Twelve years ago, Trent and I decided to try something audacious — to build a global, decentralized network for data and AI that serves as a viable alternative to the centralized, corrupted and captured platforms. We had been inspired by Snowden, Assange and Manning, and horrified to learn what lies we’d been told. If successful, we could impact the lives of millions of developers, who in turn, could touch the lives of everyone on earth. It could be our technology that powered the revolution.

Trent had pulled this off before. In a prior startup, Trent was a pioneer at deploying AI at scale. His knowledge and the software he’d built helped to drive Moore’s Law for two decades. Every single device you hold or use has a small piece of Trent’s intellect, embedded at the atomic level to make your device run so you can keep in touch with loved ones, scroll memes, and do business globally.

We’d learnt from the builders of the Domain Name System (DNS), Jim Rutt and David Holtzman, who are legends in their own right, that the most valuable services on earth are registries — Facebook for your social graph, Amazon for purchases, and, surprisingly, governments with all the registry services they provide. We delved into the early foundations of the Internet and corresponded with Ted Nelson, one of the architects of our modern internet in the early 1960s. Ted was convinced that the original sin of the internet was to strip away the “ownership” part of information and intellectual property.

Blockchains restored this missing connection. As knowledge and transactions were all to be ported to blockchains over the next 30 years, these blockchain registries would serve as the most powerful and valuable databases on earth. They were also public, free and open to anyone. A magical epiphany was then made by Trent. It wouldn’t be humans that drew intelligence and insight, it would be AI. The logical users of the eventual thousands of blockchains are AI algorithms, bots and agents.

After 3.5 years on ascribe and then BigchainDB, Ocean was the culmination of our work as pioneers in the crypto and blockchain space. Trent saw that the logical endpoint for all these L0 and L1 blockchains was a set of powerful registries for data and transactions. Ocean was our project to build this bridging technology between the existing world (which was 2017 by now) and the future world where LLMs, agents and other AI tools could scour the world and make sense of it for humans.

January 2024 — AI Revolution in Full Swing

ChatGPT had been released 14 months prior, in November 2022, launching the AI revolution for consumers and businesses. Internet companies committed hundreds of billions to buy server farms, AI talent was getting scooped up for seven-to-nine figure sums and the pace was accelerating fast. Ocean had been at the forefront on a lonely one-lane road and overnight the highway expanded to an eight-lane freeway with traffic zooming past us.

By that time, Trent and I had been at it for 10 years. We’d built some amazing technologies and moved the space forward with fundamental insights on blockchains, consensus algorithms, token design, and AI primitives on blockchains, with brilliant teammates along the way. We’d launched multiple initiatives with varying degrees of adoption and success. We’d seen a small, vibrant community, “The Ocean Navy,” led by Captain Donnie “BigBags”, emerge around data and AI, bound with a cryptotoken — the $OCEAN token.

We were also feeling the fatigue of managing a large community that incessantly wanted the token price to go up, with expectations of constant product updates, competitions, and future product roadmaps. I myself had been on the startup grind since 2008, having unwisely jumped into blockchain to join Trent immediately after exiting my first startup, without taking any break to reflect and recover. By the beginning of 2024, I was coming out of a deep 2-year burnout where it had been a struggle to just get out of bed and accomplish one or two things of value in a day. After 17 years of unrelenting adrenaline and stress, my body and mind shut down and the spirit demanded payment. The Ocean core team was fabulous, they stepped in and led a lot of the efforts of Ocean.

When January 2024 came around, both Trent and I were in reasonable shape. He and I had a discussion on “What’s next?” with Ocean. We wanted to reconcile the competing demands of product development and the expectations of the Ocean community for the $OCEAN token. Trent and I felt that the AI space was going to be fine with unbridled momentum kicked off with ChatGPT, and that we should consider how Ocean could adapt.

Trent wanted to build hardcore products and services that could have a high impact on the lives of hundreds of people to start — a narrow but deep product, rather than aim for the entire world — broad but shallower. The rest of the Ocean team had been working on several viable hypotheses at varying scales of impact. For me, after 12 years of relentless focus in blockchain, I wanted to explore emerging technologies and novel concepts with less day-to-day operational pressure.

Trent and I looked at supporting the launch of 2–5 person teams as their own self-contained “startups” and then carving out 20% of revenue to plow back into $OCEAN token buybacks. We also bandied about the idea of joining up with another mature project in the crypto-space, where we could merge our token into theirs or vice versa. This had the elegant outcome where both Trent and I could be relieved of the day-to-day pressures, offloading the token and community management, and growing with a larger community.

March 2024 — ASI Alliance

In mid-March, Humayun Sheikh (“Sheikh”) reached out to Trent with an offer to join forces. Fetch and SingularityNet had been in discussions for several months on merging their projects, led and driven by Sheikh.

Even though Fetch and SingularityNet were not Ocean’s first choice for a partnership, and the offer came seemingly out of the blue, I was brought in the next day. Within 5 days, all three parties announced a shotgun marriage between Fetch, SingularityNet and Ocean. To put it bluntly, we, Ocean, had short-circuited our slow-brain with our fast-brain, because we had prepped ourselves for this type of outcome when it appeared, even with candidates that we hadn’t previously considered, and we rationalized it.

24 Mar 2024 Call between Dr. Goertzel & Bruce Pon

The terms for Ocean to join the ASI Alliance were the following:

1. The Alliance members will be working towards building decentralized AI.
2. Foundations retain absolute control over their treasuries and wallets.
3. It is a tokenomic merger only, and all other collaborations or activities would be decided by the independent Foundations.

Sovereign ownership over property is THE core principle of crypto and it was the primary condition of Ocean joining the ASI Alliance. Given that there were two treasuries for the benefit of the Ocean community — a hot wallet managed by Ocean Protocol Foundation for operational expenses and a cold wallet owned and controlled by oceanDAO (the independent 3rd party collective charged with holding $OCEAN passively) — we wanted to make sure that sovereign property and autonomy would be respected. In these very first discussions, SingularityNet also acknowledged the existence of the oceanDAO as a separate entity from Ocean. With this understanding for ownership of treasuries in place, Ocean felt comfortable to move forward. Ocean announced that it was joining the ASI Alliance on 27 March 2024.

April 2024 — A Very Short Honeymoon

Immediately after the announcement, cracks started to appear and the commercial understandings that had induced Ocean to enter into the deal started to be violated or proven untrue.

SingularityNet confided in us that they were very grateful that Ocean could join since their own community would balk at a merger solely with Fetch, citing the SingularityNet community skepticism of Mr. Sheikh and Fetch.

Ben Goertzel of SingularityNet spoke of how Sheikh would change his position on various issues, and of Sheikh’s desire to “pump and dump” a lot of $FET tokens.

Immediately after the ASI Alliance was announced, SingularityNet implemented a community vote to mint $100 million worth of $AGIX with the clear intent of selling them down, via the token bridge and migration contract, into our newly shared $ASI/$FET liquidity pools.

The community governance voting process was a farce. Fetch is 100% owned and controlled by Sheikh, who holds over 1.2 billion $FET, so any “community” vote was guaranteed to pass. For SingularityNet, the voting was more uncertain, so SingularityNet was forced to massage the messages to convince major token holders to get on board. Ocean took its own pragmatic approach to community voting with the position that if $OCEAN holders didn’t want $FET, they could sell their $OCEAN and move on. Ocean wanted to keep the “voting” as thin as possible so that declared preferences matched actual preferences.

Mr. David Lake (“Lake”), Board Member of SingularityNet also disclosed that Sheikh treated network decentralization as an inconvenient detail that he didn’t particularly care about and only paid “lip service” to it.

In hindsight this should have been a major red flag.

April 3, 2024 — Lake to Pon

Ocean discovered that the token migration contracts, which SingularityNet had represented as being finished and security audited, were nowhere near finished or security audited.

A combined technology roadmap assessment showed little overlap, and any joint initiatives for Ocean would be “for show” and expediency rather than serving a practical, useful purpose.

The vision of bringing multiple new projects on board — the vision sold to Ocean for ASI — hit the wall when Fetch asserted that their L1 chain would retain primacy, so they could keep their $FET token. This meant that only ERC20 tokens could be incorporated into ASI in the future. ASI would not be able to integrate any other L1 chain into the Alliance.

This presented a dilemma for Ocean. Ocean was working closely with Oasis ($ROSE) and had planned on deeper technical integrations on multiple projects. If Ocean’s token was going to become $FET but Ocean’s technology and incentives were on $ROSE, there was an obvious mismatch.

Ocean worked feverishly for three weeks to develop integration plans, migration plans and technology roadmaps that could bridge the mismatch but, in the end, the options were rejected outright.

Summary of Ocean’s Proposal and Technical Analysis that was presented to Fetch and SingularityNET

Outside of technology, the Ocean core team were being dragged into meeting-hell with 4–6 meetings a day, sucking up all our capacity to focus on delivering value to the Ocean community. ASI assumed the shape of SingularityNet, which was very administratively heavy and slow.

No one had done proper due diligence. We’d all made a mistake of jumping quickly into a deal.

At the end of April 2024, 1 month after signing the ASI Token Merger Agreement, Ocean asked to be let out of the ASI Alliance. Ocean had ignored the red flags for long enough and wanted to part ways amicably with minimal damage. Ocean drafted a departure announcement that was shared in good faith with Fetch and SingularityNet.

April 25/26 — Sheikh and Pon

The next day emails were exchanged, starting with one from Sheikh to me, threatening Ocean and myself with a lawsuit that would result in “significant damages.”

Believing that Sheikh shared a commitment to the principles of non-coercion and non-compulsion, I responded to point out that Sheikh’s escalation path went immediately to a lawsuit.

Sheikh then accused Ocean of being guilty of compelling and coercing the other parties against their will, and made clear that any public statement about Ocean leaving the ASI Alliance would be met with a lawsuit.

I re-asserted Ocean’s right to join or not join ASI, and asked that the least destructive path be chosen to minimize harm on the Fetch, SingularityNet and Ocean communities.

For Ocean, it was regrettable that we’d jumped into a deal haphazardly. At the same time, Ocean had signed a contract and we valued our word and our promises. We knew that it was a mistake to join ASI, but we’d gotten ourselves into a dilemma. We decided to ask to be let out of the ASI contract.

May 2024 — Legal Dispute Delays the ASI Launch

Ocean’s request to be let out of the ASI Alliance was met with fury and aggression, and legal action was initiated immediately against Ocean. Sheikh was apparently petrified of the market reaction and refused to entertain anything other than a full merger.

Over the month of May 2024, with the residual goodwill from initial March merger discussions, I negotiated with David Levy who was representing Fetch, with SingularityNet stuck in the middle trying to play referee and keep everyone happy.

May 2, 2024 — Lake and Pon

Trent put together an extensive set of technical analyses exploring possible options for all parties to get what they wanted. Fetch wanted a merger while keeping their $FET token. Ocean needed a pathway that wouldn’t obstruct us to integrate with Oasis. SingularityNet wanted everyone to just get along.

By mid-May sufficient progress had been made so that I could lay down a proposal for Ocean to rejoin the ASI initiative.

May 12, 2024 — Pon to Sheikh

By May 24, 2024 we were coming close to an agreement.

Given our residual reluctance to continue with the ASI Alliance, Ocean argued for minority rights so that we would not be bullied with administrative resolutions at the ASI level that compelled us to do anything that did not align with our values or priorities.

May 24, 2024 — Pon to Levy

Despite Fetch and SingularityNET each (separately) expressing to Ocean concerns that the other was liquidating too many tokens too quickly (or had the intention to do so), we strongly reiterated the sacrosanct principle of all crypto, that property in your wallet is yours. SingularityNet agreed, wanting the option to execute airdrops on the legacy Singularity community if they deemed it useful.

In short:

· Ocean would not interfere with Fetch’s or SingularityNet’s treasury, nor should they interfere with Ocean (or any other token holder).
· Fetch’s, SingularityNET’s and Ocean’s treasuries were sole property of the Foundation entities, and the owning entities had unencumbered, unrestricted rights to do as they wish with their tokens.

oceanDAO, the Ocean community treasury DAO which had been previously acknowledged by SingularityNET in March at the commencement of merger discussions, then came up over multiple discussions with Mr. Levy.

A sticking point in the negotiations appeared when Fetch placed significant pressure to compel Ocean (and oceanDAO) to convert all $OCEAN to $FET immediately after the token bridge was opened. Ocean did not control oceanDAO, and Ocean reiterated forcefully that oceanDAO would make their own decision on the token swap. No one could compel a 3rd party to act one way or the other, but Ocean would give best efforts to socialize the token swap benefits.

In keeping with an ethos of decentralization, Ocean would support any exchange choosing to de-list $OCEAN but Ocean would not forcefully advocate it. Ocean believed that every actor — exchange, token holder, Foundation — should retain their sovereign rights to do as they wish unless contractually obligated.

May 24, 2024 — Pon to Levy

As part of this discussion, Ocean disclosed to Fetch all wallets that it was aware of for both Ocean and the oceanDAO collective. Notably, Ocean clearly highlighted to Fetch that oceanDAO was a separate entity, independent from Ocean (i.e., Ocean Protocol Foundation), and not in any way controlled by Ocean.

May 24, 2024 — Pon to Levy (Full disclosure of all Ocean Protocol Foundation and oceanDAO wallets)

Fetch applied intense pressure on Ocean to convert all $OCEAN treasury tokens (including oceanDAO treasury tokens) into $FET. In fact, Fetch sought to contractually compel Ocean to do so in the terms of the ASI deal. Ocean refused to agree to this, since, as already made known to Fetch, the oceanDAO was an independent 3rd party.

Finally acknowledging the reality of the oceanDAO as a 3rd party, Fetch.ai agreed to include the following term in the ASI deal:

Ocean “endeavors on best efforts to urge the oceanDAO collective to swap tokens in their custody” into $FET/$ASI as soon as the token bridge was opened, acknowledging that Ocean could not be compelled to force a 3rd party to act.

Being close to a deal, we moved on to the Constitution of the ASI entity (Superintelligence Alliance Ltd). As was clear from the Constitution, the only role of the ASI entity was the assessment and admittance of new Members, and the follow-on instruction to Fetch to mint the necessary tokens to swap out the entire token supply of the newly admitted party.

This negotiated agreement allowed Ocean to preserve its full independence within the ASI Alliance so that it could pursue its own product roadmap based on pragmatism and market demand, rather than fake collaborations within ASI Alliance for marketing “show.” Ocean had fought, over and over again, for the core principle of crypto — each wallet holder has a sole, unencumbered right to their property and tokens to use as they saw fit.

It also allowed Ocean to reject any cost sharing on spending proposals which did not align to Ocean’s needs or priorities, to the significant dismay of Fetch and SingularityNet. They desired that Ocean would pay 1/3 of all ASI expenses that were proposed, even those that were nonsensical or absurd. Ocean’s market cap made up 20% of ASI’s total market cap, so whatever costs were commonly agreed, Ocean would still be paying “more than its fair share” relative to the other two members.

May 24, 2024 — Pon to Levy
In early June, Ocean, Fetch and SingularityNet struck a deal and agreed to proceed. Fetch announced that the ASI merger would move forward on July 15, 2024.

Ocean reasoned that a protracted legal case would not have helped anyone, $OCEAN holders would have a good home with $FET, that there were worse outcomes than joining $FET and that it would relieve the entire Ocean organization from the day-to-day management of community expectations, freeing the Ocean core team to focus on technology and product.

From June 2024, the Ocean team dove in to execute, in support of the ASI Alliance merger. Ocean had technical, marketing and community teams aligned across all three projects. The merger went according to plan, in spite of the earlier hiccups.

Seeing that there would potentially be technology integration amongst the parties moving forward, the oceanDAO announced through a series of blog posts that all $OCEAN rewards programs would be wound down in an orderly manner and that the use of Ocean community rewards would be re-assessed at a later date.

51% treasury for the Ocean community

It’s possible that it was at this juncture that Sheikh mistakenly assumed that the Ocean treasury would be relinquished solely for ASI Alliance purposes. This is what may have led to Sheikh’s many false allegations, libelous claims and misleading statements that Ocean somehow “stole” ASI community funds when, throughout the entire process, Ocean has made forceful, consistent assertions for treasury sovereignty.

Meanwhile, the operational delay had somewhat dampened the enthusiasm in the market for the merger. SingularityNet conveyed to Ocean that this had likely prevented Sheikh from using the originally anticipated hype and increased liquidity to exit large portions of his $FET position with a huge profit for himself. As it turned out, Ocean’s hesitation, driven by valid commercial concerns, may have inadvertently protected the entire ASI community by taking Sheikh’s planned liquidation window away.

In spite of any earlier bad blood, I sent Sheikh a private note immediately upon hearing that his father was gravely ill.

June 10, 2024 — Pon to Sheikh

June 2024 — Recap: Contractual Obligations of the ASI Alliance

To take a quick step back, the obligations for the ASI Alliance were the following:

· Fetch would mint 610.8m $FET to swap out all Ocean holders at a rate of 0.433226 $FET / $OCEAN
· Fetch would inject 610.8m $FET into the token bridge and migration contract so every $OCEAN token holder could swap their $OCEAN for $FET.

In exchange, Ocean would:

· Swap a minimum of 4m $OCEAN to $FET (Ocean Protocol Foundation only had 25m $OCEAN, of which 20m $OCEAN were locked with GSR)
· Support exchanges in the swap from $OCEAN to $FET
· Join the to-be established ASI Alliance entity (Superintelligence Alliance Ltd).

When the merger happened in July 2024, Fetch.ai injected 500m $FET each into the migration contracts for $AGIX and $OCEAN, leaving a shortfall of 110.8m $FET which Ocean assumed would be injected later when the migration contract ran low.
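As a rough sanity check on the obligations above, the figures are internally consistent (all numbers are taken from the text; this is arithmetic only, not any official accounting):

```python
# Sketch of the ASI swap arithmetic described above. All figures come from
# the text; the implied $OCEAN supply is derived, not officially stated.
SWAP_RATE = 0.433226        # $FET received per $OCEAN
FET_MINTED = 610_800_000    # total $FET minted to cover all $OCEAN holders
FET_INJECTED = 500_000_000  # $FET actually injected into the migration contract

# Implied $OCEAN supply covered by the full mint (~1.41 billion $OCEAN)
ocean_supply = FET_MINTED / SWAP_RATE

# The gap Ocean assumed would be topped up later
shortfall = FET_MINTED - FET_INJECTED

print(f"Implied $OCEAN supply covered: {ocean_supply:,.0f}")
print(f"Shortfall awaiting a later top-up: {shortfall:,} FET")
```

Dividing the full mint by the swap rate recovers roughly 1.41 billion $OCEAN, matching the token's known max supply, and the injection gap works out to exactly the 110.8m $FET shortfall mentioned above.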

With the merger completed, Ocean set about to focus on product development and technology, eschewing many of the outrageous marketing and administrative initiatives proposed by Fetch and SingularityNet.

July 17, 2024 — Pon to Lake and Levy

This singular product focus continued until Ocean’s eventual departure from the ASI Alliance in October 2025.

August 2024 — Cudos Admittance into ASI Alliance

In August 2024, Fetch had prepared a dossier to admit Cudos into the ASI Alliance. The dossier was relatively sparse and missed many pertinent technical details. Trent had many questions about Cudos’ level of decentralization, supposedly one of the key objectives of the ASI Alliance, and whether Cudos’ service was both a cultural and technical fit within the Alliance. During the two-hour Board meeting, things got heated when Sheikh made clear that he regarded decentralization as some “rubbish, unknown concept”.

The vote on Cudos proceeded. I voted for Cudos to try to maintain good relations with the others, while Trent rightfully voiced his dissatisfaction with the compromise on decentralization principles. The resolution passed 5 of 6 when the Fetch and SingularityNET directors all voted “Yes” for the entry of Cudos.

The Cudos community “vote” proceeded. Even before the results had been publicly announced on 27 Sep 2024, Fetch.ai had minted the Cudos token allotment, and then sent the $FET to the migration contract to swap out $CUDOS token holders.

December 2024 — SingularityNET’s Spending, Declining $FET Token Price and the Ocean community treasury

By December 2024, many in the ASI and Ocean communities had identified large flows of $AGIX and $FET tokens from the SingularityNet treasury wallets. At the start of the ASI Alliance, Ocean had ignored the red flags from SingularityNet about undisciplined spending that was untethered to reality.

Dr. Goertzel was hellbent on competing with the big boys of AI, who were deploying $1 trillion in CapEx; Dr. Goertzel apparently thought that a $100m buy of GPUs could make a difference. As part of this desire to “keep up” with OpenAI, X and others, SingularityNet had a headcount of over 300 people. Their fixed burn rate of $6 million per month exceeded the annual burn rate of the Fetch and Ocean teams combined. This was, in Ocean’s view, unsustainable.

The results were clear as day in the $FET token price chart. From a peak of $3.22/$FET when the ASI Alliance was announced, the token price had dropped to $1.25 by end December 2024. Ocean had not sold, or caused to be sold, any $FET tokens.

Based on independent research, it appears that Fetch.ai also sold or distributed tokens to the tune of 390 million $FET worth $314 million from March 2024 until October 2025:

Further research shows a strong correlation between Fetch liquidations and injections into companies controlled by Sheikh in the UK.

All excess liquidity and buy-demand for $FET was sucked out through SingularityNet’s $6 million per month (or more) burn rate and Fetch’s liquidations with a large portion likely going into Sheikh controlled companies. As a result, the entire ASI community suffered, as $FET underperformed virtually every other AI-crypto token, save one. $PAAL had the unfortunate luck to get tangled up with the ASI Alliance, and through the failed token merger attempt, lost their community’s trust and support, earning the unenviable honour of the worst performing AI-crypto token this past year.

SingularityNet was harming all of ASI due to their out-of-control spending and Fetch’s massive sell-downs compounded the strong negative price pressure.

As painful as it was, Ocean held back from castigating SingularityNet, as one of the core principles of crypto is that a wallet holder fully controls their assets. Ocean kept to that principle, believing that it would likewise apply to any assets controlled by Ocean or oceanDAO. We kept our heads down and maintained strict fiscal discipline.

For the record, from March 2024 until July 2025, a period of 16 months, neither Ocean nor oceanDAO liquidated ANY $FET or $OCEAN into the market, other than for the issuance of community grants, operational obligations and market making to ensure liquid and functioning markets. Ocean had lived through too many bear markets to be undisciplined in spending. Ocean kept budgets tight, assessed every expense regularly and gave respect to the liquidity pools generated by organic demand from token holders and traders.

Contrast this financial discipline with the records which now seem to be coming out. Between SingularityNet and Fetch, approximately $500 million was sent to exchange accounts on the Ethereum, BSC and Cardano blockchains, with huge amounts apparently being liquidated for injection into Sheikh’s personal companies or being sent for custody as part of the TRNR deal (see below). This was money coming from the pockets of all the ASI token holders.

January 2025 — oceanDAO Shifts from a Passive to an Active Token Holder

In January 2025, questions arose within the oceanDAO as to whether it would be prudent to explore options to preserve the Ocean community treasury’s value. In light of a $FET price that was clearly declining faster than other AI-crypto tokens, something had to be done.

Since 2021, when the custodianship of oceanDAO had been formally and legally transferred from the Ocean Protocol Foundation, the oceanDAO had held all assets passively. In June 2023, the oceanDAO minted the remaining 51% of the $OCEAN supply and kept them fully under control of a multisig without any activity until July 2025, to minimize any potential tax liabilities on the collective. I was one of seven keyholders.

To put to bed any false allegations, the $OCEAN held by oceanDAO are for the sole benefit of the Ocean community and no one else. It doesn’t matter if Sheikh makes claims based on an alternative reality hundreds of times or that these claims are repeated by his sycophants — the truth is that the $OCEAN / $FET owned by oceanDAO is for the benefit of the Ocean community.

May 2025 — oceanDAO Establishes in Cayman

The realization that SingularityNet (and, as it now turns out, Fetch) was draining liquidity and creating a consistent negative price impact on the community spurred the oceanDAO to investigate what could be done to diversify the Ocean community treasury out of the passively held $OCEAN which was pegged to $FET.

The oceanDAO collective realized it had to actively manage the Ocean community treasury to protect Ocean community interests, especially as the DeFi landscape had matured significantly over the years and now offered attractive yields. Lawyers, accountants and auditors were engaged to survey suitable jurisdictions for this purpose — Singapore, Dubai, Switzerland, offshore Islands. In the end, the oceanDAO decided on Cayman.

Cayman offered several unique advantages for DAOs. Cayman law permits the creation of entities that avoid giving founders, or those close to the project, any legal claim on community assets, ensuring that the entity’s work would be deployed solely for the Ocean community. One quarter of all DAOs, including SingularityNet, choose Cayman as their place of establishment.

By June 2025, a Cayman trust was established on behalf of the oceanDAO collective for the benefit of the Ocean community. This new entity became known as Ocean Expeditions (OE). oceanDAO transferred its assets to the OE entity and the passively held $OCEAN were converted to $FET. OE could now execute an active management of the treasury. As it happened, Fetch.ai had in fact gotten what it wanted, namely, for oceanDAO to convert its entire treasury of 661 million $OCEAN into $FET tokens.

Contrary to what Sheikh has been insinuating, Ocean does not control OE. Whilst I am the sole director of OE, I remain only one of several keyholders, all of whom entered into a legally binding instrument to act for the collective benefit of the Ocean community.

June 2025 — Fetch’s TRNR “ISI” Deal

Unbeknownst to Ocean or oceanDAO, in parallel, Fetch.ai UK had been working on an ETF deal with Interactive Strength Inc (ISI), aka the “TRNR Deal”.

Neither Ocean nor oceanDAO (or subsequently OE) had any prior knowledge, involvement or awareness of this. In fact, “Ocean” is not mentioned even once in the SEC filings. Consistent with the original understanding that each Foundation had sole control of their treasuries, Ocean was not consulted by Fetch.

I don’t have the full details and I encourage the ASI community to inquire further but the mid-June TRNR deal seems to have committed Fetch to supply $50 million in a cash loan for DWF and ISI, and $100 million in tokens (125m $FET) for a backstop to be custodied with BitGo.

SingularityNet told Ocean that they were strong-armed by Fetch.ai to put in $15 million in cash for this deal, but were not named in any of the filings. The strike price for the deal was around $0.80 per $FET and the backstop would kick in if $FET dropped to $0.45, essentially a bet that $FET would never fall by 45%.

However, this ignored the fact that crypto can fall 90% in bear markets or flash crashes. The TRNR deal not only put Fetch.ai’s assets at risk if the collateral was called, the 125m $FET would be liquidated as well, causing significant harm to the entire ASI community.

Well, four months later, that is exactly what happened. On the night of Oct 10, 2025, Trump announced tariffs on China, sending the crypto market into chaos. Many tokens saw a temporary drawdown of 95% before recovering to roughly two-thirds of their valuation from the day before. One week later, on Oct 17, further crypto-market aftershocks occurred with another round of smaller liquidations.

Again, I don’t have all the details, but it appears that large portions of the $FET custodied with BitGo were liquidated causing a drop in $FET price from $0.40 down to $0.32.

Oct 12, 2025 — Artificial Superintelligence Alliance Telegram Chat

The ASI and Fetch communities should be asking Fetch.ai some hard questions, such as: why would Fetch.ai sign such a reckless and disastrous deal? They should ask for full transparency on the TRNR deal, with clear numbers on the amounts loaned, the $FET used as collateral, and the risk assessment of the negative price impact on $FET if the collateral were called and forcibly liquidated.

June 2025 — oceanDAO becomes Ocean Expeditions

Two weeks after the TRNR deal was announced, OE received its business incorporation papers in Cayman and assets from oceanDAO could be immediately transferred over to the OE entity.

The timing of OE’s incorporation was totally unrelated to Fetch’s TRNR deal and had in fact been in the works long before the TRNR deal was announced. OE’s strategy to actively manage the Ocean community treasury was developed completely independently of Fetch’s TRNR deal; remember, Ocean was never informed of anything except for a heads-up on the main press release a few days before publication.

OE had few options with the $OCEAN it held because (contrary to recent assertions) Fetch.ai had mandated a one-way transfer from $OCEAN to $FET in the June 2024 deal for Ocean to re-engage with the ASI Alliance. By this time, most exchanges had de-listed $OCEAN, which closed off virtually all liquidity avenues. As a result, $OCEAN lost 90% of its liquidity and exchange pairs.

OE had only one way out and that was to convert $OCEAN to $FET. This was consistent with the ASI deal. It was Fetch.ai that wanted Ocean to compel oceanDAO to convert $OCEAN to $FET as part of the ASI deal.

On July 1 2025, all 661m $OCEAN held by OE in the Ocean community wallet were converted.

Completely unbeknownst to Ocean and to OE, Sheikh viewed OE’s treasury activities not as support for his $FET token but as sabotage of his TRNR plans.

But recall, OE had no idea about the details of the deal. Neither OE, nor Ocean, was a party to the deal in any way. I found out like everyone else via a press release on June 11 that the deal had closed and I promptly ignored it to focus on Ocean’s strategy, products and technology.

Sticking to the principle that each Foundation works in its own manner for the benefit of the ASI community, Ocean didn’t feel the need to demand any restrictions on Fetch.ai nor to delve into any documents. Personally, I didn’t even read the SEC filings until September, in the course of the ongoing legal proceedings to understand the allegations being made against Ocean. The TRNR deal was solely a Fetch.ai matter.

June 2025 — ASI Alliance Financials

As an aside, I had been driving the effort to keep the books of the ASI Alliance properly up-to-date.

Sheikh was insistent that Fetch be reimbursed by the other Members for its financial outlays, assuming that other ASI members had spent less than Fetch.ai. When Sheikh found out that it was actually Ocean who had contributed the most money to commonly agreed expenditures, even though Ocean was the smallest member, and SingularityNet and Fetch.ai would owe Ocean money, the complaint was dropped.

Instead, Sheikh tried another tactic to offload expenses.

SingularityNet and Ocean had signed off on the 2024 financial statements for the ASI Alliance. However, the financials were delayed by Fetch.ai. Sheikh wanted to load up the balance sheet of the ASI Alliance with debt obligations based on the spendings of the member Foundations.

June 20, 2025 — Pon to Sheikh

Fetch’s insistence was against the agreement made at the founding of ASI, that each Member would spend and direct their efforts on ASI initiatives of their own choosing and volition, and the books of ASI Alliance would be kept clean and simple. This was especially prudent as the ASI Alliance had no income or assets.

After a 6-week delay and back and forth discussions, in mid-August we finally got Fetch.ai’s agreement to move forward by deferring the conversation on cost sharing to the following year.

This incident stuck in my mind as an enormous red flag, as these types of accounting practices hinted at the type of tactics that Sheikh may consider as a normal way of doing business. Ocean strongly disagrees and does not find such methods to be prudent.

July 2025 — Ocean Expeditions Sets Out to Diversify the Ocean community Treasury

On July 3, Ocean Expeditions (OE) sent 34 million $FET to a reputable market maker for mid-dated options with strikes set at $0.75–$0.95, so OE could earn premiums while allowing for the liquidation of $FET if the price was higher at option expiry.

This sort of option strategy is a standard approach to treasury management that is ethical, responsible and benefits token holders by maintaining relative price stability. The options only execute and trigger a sale if, upon maturity, the $FET price is higher than the strike price. If at maturity the $FET price is lower than the strike price, the options expire unexercised while still allowing OE to earn premiums, benefiting the Ocean community.
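The mechanics described above can be sketched as a simple covered-call payoff: the tokens are sold at the strike only if the spot price at expiry exceeds it, and the premium is earned either way. This is a minimal, illustrative sketch; the premium value is an assumption, not a figure from the text:

```python
# Hedged sketch of the covered-call mechanics described above (per token,
# from the option writer's side). Not OE's actual contract terms.
def covered_call_outcome(spot_at_expiry: float, strike: float, premium: float) -> dict:
    """Outcome for the option writer at expiry; all figures illustrative."""
    exercised = spot_at_expiry > strike
    return {
        "exercised": exercised,
        # Tokens are sold at the strike price only if the option is exercised.
        "sale_proceeds": strike if exercised else 0.0,
        # The premium is kept whether or not the option is exercised.
        "premium_earned": premium,
    }

# Strikes from the text ($0.75–$0.95); the $0.03 premium is a made-up example.
print(covered_call_outcome(spot_at_expiry=0.85, strike=0.75, premium=0.03))
print(covered_call_outcome(spot_at_expiry=0.60, strike=0.75, premium=0.03))
```

The first call models a liquidation at the strike in a rising market; the second models the option expiring unexercised while the premium still accrues to the treasury.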

Insinuations that these transactions were a form of “token dumping” are nonsensical and misinformed. OE was simply managing the community treasury.

On July 14, a further 56 million $FET was sent out as part of the same treasury strategy with strikes set at $0.70-$1.05.

These option transactions did lead to a responsible liquidation of $18.2 million worth of $FET on July 21, one that accorded with market demand and did not depress the $FET price. Further, this was 6 weeks after the TRNR deal was announced. From July 21 until Ocean’s exit from the ASI Alliance on Oct 9, 2025, there were no further liquidations of $FET save for one small tranche that raised $2.2m.

In total, Ocean Expeditions raised $22.4 million for the Ocean community, a significantly smaller sum compared to the estimated $500 million of liquidations by the other ASI members.

August 2025 — Ocean Requests a Refill of the $OCEAN/$FET Token Migration Contract

Around this time, Ocean realized that the $OCEAN/$FET token migration contract was running perilously low. The migration contract was supposed to cover over 270 million $OCEAN to be converted by 37,500 token holders, but only 7 million $FET were left in the migration contract.
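The scale of the shortfall follows directly from the swap rate agreed in the ASI deal (figures from the text; arithmetic only):

```python
# How perilously low the migration contract was: ~270m $OCEAN still to
# convert at the agreed rate, versus 7m $FET remaining in the contract.
SWAP_RATE = 0.433226            # $FET per $OCEAN, per the ASI deal
ocean_outstanding = 270_000_000 # $OCEAN still held by ~37,500 holders
fet_remaining = 7_000_000       # $FET left in the migration contract

fet_required = ocean_outstanding * SWAP_RATE  # ~117m $FET needed
coverage = fet_remaining / fet_required       # fraction actually covered

print(f"$FET required: {fet_required:,.0f}")
print(f"Coverage: {coverage:.1%}")
```

At the agreed rate, roughly 117m $FET would be needed to cover the remaining holders, so the contract held only about 6% of what was required, hence the repeated top-up requests.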

On July 22, Ocean requested that Fetch top up the migration contract with 50m $FET, but received no response. Another email was sent to Sheikh on July 29; he responded, “will work on it.” Sheikh asked for a call on Aug 1, where he agreed to top up the migration contract with the remaining tokens. On Aug 5, I wrote an email to Fetch and Sheikh with a formal request for a top-up, while confirming that all wallets were secured for the Ocean community.

I sent a final note to Sheikh on August 12, requesting information on why the promised top-up had not yet occurred.

August 2025 — A Conspiracy To Force Ocean to Submit

Starting August 12, Fetch.ai and SingularityNet actively conspired against Ocean. Without allowing Ocean’s directors to vote on the matter (on the grounds that Ocean’s directors were purportedly “conflicted”), Fetch’s and SingularityNet’s directors on the ASI Alliance unilaterally attempted to pass a resolution to close the $OCEAN-$FET token bridge. This action clearly violated the ASI Constitution, which mandated unanimous agreement by all directors for any ASI Alliance action.

On August 13, Mario Casiraghi, SingularityNET’s CFO, issued the following email:

The next day on August 14, I received this message from Lake:

(In this note, Lake acknowledged that Sheikh’s original plans to dump the ASI Alliance were still in place, albeit potentially at an accelerated pace).

Ocean objected forcefully, citing the need to protect the ASI and Ocean communities, and pleading to keep the matter private and contained.

August 19, 2025 — Pon, Dr. Goertzel, Mario Casiraghi

At this point, I highlighted the obvious hypocrisy of SingularityNet and Fetch.

SingularityNet and Fetch had moved $500 million worth of $FET, sucking out excess liquidity from all token holders. All the while, Ocean held its tongue and maintained fiscal discipline.

Yet, the very first time that oceanDAO/Ocean Expeditions actually liquidated any $FET tokens, Ocean was accused of malicious intent and of exercising control over oceanDAO/OE, and was called to task. Fetch had accused the wrong entity, Ocean, for the actions of a wholly separate 3rd party, and jumped to completely false conclusions about the motives.

The improper ASI Alliance Directors’ actions violated the core principle of the ASI Alliance that crypto-property was to be solely directed by each Foundation. Additional clauses with demands for transparency, something neither Fetch.ai nor SingularityNet had ever offered or provided themselves, were included to further try to hamper and limit Ocean Protocol Foundation.

The only authority of the ASI Alliance and the Board, as defined in the ASI Constitution, was to vote on accepting new members and then minting the appropriate tokens for a swap-out. There was no authority, power or mandate to sanction any Member Foundation.

Any and all other actions needed a unanimous decision from the Board and Member Foundations. This illegal action was exactly what Ocean was so concerned about in the May 2024 “re-joining” discussions — the potential for the bullying and harassment of Ocean as the weakest and smallest member of the ASI Alliance.

Finally, seeing the clear intent to close the token bridge and the active measures to harm 37,600 innocent $OCEAN token holders, Ocean needed to act.

Ocean immediately initiated legal action to protect Ocean and ASI users on August 15, 2025. This remains ongoing.

Within hours of Ocean’s filing, Fetch responded with a lengthy countersuit against Ocean accompanied with witness statements and voluminous exhibits. This showed that Fetch had for weeks been planning on commencing a lawsuit against Ocean and instructing lawyers behind the scenes. On August 19, Ocean also received a DocuSign from SingularityNet’s lawyer. This contained the resolution which Fetch and Singularity attempted to pass without the Ocean-appointed directors, i.e. myself and Trent.

On August 22, by consent, parties agreed to an order to maintain confidentiality during the legal process, and out of respect for the process, Ocean refrained from communicating with any 3rd parties, including OE who was not a party to the dispute or the proceedings. It is also the reason why Ocean has, until now, refrained from litigating this dispute in public.

October 2025 — Ocean Exits the ASI Alliance

As the legal proceedings carried on, and evidence was provided from August until late-September, it was clear that Ocean could no longer be a part of the ASI Alliance.

The only question was when to exit.

Ocean was confident that the evidence and facts presented to the adjudicator would prove its case and vindicate it, so Ocean wanted the adjudicator to forcefully make an assessment.

Once the adjudicator issued his findings (which Ocean has proposed to waive confidentiality over and release to the community so the community can see the truth for themselves, but which Fetch has refused to agree to), Ocean decided that it was time to leave the ASI Alliance.

The 18-month ordeal was too much to bear.

From the violation of the original agreements on the principles of decentralization, to the encroachment on both Ocean and Ocean Expedition treasuries, while watching SingularityNet and Fetch disregard and pilfer the community for their own priorities, Ocean knew that it needed out.

Ocean couldn’t save ASI, but could try to salvage something for the Ocean community.

SingularityNet and Fetch used their treasuries recklessly as they saw fit, without regard or consideration of the impacts to the ASI community.

From Fetch’s over-reaction the first time Ocean wanted to bow out amicably, Ocean knew that additional legal challenges and attempts to block Ocean from leaving could be expected.

Ocean has only tried to build decentralized AI products, exert strict fiscal discipline, collaborate in good faith and protect the ASI and Ocean communities as best as we can.

As of Oct. 9, Ocean Expeditions retained the vast majority of the $FET that were converted from $OCEAN. All tokens held by Ocean Expeditions are its property, and will be used solely for the benefit of the Ocean community. They are not controlled by Ocean, or by me.

Summary

$FET dropped from a peak of $3.22 at the time of the ASI Alliance announcement to $0.235 today, a -93% drop. Fetch and SingularityNet have tried to convince the community that this was all a result of Ocean leaving the ASI Alliance, but that is untrue.

Ocean announced its withdrawal from the ASI Alliance on Oct 9 in a fully amicable manner, without pointing fingers, to minimize any potential fallout. Even 8 hours after Ocean’s announcement, the price of $FET had only fallen marginally, from $0.55 to $0.53. In other words, Sheikh is blaming Ocean for a problem that has little to do with anything Ocean has done.

Price Chart “1h-Candles” on $FET at the time of the Ocean withdrawal

The Oct 10/11 crypto flash crash, triggered by Trump’s China tariff announcement, took the entire market down, and $FET fell to $0.11 before recovering to $0.40.

On the evening of Oct 12, a further decline in $FET came when the TRNR collateral was called and began to be liquidated. This event brought $FET down to $0.32. This was the ill-conceived deal entered into by Fetch.ai, which apparently ignored the extreme volatility of crypto markets and caused unnecessary damage to the entire ASI community.

Meanwhile, in the general crypto market, a second aftershock of liquidations happened around Oct 17.

Compounding all of this were Fetch’s and Sheikh’s attempts to denigrate Ocean, which damaged their own $FET token as the allegations became more and more ludicrous and the narrative attacks started to contradict themselves.

In short, the -93% drop in $FET from 27 March 2024 until 19 October 2025 was due to:

· broader market sentiment and volatility;
· SingularityNet’s and Fetch’s draining of liquidity from the entire community by dumping upwards of $500 million worth of $FET tokens;
· a reckless TRNR deal that failed to anticipate crypto dropping more than 45%, wiping out $150 million in cash and tokens; and
· Fetch.ai’s FUDing of its own project, bringing disrepute on itself when Ocean decided that it could not in good conscience remain a part of the ASI Alliance.

X Post: @cryptorinweb3 — https://x.com/cryptorinweb3/status/1980644944256930202

I’m not going to say whose fault I think the drop in $FET’s price is, but I can with very high confidence say it has next to nothing to do with Ocean leaving the ASI Alliance.

I hope that the Fetch and SingularityNet communities ask for full financial transparency on the spending of the respective Fetch.ai and SingularityNet companies and Foundations.

I would also like to sincerely thank the Ocean community, trustees and the public for their patience and support during Ocean’s radio silence in respect of the legal processes.

The ASI Alliance from Ocean’s Perspective was originally published in Ocean Protocol on Medium.


Tokeny Solutions

Apex Group’s Tokeny & AMINA Bank combine tokenisation innovation with regulated banking

The post Apex Group’s Tokeny & AMINA Bank combine tokenisation innovation with regulated banking appeared first on Tokeny.

Luxembourg & Zug – 23 October 2025 – AMINA Bank AG (“AMINA”), a Swiss Financial Market Supervisory Authority (FINMA)-regulated crypto bank with global reach, has entered into a collaboration agreement with Tokeny (an Apex Group company), the leading onchain finance operating system, to create a regulated banking bridge for institutional tokenisation. This strategic collaboration addresses critical institutional bottlenecks by applying Swiss banking standards to blockchain innovation.

Through this agreement, AMINA Bank will deliver regulated banking and custody for underlying assets such as government bonds, corporate securities, treasury bills, and other traditional financial instruments, while Tokeny provides the tokenisation platform. AMINA’s extensive crypto and stablecoin offering also enables clients to seamlessly move on and off chain.

Market demand for tokenisation is coming from the open blockchain ecosystems, and institutions need a compliant and scalable way to meet it. By integrating AMINA Bank's regulated banking and custody framework with Tokeny's orchestrated tokenisation infrastructure, we provide financial institutions with a fast, seamless, and secure path to market. — Luc Falempin, CEO of Tokeny and Head of Product for Apex Digital

The tokenised assets market is experiencing explosive growth, with major institutions, including JP Morgan and BlackRock, leading adoption of blockchain-based financial products. This momentum is supported by accelerating regulatory clarity across the globe, from the US GENIUS Act to Hong Kong’s ASPIRe framework.

The collaboration leverages AMINA’s regulated banking infrastructure alongside Tokeny’s proven tokenisation expertise. AMINA provides Swiss banking-standard custody and compliance, while Tokeny contributes first-mover tokenisation technology and an enterprise-grade platform that has powered over 120 use cases and billions of dollars in assets. It has recently been acquired by Apex Group, a global financial services provider with $3.5 trillion in assets under administration.

“In the past year, there’s been increased demand from our institutional clients for compliant access to tokenised assets on public blockchains. Tokenised entities still face critical challenges such as setting up banking and custody solutions. There’s a lack of orchestrated infrastructure that connects with legacy systems. My priority is delivering this innovation through the safest, most regulated pathway possible, and we’re excited to partner with Tokeny to make this happen.” – Myles Harrison, Chief Product Officer at AMINA Bank

The combined solution offers financial institutions end-to-end tokenisation capability with fast time-to-market measured in weeks. Starting with traditional financial instruments where institutional demand is focused, the collaboration agreement establishes the regulated infrastructure foundation for future expansion into asset classes where tokenisation can deliver greater utility.

Tokeny’s platform leverages the ERC-3643 standard for compliant tokenisation. The standard is built on top of ERC-20 with a compliance layer to ensure interoperability with the broader DeFi ecosystem. This ensures that, even within an open blockchain ecosystem, only authorised investors can hold and transfer tokenised assets, while maintaining issuer control and automated regulatory compliance.
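As a rough illustration of the compliance layer described above, the sketch below models an ERC-20-style balance ledger whose transfers are gated by an identity-registry lookup. This is a hypothetical Python simulation only: real ERC-3643 deployments are Solidity contracts with richer interfaces (identity registry, compliance modules, issuer controls), and the class and method names here are illustrative, not taken from the standard.

```python
# Hypothetical model of an ERC-3643-style permissioned token.
# Names (IdentityRegistry, PermissionedToken) are illustrative only.

class IdentityRegistry:
    """Tracks which wallet addresses belong to verified investors."""
    def __init__(self) -> None:
        self._verified: set[str] = set()

    def register(self, address: str) -> None:
        self._verified.add(address)

    def is_verified(self, address: str) -> bool:
        return address in self._verified


class PermissionedToken:
    """ERC-20-style balances plus a compliance check on every movement."""
    def __init__(self, registry: IdentityRegistry) -> None:
        self.registry = registry
        self.balances: dict[str, int] = {}

    def mint(self, to: str, amount: int) -> None:
        if not self.registry.is_verified(to):
            raise PermissionError(f"{to} is not a verified investor")
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender: str, to: str, amount: int) -> None:
        # Compliance layer: the recipient must be an authorised investor,
        # otherwise the transfer reverts (as it would on-chain).
        if not self.registry.is_verified(to):
            raise PermissionError(f"{to} is not a verified investor")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
```

The key design point is that the compliance check sits inside the token itself, so tokens cannot leak to unverified wallets even when the ledger is public.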

“The future of finance is open, and institutions now have the tools to take full advantage, without compromising on compliance, security, or operational efficiency,” added Falempin.

About Tokeny

Tokeny is a leading onchain finance platform and part of Apex Group, a global financial services provider with $3.5 trillion in assets under administration and over 13,000 professionals across 52 countries. With seven years of proven experience, Tokeny provides financial institutions with the technical tools to represent assets on the blockchain securely and compliantly without facing complex technical hurdles. Institutions can issue, manage, and distribute securities fully onchain, benefiting from faster transfers, lower costs, and broader distribution. Investors enjoy instant settlement, peer-to-peer transferability, and access to a growing ecosystem of tokenized assets and DeFi services. From opening new distribution channels to reducing operational friction, Tokeny enables institutions to modernize how assets move and go to market faster, without needing to be blockchain experts.

Website | LinkedIn | X/Twitter

About AMINA – Crypto. Banking. Simplified.

Founded in April 2018 and established in Zug (Switzerland), AMINA Bank AG is a pioneer in the crypto banking industry. In August 2019, AMINA Bank AG received the Swiss Banking and Securities Dealer License from the Swiss Financial Market Supervisory Authority (“FINMA”). In February 2022, AMINA Bank AG, Abu Dhabi Global Markets (“ADGM”) Branch received Financial Services Permission from the Financial Services Regulatory Authority (“FSRA”) of ADGM. In November 2023, AMINA (Hong Kong) Limited received its Type 1, Type 4 and Type 9 licenses from the Securities and Futures Commission (“SFC”).

To learn more about AMINA, visit aminagroup.com



Thales Group

Thales reports its order intake and sales as of September 30, 2025

23 Oct 2025 | Group | Investor relations

Order intake: €16.8 billion, up +8% (+9% on an organic basis(1))
Sales: €15.3 billion, up +8.4% (+9.1% on an organic basis)
Confirmation of all 2025 financial targets(2):
- Book-to-bill ratio above 1
- Organic sales growth between +6% and +7%(3)
- Adjusted EBIT margin: 12.2% to 12.4%

Thales (Euronext Paris: HO) today announced its order intake and sales for the period ending September 30, 2025.

Order intake

In € millions | 9m 2025 | 9m 2024 | Total change | Organic change
Aerospace | 3,919 | 3,639 | +8% | +7%
Defence | 9,943 | 8,951 | +11% | +12%
Cyber & Digital | 2,827 | 2,905 | -3% | -0%
Total – operating segments | 16,689 | 15,494 | +8% | +8%
Other | 73 | 56 | |
Total | 16,762 | 15,551 | +8% | +9%
Of which mature markets(4) | 12,342 | 11,413 | +8% | +9%
Of which emerging markets(4) | 4,419 | 4,137 | +7% | +8%

Sales

In € millions | 9m 2025 | 9m 2024 | Total change | Organic change
Aerospace | 4,108 | 3,839 | +7.0% | +6.9%
Defence | 8,243 | 7,239 | +13.9% | +14.0%
Cyber & Digital | 2,803 | 2,914 | -3.8% | -1.3%
Of which Cyber | 1,059 | 1,140 | -7.1% | -4.8%
Of which Digital | 1,744 | 1,774 | -1.7% | +1.0%
Total – operating segments | 15,154 | 13,993 | +8.3% | +8.9%
Other | 101 | 76 | |
Total | 15,256 | 14,069 | +8.4% | +9.1%
Of which mature markets(4) | 12,053 | 11,220 | +7.4% | +7.7%
Of which emerging markets(4) | 3,203 | 2,849 | +12.4% | +14.5%

“In the third quarter 2025, Thales delivered sustained organic growth in both order intake and sales, further confirming the Group's strong momentum since the beginning of the year. ​
​In this supportive environment, Thales confirms all its financial targets for 2025. I welcome the constant commitment of our teams to pursue this sustainable growth trajectory.”
​Patrice Caine, Chairman & Chief Executive Officer
Order intake

Over the first nine months of 2025, order intake amounted to €16,762 million, up +9% organically compared with the first nine months of 2024 (up +8% on a reported basis). The Group continues to benefit from strong commercial momentum in most of its activities, particularly in the Aerospace and Defence segments. ​

Over this period, Thales recorded 14 large orders with a unit value of more than €100 million, for a total amount of €5,331 million:

5 large orders recorded in Q1 2025:
- Contract signed with Space Norway, a Norwegian satellite operator, for the supply of the THOR 8 telecommunications satellite;
- Order by SKY Perfect JSAT to Thales Alenia Space of JSAT-32, a geostationary telecommunications satellite;
- Signing of a contract between Thales and the European Space Agency (ESA) to develop Argonaut, a future autonomous and versatile lunar lander designed to deliver cargo and scientific instruments to the Moon;
- Order from the Dutch Ministry of Defence for the modernization and support of vehicle tactical simulators;
- Order from the French Defence Procurement Agency (DGA) for the development, production, and maintenance of vetronics equipment for various Army vehicles as part of the SCORPION programme.

5 large orders recorded in Q2 2025:
- Contract related to the supply of 26 Rafale Marine aircraft to India to equip the Indian Navy;
- As part of the SDMM (Strategic Domestic Munition Manufacturing) contract signed in 2020 for the supply of ammunition to the Australian armed forces, entry into force of years 6 to 8. The continuation of the SDMM contract covers the design, development, manufacture and maintenance of a variety of ammunition;
- Contract for the delivery to Ukraine of 70 mm ammunition and the transfer of the final assembly line of certain components of this ammunition from Belgium to Ukraine;
- Order for the production and supply of AWWS (Above-Water Warfare System) combat systems intended to equip frigates in Europe;
- Order by Sweden of compact multi-mission medium-range Ground Master 200 radars.
4 large orders recorded in Q3 2025:
- Signing of the Initial Phase Contract between Thales Alenia Space and the SpaceRISE consortium of satellite operators to engineer the system and secured payload solutions for the future European constellation IRIS²;
- Order from the UK Ministry of Defence for the production and delivery of 5,000 LMM air defence missiles;
- Order from the German Ministry of Defence for the delivery to a third country of portable land surveillance radars;
- Order from a European country for the production and delivery of 70 mm ammunition.

At €11,431 million, order intake of a unit amount below €100 million was up +8% compared to the first nine months of 2024; meanwhile, those with a unit value of less than €10 million were slightly up at September 30, 2025.

Geographically(5), order intake in mature markets recorded organic growth of +9%, at €12,342 million, driven notably by solid momentum in Europe (up organically by +13%). Order intake in emerging markets amounted to €4,419 million and showed an organic increase of +8% at 30 September 2025, notably benefiting from the strong dynamism in Asia (+39% organic growth).

Order intake in the Aerospace segment amounted to €3,919 million, up +7% over the first nine months of 2025. The Avionics market has enjoyed sustained commercial momentum in its various activities since the beginning of the year. The Space business, which recorded four orders with a unit value of more than €100 million in the first nine months of 2025, also saw its order intake increase over the period.

With an amount of €9,943 million compared to €8,951 million in the first nine months of 2024, order intake in the Defence segment recorded a strong organic increase of +12%. This growth reflects an excellent commercial dynamic, supported notably by the relevance of Thales’ portfolio of products and solutions in the current context. Nine orders with a unit amount exceeding €100 million have been recorded since the beginning of the year 2025. Among them, two orders in the field of air defence in the UK and Germany were recorded in the third quarter.

At €2,827 million, order intake in the Cyber & Digital segment was structurally very close to sales as most business lines in this segment operate on short sales cycles. The order book is therefore not significant.

Sales

Sales for the first nine months of 2025 amounted to €15,256 million, compared with €14,069 million in the same period of 2024, up +9.1%(6) at constant scope and exchange rates (+8.4% on a reported basis).

Geographically(7), sales recorded solid growth in mature markets (+7.7% in organic terms), notably in the United Kingdom (+12.3%). Emerging markets also recorded strong growth (+14.5% organically over the period), with double-digit organic growth in all regions.

In the Aerospace segment, sales reached €4,108 million, up +7.0% compared to the first nine months of 2024 (+6.9% at constant scope and exchange rates). This growth reflects the continued momentum in the Avionics market, with a solid performance in both civil and military domains. Sales in the Space business recorded growth in line with annual expectations over the first nine months of 2025.

Sales in the Defence segment reached €8,243 million, up +13.9% compared to the first nine months of 2024 (+14.0% at constant scope and exchange rates). This growth was driven by all activities in the Defence segment, which benefitted notably from production capacity expansion projects being deployed.

Cyber & Digital segment sales amounted to €2,803 million, down -3.8% compared to the first nine months of 2024 (-1.3% at constant scope and exchange rates), reflecting contrasted trends:

Cyber businesses reported a decrease over the first nine months of 2025 (-4.8% at constant scope and exchange rates):
- The Cyber Products business, down at September 30, 2025, has not yet returned to a normal level of activity after the disturbances recorded during the first half of the year. These disturbances, which still weighed on the third quarter, are linked to the merger of Imperva’s and Thales’ sales teams, a key integration step that will allow the business to benefit from its full potential;
- The Cyber Premium Services business also declined over the first nine months of 2025, affected by soft market demand, particularly in Australia. The ongoing execution of the strategy aimed at refocusing the business on selective profitable growth segments shows encouraging signs.

Digital activities recorded an increase of +1.0% at constant scope and exchange rates:
- Sales from Payment Services enjoyed strong growth in digital banking solutions but remained affected by still-low volumes on payment cards;
- Secure Connectivity solutions recorded sustained growth, driven by digital solutions (including eSIM as well as on-demand connectivity platforms).

Outlook

Thales, with its strong positioning in all of its major markets and the relevance of its products and solutions, benefits from a favorable medium and long-term outlook.

Assuming no new disruptions in the macroeconomic and geopolitical contexts, and no new tariff developments(8), Thales confirms all its targets for 2025:

- A book-to-bill ratio above 1;
- An expected organic sales growth between +6% and +7%, corresponding to a sales range of €21.8 to €22.0 billion(9);
- An Adjusted EBIT margin between 12.2% and 12.4%.

****

This press release contains certain forward-looking statements. Although Thales believes that its expectations are based on reasonable assumptions, actual results may differ significantly from the forward-looking statements due to various risks and uncertainties, as described in the Company's Universal Registration Document, which has been filed with the French financial markets authority (Autorité des marchés financiers – AMF).

UPCOMING EVENTS

Ex-interim dividend date: December 2, 2025
Interim dividend payment date: December 4, 2025
Full Year 2025 results: March 3, 2026 (before market)
Annual General Meeting: May 12, 2026

1 In this press release, “organic” means “at constant scope and exchange rates”.

2 Assuming no new disruptions of the macroeconomic and geopolitical context. Regarding tariffs, the Group’s guidance for the year 2025 is valid on the basis of 1) reciprocal tariffs of 15% from the EU, 10% from the UK and 25% from Mexico, 2) the maintenance of the EU-US tariff exemption on Aeronautics and 3) consequently, the absence of European retaliatory measures.

3 ​ Corresponding to €21.8 to €22.0 billion and based on end of September 2025 scope, average exchange rates as at 30 September 2025 and the assumption of an average EUR/USD exchange rate of 1.17 in Q4 2025.

4 Mature markets: Europe, North America, Australia, New Zealand; emerging markets: all other countries.

5 See table on page 7.

6 Considering a negative currency effect of -€164 million and a positive net scope effect of +€90 million.

7 See table on page 7.

8 Regarding tariffs, the Group’s guidance for the year 2025 is valid on the basis of 1) reciprocal tariffs of 15% from the EU, 10% from the UK and 25% from Mexico, 2) the maintenance of the EU-US tariff exemption on Aeronautics and 3) consequently, the absence of European retaliatory measures.

9 Based on end of September 2025, average exchange rates as at 30 September 2025 and the assumption of an average EUR/USD exchange rate of 1.17 in Q4 2025.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion.

The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

Related document: Thales – Q3 2025 – slideshow (PDF, 1.37 MB, 29 Oct 2025)

Airbus, Leonardo and Thales sign Memorandum of Understanding to create a leading European player in space

23 Oct 2025 | Civil Aviation | Defence | Space

- New European space player aims to unite and enhance capabilities by combining the three companies’ respective activities in satellite and space systems manufacturing and space services.
- Major milestone in strengthening the European space ecosystem, supporting strategic autonomy and competitiveness, to ensure Europe enhances its role as a key player in the global space market.
- New company could be operational in 2027, subject to regulatory approvals and satisfaction of other closing conditions.
- Project expected to generate significant synergies, foster innovation, and deliver added value to customers, shareholders and employees.

* * *

Airbus (stock exchange symbol: AIR), Leonardo (Borsa Italiana: LDO) and Thales (Euronext Paris: HO) have signed a Memorandum of Understanding (“MoU”) aimed at combining their respective space activities into a new company.

By joining forces, Airbus, Leonardo and Thales aim to strengthen Europe’s strategic autonomy in space, a major sector that underpins critical infrastructure and services related to telecommunications, global navigation, earth observation, science, exploration and national security. This new company also intends to serve as the trusted partner for developing and implementing national sovereign space programmes.

This new company will pool, build and develop a comprehensive portfolio of complementary technologies and end-to-end solutions, from space infrastructure to services (excluding space launchers). It will accelerate innovation in this strategic market, in order to create a unified, integrated and resilient European space player, with the critical mass to compete globally and grow on the export markets.

This new player will be able to foster innovation, combine and strengthen investments in future space products and services, building on the complementary assets and world-class expertise of all three companies. The combination is expected to generate a mid-triple-digit million euro amount of total annual synergies on operating income five years after closing. Associated costs to generate those synergies are expected to be in line with industry benchmarks.

The project is expected to unlock incremental revenues, leveraging an expanded portfolio of end-to-end products and services leading to a more competitive offering, and greater global commercial reach. The combined capabilities also pave the way for even more innovative new programmes to enlarge the new company’s market positioning. Further operational synergies in, among others, engineering, manufacturing and project management, are anticipated to drive long-term efficiency and value creation. Upon conclusion of the transaction, this new company will encompass the following contributions:

- Airbus will contribute its Space Systems and Space Digital businesses, coming from Airbus Defence and Space.
- Leonardo will contribute its Space Division, including its shares in Telespazio and Thales Alenia Space.
- Thales will mainly contribute its shares in Thales Alenia Space, Telespazio, and Thales SESO.

The combined entity will employ around 25,000 people across Europe. With an annual turnover of about €6.5 billion (end of 2024, pro forma) and an order backlog representing more than three years of projected sales, this new company will form a robust and competitive entity worldwide.

Ownership of the new company will be shared among the parent companies, with Airbus, Leonardo and Thales owning respectively 35%, 32.5% and 32.5% stakes. It will operate under joint control, with a balanced governance structure among shareholders.

Accelerating European leadership in space and ensuring its strategic autonomy, the new company aims to:

- Foster innovation and technological progress by harnessing joint R&D capabilities to be at the cutting edge of space missions in all domains, including services, and enhance operational efficiency, benefiting from economies of scale and optimized production processes.
- Increase competitiveness facing global players, reaching critical mass and ensuring Europe secures its role as a major player in the international space market.
- Lead innovative programmes to address evolving customer and European sovereign needs, national sovereign and military programmes, by providing integrated solutions for infrastructure & services in all major space domains, driving cooperation across nations and having the capability to invest.
- Strengthen the European space ecosystem by bringing more stability and predictability to the industrial landscape, amplifying opportunities for the benefit of European suppliers of all sizes.
- Create new opportunities for employee development through broader technical capabilities and the extensive multinational footprint of the new company.

Joint Statement

Guillaume Faury, Chief Executive Officer of Airbus, Roberto Cingolani, Chief Executive Officer and General Manager of Leonardo and Patrice Caine, Chairman & Chief Executive Officer of Thales, declared:
​“This proposed new company marks a pivotal milestone for Europe’s space industry. It embodies our shared vision to build a stronger and more competitive European presence in an increasingly dynamic global space market. By pooling our talent, resources, expertise and R&D capabilities, we aim to generate growth, accelerate innovation and deliver greater value to our customers and stakeholders. This partnership aligns with the ambitions of European governments to strengthen their industrial and technological assets, ensuring Europe’s autonomy across the strategic space domain and its many applications. It offers employees the opportunity to be at the heart of this ambitious initiative, while benefiting from enhanced career prospects and the collective strength of the three industry leaders.”

Next steps

Employee representatives of Airbus, Leonardo and Thales will be informed and consulted on this project according to the laws of involved countries and the collective agreements applicable at each parent company.

Completion of the transaction is subject to customary conditions including regulatory clearances, with the new company expected to be operational in 2027.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion. The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About Airbus

Airbus pioneers sustainable aerospace for a safe and united world. The Company constantly innovates to provide efficient and technologically-advanced solutions in aerospace, defence, and connected services. In commercial aircraft, Airbus designs and manufactures modern and fuel-efficient airliners and associated services. Airbus is also a European leader in space systems, defence and security. In helicopters, Airbus provides efficient civil and military rotorcraft solutions and services worldwide.

About Leonardo

Leonardo is an international industrial group, among the main global companies in Aerospace, Defence, and Security (AD&S). With 60,000 employees worldwide, the company approaches global security through the Helicopters, Electronics, Aeronautics, Cyber & Security and Space sectors, and is a partner on the most important international programmes such as Eurofighter, JSF, NH-90, FREMM, GCAP, and Eurodrone. Leonardo has significant production capabilities in Italy, the UK, Poland, and the USA. Leonardo utilises its subsidiaries, joint ventures, and shareholdings, which include Leonardo DRS (71.6%), MBDA (25%), ATR (50%), Hensoldt (22.8%), Telespazio (67%), Thales Alenia Space (33%), and Avio (28.7%). Listed on the Milan Stock Exchange (LDO), in 2024 Leonardo recorded new orders for €20.9 billion, with an order book of €44.2 billion and consolidated revenues of €17.8 billion. Included in the MIB ESG index, the company has also been part of the Dow Jones Sustainability Indices (DJSI) since 2010.


FastID

Build for Scale: Fastly’s Principles of Distributed Decision Making and Self-healing Systems

Learn how Fastly's distributed decision-making and self-healing systems build a resilient, high-performance network. Discover key benefits and examples.

Wednesday, 22. October 2025

SC Media - Identity and Access

Over 180 million stolen credentials added to Have I Been Pwned

Stolen credentials are now part of the economic engine that drives the digital economy.


Is SaaS losing its shine? Rethinking IAM for security, flexibility, and control

Hybrid IAM gains traction as enterprises seek more control, security, and flexibility beyond SaaS.


Fake job offers leveraged in Facebook credential phishing campaign

HackRead reports that widely known brands, including KFC, Red Bull, and Ferrari, have been impersonated in fraudulent job postings aimed at compromising Facebook login details as part of a sweeping credential phishing campaign.


OAuth apps exploited for persistent compromise

Threat actors have been leveraging OAuth apps to ensure persistence within hacked environments, according to Cybernews.


Beyond the blind spot: Securing unmanaged devices in a hybrid world

This article explores the rising risks associated with unmanaged devices and the practical steps organizations can take to extend visibility, trust, and control beyond corporate perimeters.


Trinsic Podcast: Future of ID

Chris Goh – Scaling Mobile IDs in Australia with ISO mDocs

In this episode of The Future of Identity Podcast, I’m joined by Chris Goh, former National Harmonisation Lead for Australia’s mobile driver’s licenses (mDLs) and the architect behind Queensland’s digital driver’s license. Chris played a pivotal role in driving national alignment across states and territories, culminating in the 2024 agreement to adopt ISO mDoc/mDL standards for mobile driver’s licenses and photo IDs across Australia and New Zealand.

Our conversation dives into Australia’s path from early blockchain experiments to a unified, standards-based approach - one that balances innovation, security, and accessibility. Chris shares lessons from real-world deployments, cultural challenges like “flash passes,” and how both Australia and New Zealand are building digital ID ecosystems ready for global interoperability.

In this episode we explore:

- Why mDoc became the foundation: Offline + online verification, PKI-based trust, and modular architecture enabling scalable, interoperable credentials.
- From Hyperledger to harmony: Lessons from early decentralized trials and how certification and conformance reduce fragmentation.
- Balancing innovation and standardization: Why agility and stability must coexist to keep identity ecosystems moving forward.
- The cultural realities of adoption: How flash passes, retail constraints, and public education shaped Australia’s rollout strategy.
- The road ahead: How national trust lists, privacy “contracts,” and delegated authority could define the next phase of digital identity in the region.

This episode is essential listening for anyone building or implementing digital credentials, whether you’re a policymaker, issuer, verifier, or technology provider. Chris offers a clear, grounded perspective on what it really takes to move from pilots to national-scale digital identity infrastructure.

Enjoy the episode, and don’t forget to share it with others who are passionate about the future of identity!

Learn more about Valid8.

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.

Listen to the full episode on Apple Podcasts or Spotify, or find all ways to listen at trinsic.id/podcast.


Veracity trust Network

How do we deal with “synthetic users” accessing our data?


“If AI agents can now behave like humans well enough to pass CAPTCHA, we’re no longer dealing with bots; we’re dealing with synthetic users. That creates real risk.”

The above statement, by Marcom Strategic Advisor and Investor Katherine Kennedy-White, in response to a LinkedIn post by Veracity’s CEO Nigel Bridges, shows the real concern over how sophisticated AI agents and bots are becoming.

The post How do we deal with “synthetic users” accessing our data? appeared first on Veracity Trust Network.


Herond Browser

How to Turn Off Ads Blocker in 3 Simple Steps

Herond Browser is engineered with a smarter, more selective approach to ad filtering built in, helping you turn off your ad blocker. The post How to Turn Off Ads Blocker in 3 Simple Steps appeared first on Herond Blog.

You downloaded an ad blocker for a reason, but now that essential tool is causing friction, forcing you to figure out how to turn off your ads blocker just to bypass site paywalls, access crucial content, or stop mandatory video viewing. This guide offers the quick fix you need: a fast, universal, three-step method that works across all popular browsers and extensions. More importantly, while ad blockers are a great starting point, consider this: the Herond Browser is engineered with a smarter, more selective approach to ad filtering built in, so you might not even need a separate, disruptive extension at all.

Universal Guide: How do I turn off ads blocker

Step 1: How do I turn off ads blocker – Locate the Extension Icon

To begin, you need to find your ad blocker. This icon is almost always located in the top-right corner of your browser’s toolbar. It is typically represented by a shield, a hand, or a stop sign. If you’re using a popular blocker like AdBlock Plus or uBlock Origin, look for their distinct logos there. This is the central control point for quickly managing its features.

Step 2: How do I turn off ads blocker – Select the Disabled Option

The most effective quick fix is selecting “disable on this site”. This is the recommended setting, as it instantly turns the blocker off only for the specific website you are currently viewing, allowing you to access paywalled content or required videos immediately. The blocker remains active everywhere else, ensuring your general browsing stays ad-free.

Choosing “Pause Ads Blocker” offers a temporary fix by globally suspending the extension for a brief period, often 30 seconds or until you refresh the page. This option is best used when you are unsure if a site needs the blocker disabled, as it allows you to test content access without committing to a permanent site exclusion.

The option “Don’t run on pages in this domain” creates a permanent exception rule. Unlike the quick “disable on this site,” this actively adds the entire domain (e.g., herond.org) to your whitelist. This is useful for sites you frequently visit and trust, ensuring the blocker never activates on any page associated with that domain moving forward.
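To make the distinction concrete, here is a minimal, hypothetical sketch of how a domain-wide exception rule behaves; the function and variable names are illustrative and not any specific blocker’s real API.

```python
# Hypothetical sketch: a domain-wide whitelist rule, as created by
# "Don't run on pages in this domain". Names are illustrative only.
from urllib.parse import urlparse

whitelist = {"herond.org"}  # domains where the blocker never runs

def is_whitelisted(url: str) -> bool:
    """Return True if the URL's host is a whitelisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in whitelist)

print(is_whitelisted("https://herond.org/blog/any-page"))  # True: whole domain matches
print(is_whitelisted("https://docs.herond.org/"))          # True: subdomains match too
print(is_whitelisted("https://example.com/"))              # False: blocker stays active
```

Because the rule matches the entire domain, including its subdomains, the blocker stays off on every page of a trusted site, whereas a quick “disable on this site” toggle typically applies only to the page you are currently viewing.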

Step 3: How do I turn off ads blocker – Refresh the Page

The final and crucial step to apply your change is to refresh the page (F5 or the refresh icon). Whether you choose to disable the blocker for the site or pause it temporarily, the browser needs to reload the page without the extension’s script running. This simple action immediately loads the content, allowing you to bypass the paywall or access the video without further delay.

Specific Instructions by Browser/Extension (Detailed Utility)

Chrome/Edge (Extension-Based)

For users on Chrome or Edge, managing an extension-based ad blocker requires navigating to the dedicated settings page. The fastest way to access this is by typing chrome://extensions (or edge://extensions for Edge) directly into your address bar. This grants you the detailed control panel necessary to completely disable the extension, manage its permissions, or remove it entirely from your browser.

Firefox

For Firefox users, managing your ad blocker requires navigating the Add-ons Manager. You can quickly access this by typing about:addons into your address bar, or by clicking the menu icon (three horizontal lines) and selecting “Add-ons and themes.” This centralized hub provides the detailed controls needed to fully disable, adjust specific permissions, or completely remove your ads blocking extension from the browser.

Safari

To manage ads blockers in Safari, the process is integrated directly into the browser’s preferences rather than relying on an extensions page. On a Mac, navigate to Safari > Settings (or Preferences) and select the Websites tab. Here, you can find the Content Blockers section, allowing you to quickly disable the blocker for individual sites or adjust its general settings and permissions across the board.

Herond Browser

The Herond Browser eliminates the need for disruptive third-party extensions altogether. Its powerful ad and content blocker is managed directly within the main Settings menu, providing seamless, integrated control. This built-in approach offers a smarter, less intrusive defense, allowing you to easily adjust protections without juggling multiple add-ons.

The Smarter Way to Block Ads: Introducing Herond Browser

Why Separate Extensions Fail

Separate ad-blocking extensions often create more problems than they solve, frequently breaking website functionality and noticeably slowing down overall browser performance. Crucially, they require constant manual intervention, forcing the user to disable them repeatedly just to access content. This defeats the purpose of seamless browsing and highlights the limitation of relying on third-party add-ons for essential functionality.

Herond’s Integrated Ad Shield

Performance

Faster browsing because the blocker is native.

Selective Blocking

Easily toggle blocking per site or choose to only block malicious/intrusive ads, allowing non-intrusive ads to support content creators (Solving the user’s original problem more elegantly).

Security

Native integration offers deeper protection against malware and trackers.

Conclusion

You now have the simple, three-step method for managing your ad blocker and immediately regaining access to paywalled sites or crucial content. Whether you used the quick fix – selecting “disable on this site” – or adjusted the settings for a permanent exclusion, a simple page refresh is all it takes. Remember that while disabling blockers is sometimes necessary, consider shifting to the Herond Browser. Its built-in, selective ad filtering system means you avoid the need for external extensions entirely, offering a faster, smoother, and less intrusive way to browse while still controlling your online experience.

About Herond

Herond Browser is a Web browser that prioritizes users’ privacy by blocking ads and cookie trackers, while offering fast browsing speed and low bandwidth consumption. Herond Browser features two built-in key products:

Herond Shield: an adblock and privacy protection tool; Herond Wallet: a multi-chain, non-custodial social wallet.

Herond aims to become the ultimate Web 2.5 solution that sets the ground to further accelerate the growth of Web 3.0, heading towards the future of mass adoption.

Join our Community!

The post How to Turn Off Ads Blocker in 3 Simple Steps appeared first on Herond Blog.


Tuesday, 21. October 2025

Indicio

Why Digital Travel Credentials provide the strongest digital identity assurance

The post Why Digital Travel Credentials provide the strongest digital identity assurance appeared first on Indicio.
Stop fraud before it starts. Indicio Proven closes the door on account takeovers, social engineering scams and deepfakes with secure, interoperable Digital Travel Credentials — the highest level of digital identity assurance and the easiest to use.

By Helen Garneau

Identity fraud is rising around the world, and travelers are starting to lose confidence that airlines and hotels can keep their personal data safe. More stories about new kinds of scams are coming out using AI-generated deepfakes, fake documents, and other digital tricks that can fool identity systems. As airports and airlines depend more on facial recognition and other biometric tools, the risk of these attacks becomes a serious threat to the entire travel experience.

Think of how this plays out in real life. A thief uses a stolen credit card to buy an airline ticket and checks in with a forged passport. An impostor calls into an airline call center with a stolen password, takes over the victim’s account, and steals their miles. A criminal walks through a border checkpoint using false biometrics. Each case happens because identity cannot be verified in real time, directly from the traveler.

Digital Travel Credentials fix this problem.

A Digital Travel Credential — DTC — is a secure, digital version of a passport that aligns with specifications outlined by the International Civil Aviation Organization (ICAO), the global body responsible for standardizing physical passports.

Currently, there are two types of implementable DTCs: One issued by a government along with a physical passport (DTC-2), or one issued by an organization, such as an airline or hotel, by way of data derived from a valid passport and biometrically authenticated against the rightful passport holder (DTC-1).

The data in each DTC is digitally signed, which provides a way to cryptographically prove its origin (who issued it) and that it hasn’t been tampered with. The credential is held by the passport holder in a digital wallet on their mobile device, which provides two additional layers of binding (a biometric or code to first unlock the device and then the wallet).
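As a rough illustration of the tamper-evidence described above, here is a minimal sketch of signing and verifying a credential payload with an Ed25519 key pair using the Python `cryptography` library. This is not the actual DTC signing profile; the payload and key handling are simplified assumptions for demonstration.

```python
# Minimal sketch: how a digital signature proves origin and detects tampering.
# Illustrative only; real DTCs follow the ICAO signing specifications.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g., a passport authority) signs the credential payload.
issuer_key = Ed25519PrivateKey.generate()
credential = b'{"doc_type": "DTC", "holder": "Jane Doe", "expiry": "2030-01-01"}'
signature = issuer_key.sign(credential)

# A verifier with the issuer's public key can confirm origin and integrity.
issuer_public_key = issuer_key.public_key()
issuer_public_key.verify(signature, credential)  # no exception: authentic

# Any change to the data, however small, makes verification fail.
try:
    issuer_public_key.verify(signature, credential.replace(b"2030", b"2099"))
except InvalidSignature:
    print("tampering detected")
```

Because only the issuer holds the private key, a valid signature proves who issued the credential, and any altered byte breaks verification — which is why the data cannot be re-engineered after issuance.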

Here’s what makes the DTC a deepfake buster

First, AI can’t be used to re-engineer the cryptography and alter the data. Second, each person carries an authenticated biometric with them for verification. It’s like having a second you to prove who you are. The biometric template in the credential can be automatically compared with a liveness check, so authentication is not only instant, it doesn’t require the verifying party to store biometric data.

The DTC completely transforms identity verification and fraud prevention in one go.

The upshot is that identity authentication no longer needs usernames, passwords, centralized data storage, multifactor authentication, or increasingly complex and expensive layers of security; instead, customers hold their data and present it for seamless, cryptographic authentication, which can be done anywhere using simple mobile software.

Their data is protected, you’re protected, and your operations can be automated and streamlined for better customer experiences and reduced cost.

The easy switch for implementing DTC credentials

Indicio Proven® is the most advanced market solution for issuing, holding, and verifying interoperable DTC-1 and DTC-2 aligned credentials, with options for the three leading Verifiable Credential formats: SD-JWT VC, AnonCreds, and mDL.

Proven is equipped with advanced biometric and document authentication tools, and our partnership with Regula enables us to validate identity documents from 254 countries and territories for issuance as Verifiable Credentials. It has a white-label digital wallet compatible with global digital identity standards and a mobile SDK for adding Verifiable Credentials to your apps.

It’s easy and quick to add to your existing biometric infrastructure, removing the need to rip and replace identity systems. It can effortlessly scale to country-level deployment, and best of all, it’s significantly less expensive than centralized identity management.

Proven also follows the latest open standards, including eIDAS 2.0 and the EUDI framework, lowering regulatory risks, preserving traveler privacy, and opening markets that would otherwise be off limits.

Shut down fraud before it starts

Fraud should never be accepted as part of doing business. With Proven DTCs, airlines can defend against ticket and loyalty fraud before they even talk to a passenger. Airports can trust the traveler and the data they receive because it matches the credential and the verified government records. Hotels can check in guests with confidence, no passport photocopying or manual lookups required — and they have a simple and powerful way to reduce chargeback fraud.

Indicio Proven removes legacy vulnerabilities to identity fraud and closes the gaps between systems so identity can be trusted from start to finish. It protects revenue, safeguards customer relationships, and restores confidence across every stage of travel.

It’s time to stop fraud, simplify identity verification, and give travelers a secure, seamless experience with Indicio Proven.

Contact Indicio today and see how you can protect your business and your customers with Indicio Proven.

 



SC Media - Identity and Access

Google faces Digital Childhood Institute lawsuit over youth privacy

Google has been sued by the Digital Childhood Institute over alleged violations of U.S. privacy laws involving unfair and deceptive practices involving children and teens, according to CyberScoop.



Federal employees doxed in cyberattack

Cybernews reports that nearly 1,000 employees from the Department of Homeland Security, the Department of Justice, and the FBI had their personal information and emails exposed by The Com hacking collective following an extensive coordinated cyberattack that comes amid the U.S. government's mounting crackdown on immigration and protests.



This week in identity

E64 - The Growing Impact of Digital Dependency

Keywords

AWS outage, digital dependency, business continuity, FIDO, authentication, passkeys, digital certificates, threat informed defense, false positives, cyber resilience


Summary

In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the recent AWS outage and its implications on digital dependency and business continuity. They explore the importance of disaster recovery plans and the evolving landscape of authentication technologies, particularly focusing on the FIDO Authenticate Conference. The conversation delves into the lifecycle of passkeys and digital certificates, emphasizing the need for threat-informed defense strategies and the challenges of managing false positives in security. The episode concludes with a call for better integration of systems and shared intelligence across the industry.


Chapters


00:00 Introduction and Global Outage Discussion

03:01 The Impact of Digital Dependency

06:00 Business Continuity and Disaster Recovery

09:10 FIDO Authenticate Conference Overview

16:09 Evolution of Authentication Technologies

21:45 The Lifecycle of Passkeys and Digital Certificates

29:59 Threat Informed Defense and False Positives

39:55 Conclusion and Future Considerations



Dock

Europe’s travel experiment just made digital identity real

This summer, Amadeus, a global travel technology company that powers many airline and airport systems, and Lufthansa, Germany’s flag carrier and one of Europe’s largest airlines, successfully tested the EU Digital Identity Wallet (EUDI Wallet) in real travel scenarios.

The test showed how credential-based travel could soon replace manual document checks.

During these tests, travellers could:

Check-in online by sharing verified ID credentials from their wallet with one click, instead of entering passport data manually.

Move through the airport by simply tapping their phone at check-in desks, bag drop machines, and boarding gates, rather than repeatedly showing physical documents.

The results point to a future where travel becomes smoother and more secure, thanks to verifiable credentials and privacy-preserving identity verification.


Elliptic

Why government agencies should own their blockchain intelligence data

Government agencies now have access to blockchain intelligence capabilities that were impossible just a few years ago. Where investigators once had to work within the constraints of third-party platforms designed for individual transaction tracing, they can now run comprehensive intelligence operations across complete blockchain datasets.



Spherical Cow Consulting

The People Problem: How Demographics Decide the Future of the Internet

“I’ve been having an intellectually fascinating time diving into Internet fragmentation and how it is shaped by supply chains more than protocols. There’s another bottleneck ahead, though, one that’s even harder to reroute: people.”

Innovation doesn’t happen in a vacuum. It requires engineers, designers, policy thinkers, and entrepreneurs. In other words, it needs human talent to build systems and set standards. And demographics are destiny when it comes to innovation. The places where populations are shrinking face not only economic strain but also a dwindling supply of innovators. The regions with young, growing populations could take the lead, but only if they can translate those numbers into participation in building tomorrow’s Internet.

Right now, the imbalance is striking. The countries that dominated the early generations of the Internet—the U.S., Europe, Japan, and now China—are either stagnating or shrinking. Meanwhile, countries with youthful demographics, especially across Africa and parts of South Asia, aren’t yet present in large numbers in the open standards process that defines the global Internet. That absence will shape the systems they inherit in the next 10-15 years.

This is the third in a four-part series of blog posts about the future of the Internet, seen through the lens of fragmentation.

First post: “The End of the Global Internet”
Second post: “Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet”
Third post: [this one]
Fourth post: “Can standards survive trade wars and sovereignty battles?” [scheduled to publish 28 October 2025]

A Digital Identity Digest podcast episode: The People Problem: How Demographics Decide the Future of the Internet (12:01).

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The United States: a leaking talent pipeline

For decades, the U.S. thrived as the global hub of Internet development. Silicon Valley became Silicon Valley not just because of venture capital, but because talent from around the world came to build there. That was then.

Domestically, U.S. students continue to lag behind peers in international comparisons of math and science performance, as OECD’s PISA 2022 makes clear. Graduate programs in engineering and computer science still brim with energy, but overwhelmingly from international students. Those students often want to stay, yet immigration bottlenecks, capped and riotously expensive H-1B visas, and green card backlogs create real uncertainty about whether they can.

Even inside the standards world, there are warning signs. The IETF’s 2024 community survey showed concerns about age distribution, with long-time participants nearing retirement and too few younger contributors entering. If the U.S. cannot fix its education and immigration systems, its long-standing leadership in setting Internet rules will decline, not through policy shifts in Washington, which are not helping, but because of demographic erosion.

China: aging before it gets rich

China has built its growth story on a huge working-age population. That dividend is spent. Fertility hovers around 1.0, far below the replacement rate of 2.1, and the working-age population has already begun shrinking. By 2040, the elderly dependency ratio will climb sharply, with more pressure on pensions, healthcare, and younger workers.

The state has made automation and AI a cornerstone of its adaptation strategy. Investments in robotics and machine learning are designed to offset the loss of youthful labor. But an older population means fewer risk-takers, fewer startups, and more fiscal resources tied up in sustaining a rapidly aging society.

Japan’s experience offers a cautionary tale. Starting in the 1990s, it faced a similar contraction. Despite strong institutions and technological sophistication, growth stagnated. China risks repeating that path on a larger scale, and with less wealth per capita to cushion the fall.

Europe & Central Asia: slow contraction, unevenly distributed

Europe’s demographic transformation is a slow squeeze rather than a sudden cliff. According to the International Labour Organization’s 2025 working paper, the old-age ratio in Europe and Central Asia—the number of people over 65 per 100 people of working age—will rise from 28 in 2024 to 43 by 2050. The region is expected to lose roughly ten million workers over that period.

The impact will not be uniform. Southern Europe is on track for some of the steepest shifts, with old-age ratios rising to two-thirds by 2050. By contrast, Central Asia maintains a relatively youthful profile, with projections of only 17 older adults per 100 workers. Policymakers across the continent are pushing familiar levers: encouraging older workers to stay employed longer, increasing women’s participation, and opening doors to migrants. But even with those adjustments, the fiscal weight of pensions, healthcare, and social protection will grow heavier, forcing innovation to rely more on productivity than population.

South Korea: the hyper-aged pioneer

South Korea is the most dramatic example of how quickly demographics can shift. The Beyond The Demographic Cliff report describes a “demographic cliff”: fertility has collapsed to just 0.7 children per woman, the lowest in the world. The working-age share, 72 percent in 2016, will fall to just 56 percent by 2040. By 2070, nearly half the population will be over 65.

Unlike the U.S. or Germany, South Korea has little immigration to soften the decline; only about five percent of the population is foreign-born. Despite spending trillions of won since 2005 on pronatalist programs, fertility has only dropped further. The government has little choice but to adapt. With one of the world’s highest industrial robot densities, Korea is leaning heavily on automation and robotics. At the same time, the “silver economy” is becoming a growth engine, with eldercare, health technology, and age-friendly industries gaining traction.

The sheer speed of Korea’s shift is staggering. What took France nearly two centuries—from 7 percent to 20 percent of the population being over 65—took Korea less than thirty years. That compressed timeline means Korea is a test case for what happens when demographics move faster than institutions can adapt.

Africa: the continent of the future

While the industrialized world contracts, Africa surges. As a World Futures article makes clear, Tropical Africa alone will account for much of the world’s population growth this century. By 2100, Africa will be the largest source of working-age population in the world.

This demographic wave could be transformative. Africa holds vast reserves of cobalt, lithium, and other rare earths critical to green technologies. Combined with a youthful workforce, that could give the continent a central role in shaping the next century’s innovation. But the risks are real: education systems remain uneven, governance is fragile in many states, and climate pressures could destabilize growth. A demographic dividend only pays out if paired with investment in education and institutions.

Still, Africa is where the people will be. Whether or not it becomes a driver of global innovation depends on choices made now by African governments, but also by those investing in the continent’s infrastructure and industries.

If you’d rather have a notification when a new blog is published than hope to catch the announcement on social media, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Who shows up in the standards process

And here is the connection to Internet fragmentation: the regions with the fastest-growing, youngest populations are not yet shaping the standards process in any significant way.

The W3C Diversity Report 2025 shows that governance seats are still dominated by North America, Europe, and a slice of Asia. Africa and South Asia barely register. ISO admits the same problem: while more than 75 percent of its members are from developing countries, many lack the resources to participate actively. That’s why ISO has launched programs such as its Action Plan for Developing Countries and capacity-building initiatives for technical committees. Membership may be global, but influence is not.

Participation isn’t just about fairness. It determines the rules that future systems will follow. If youthful regions aren’t in the room when those rules are written, they’ll inherit an Internet designed elsewhere, reflecting other priorities. In the meantime, outside players are shaping the infrastructure. China is investing heavily in African digital and industrial networks, creating regional value chains that may set defaults long before African voices appear in open standards bodies.

Cross-border interdependence

Even if the Internet fragments politically or technologically, demographics will keep it globally entangled. Aging countries will depend on migration and remote work links to tap youthful labor pools. Younger countries will increasingly provide the engineers, developers, and operators who sustain platforms. Standards bodies may eventually shift to reflect new population centers, but the lag between demographic change and institutional presence can be decades.

This interdependence means that fragmentation won’t create neatly separated Internets. Instead, we’ll see overlapping systems shaped partly by who has the people and partly by who invests in them.

Destiny is in the demographics

Demographics don’t move quickly, but they do move inexorably. The U.S. risks losing its edge through education and immigration failures. China is aging before it fully secures prosperity. Europe faces a slow decline. South Korea is already living the reality of a hyper-aged society. Africa is the wild card, with the potential to become the global engine of innovation if it can turn population growth into a dividend rather than a liability.

The stage is clearly set: the regions with the people to build tomorrow’s Internet aren’t yet present in the open standards process. Others, especially China, are already investing heavily in shaping what those regions will inherit.

If you want to know what kind of Internet we’ll have in the decades to come, don’t just look at protocols or supply chains. Watch the people. Watch where they are, and who is investing in them. That’s where the future of innovation lies.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

[00:00:30] Welcome back to the Digital Identity Digest! I’m Heather Flanagan, and if you’ve been following this series, you’ll remember that we’ve been exploring Internet fragmentation from multiple angles.

In this episode, we’re zooming out once again—because even when the protocols align perfectly and the chips get made, there’s still one more piece of the puzzle that determines the Internet’s future: people.

More precisely, demographics.

Why Demographics Matter

[00:01:17] Who shows up to build tomorrow’s systems?
Who are the engineers, the designers, the startup founders?
Which regions have enough young people to sustain innovation—and which don’t?

This isn’t just about the present moment. It’s about what happens in 15 years.

[00:01:35] The countries that built and shaped the early Internet—the U.S., the EU, Japan, and more recently China—are all aging. Some are even shrinking.

Meanwhile, regions with the youngest and fastest-growing populations, such as Africa and South Asia, are not yet fully represented in the rooms where global standards are written. And that gap matters deeply for the Internet we’ll all inherit.

The United States: Talent Pipeline Challenges

[00:02:07] For decades, the U.S. has been the global hub for Internet innovation. Silicon Valley thrived not just on venture capital, but because brilliant people from around the world came to build there.

[00:02:20] Yet, the domestic talent pipeline is starting to leak:

U.S. students lag behind international peers in math and science.
Graduate programs remain strong, but most are filled with international students.
Immigration backlogs and visa caps make it harder for those graduates to stay.

[00:02:44] Even inside the standards community, demographics are aging. The IETF’s own survey shows long-time contributors retiring and not enough young participants stepping in.

If the U.S. can’t fix its education and immigration systems, its leadership won’t decline due to competition—it’ll slip because there aren’t enough people to carry the work forward.

China: From Growth to Grey

[00:03:10] China’s story is different—but no less stark. For decades, its explosive growth came from a huge working-age population.

[00:03:19] That demographic dividend is over. Fertility rates have fallen to barely one child per woman. The working-age population peaked in 2015 and has been shrinking since.

[00:03:33] China’s solution has been to automate—investing heavily in robotics, AI, and machine learning.

But as populations age, societies often shift resources away from risk-taking. An older economy tends to:

Produce fewer startups
Take fewer risks
Spend more on pensions and healthcare

Japan’s experience offers a cautionary example—and China risks following it on a larger scale and with less wealth per person to cushion the impact.

Europe: Managing a Slow Decline

[00:04:24] Europe faces a quieter version of the same story.

[00:04:41] By 2050, the ratio of older to working-age adults in Europe and Central Asia is expected to rise from 28 to 43. That means millions fewer workers and millions more retirees.

Europe’s strategy includes:

Keeping older workers employed longer
Expanding women’s participation in the workforce
Opening the door to migrants

However, the basic reality remains—fewer young people are entering the workforce. Innovation will depend more on productivity gains than on population growth.

South Korea: The Hyper-Aged Future

[00:05:12] South Korea offers a glimpse into the world’s most rapidly aging society.

[00:05:14] Fertility has collapsed to 0.7 children per woman, the lowest in the world. By 2070, nearly half the population will be over 65.

Unlike the U.S. or Germany, Korea has almost no immigration to balance the decline. Despite huge government investments in pronatalist programs, fertility continues to fall.

Korea is adapting through:

High robot density and automation
Growth in the silver economy — industries around elder care, health tech, and age-friendly products

The speed of this shift is astonishing: what took France 200 years, Korea did in less than 30. It’s now a laboratory for adaptation—figuring out how policy and technology respond when demographics move faster than politics.

Africa: The Continent of the Future

[00:06:28] While industrialized nations age, Africa is booming.

By the end of this century, Africa will account for the majority of the world’s working-age population.

Its advantages are immense:

Rapid population growth
Rich reserves of critical minerals (cobalt, lithium, rare earths)
Expanding urbanization and education

However, these opportunities are balanced by real challenges:

Under-resourced education systems
Fragile governance
Climate pressures

[00:07:22] If managed well, Africa could become the innovation hub of the late 21st century. But much depends on where investment originates—within Africa or from abroad—and whose values and standards shape the technologies that follow.

Who’s in the Room?

[00:07:54] This is where demographics meet Internet fragmentation directly.

Regions with the youngest populations are still underrepresented in open standards bodies.

The W3C’s diversity reports show most seats are still held by North America, Europe, and parts of Asia. Africa and South Asia barely register. ISO has many developing-country members, but few can participate actively.

[00:08:36] Membership may be broad, but influence is not.

And that absence matters—because standards define power. They determine how the Internet functions, what’s prioritized, and who benefits.

If youthful regions aren’t in the room when rules are written, they’ll inherit an Internet designed elsewhere.

Looking Ahead

[00:09:02] Meanwhile, China is filling that vacuum—investing heavily in African digital infrastructure and shaping defaults long before African voices are fully present in global standards.

Even as the Internet fragments politically and technologically, demographics tie us together.

Aging nations will rely on migration and remote work. Younger countries will provide the engineers and operators sustaining global platforms. Standards institutions may eventually reflect new population centers—but change lags behind demographic reality.

[00:09:43] The people who build the Internet of the future will increasingly come from Africa and Southeast Asia—while the institutions writing the rules still reflect yesterday’s demographics.

Wrapping Up

[00:10:00] Demographics move slowly—but they are relentless. You can’t rush them.

The U.S. risks losing its edge through education and immigration challenges.
China is aging before securing long-term prosperity.
Europe faces a gradual, gentle decline.
South Korea is already living the reality of hyper-aging.
Africa remains the wild card — its youth could define the next Internet if it can translate population growth into participation and policy.

[00:10:57] So, if you want to glimpse the Internet’s future, don’t just look at protocols or supply chains. Look at the people—where they are, and who’s investing in them. That’s where innovation’s future lies.

Closing Notes

[00:11:09] Thanks for listening to this week’s episode of the Digital Identity Digest.

If this discussion helped make things clearer—or at least more interesting—share it with a friend or colleague. Connect with me on LinkedIn (@lflanagan), and if you enjoyed the show, please subscribe and leave a review on your favorite podcast platform.

You can always find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and keep the conversations going.

The post The People Problem: How Demographics Decide the Future of the Internet appeared first on Spherical Cow Consulting.


auth0

Social or Enterprise: Which Connection is Right?

Understand the differences between Social and Enterprise Connections to choose the right identity provider for your application.

FastID

Your API Catalog Just Got an Upgrade

Discover, monitor, and secure your APIs with Fastly API Discovery. Get instant visibility, cut the noise, and keep your APIs secure and compliant.

Monday, 20. October 2025

1Kosmos BlockID

What Is Decentralized Identity? Complete Guide & How To Prepare

The post What Is Decentralized Identity? Complete Guide & How To Prepare appeared first on 1Kosmos.

Spruce Systems

Modernizing BSA and AML/CFT Compliance with Verifiable Digital Identity

In our U.S. Treasury RFC response, we propose an Identity Trust model to modernize AML/CFT compliance—delivering transparency, accountability, and trust for regulators and institutions, without sacrificing innovation or privacy.
Thank you to Linda Jeng and Elizabeth Santucci for their instrumental contributions to the analysis and recommendations in our U.S. Treasury comment letter.

The financial system’s integrity, and the public trust it depends on, can no longer rest on paper-era compliance. For more than fifty years, the Bank Secrecy Act (BSA) has guided how institutions detect and report illicit activity. Yet as the economy digitizes, this framework has become a drag on both effectiveness and inclusion. The cost of compliance has soared to $59 billion annually, while less than 0.2% of illicit proceeds are recovered. Community banks spend up to 9% of non-interest expenses on compliance; millions of Americans remain unbanked because the system is too manual, too fragmented, and too dependent on outdated verification models.

SpruceID’s response to the U.S. Treasury’s recent Request for Comment on Innovative Methods to Detect Illicit Activity Involving Digital Assets (TREAS-DO-2025-0070-0001) outlines a path forward. Drawing on our real-world experience building California’s mobile driver’s license (mDL) and powering state-endorsed verifiable digital credentials in Utah, we propose a model that unites lawful compliance, privacy protection, and public trust.

Our framework, called the Identity Trust model, shows how verifiable digital credentials and privacy-enhancing technologies can make compliance both more effective for enforcement and more respectful of individual rights.

Our proposal is not to expand surveillance or broaden data collection, but to make compliance more precise. The Identity Trust model is designed to be applied only where existing laws such as the BSA and AML/CFT rules require verification or reporting. Today’s compliance systems often require collecting and storing more personal information than is strictly necessary, which increases costs and risks for institutions and customers alike. By enabling verifiable digital credentials and privacy-enhancing technologies, our model ensures institutions can fulfill their obligations with higher assurance while minimizing the amount of personal data collected, stored, and exposed. This shift replaces excess data retention with cryptographic proofs, delivering better outcomes for regulators, financial institutions, and individuals alike.

This framework proposes regulation for the digital age, using the same cryptographic assurance that already secures the nation’s payments, passports, and critical systems to bring transparency, precision, and fairness to financial oversight.

A System Ready for Reform

Compliance with BSA and AML/CFT rules remains rooted in outdated workflows: identity verified with a physical ID, information stored in readable form, and personal data held centrally. These methods have become liabilities. They drive up costs, create honeypots of data for breaches, and encourage “de-risking” that locks out lower-income and minority communities.

The technology to fix this exists today. Mobile driver’s licenses (mDLs) are live in more than seventeen U.S. states, accepted by the TSA at over 250 airports. Utah’s proposed State-Endorsed Digital Identity (SEDI) approach, detailed in Utah Code § 63A-16-1202, already provides a framework for trusted, privacy-preserving digital credentials. Federal pilots, such as NIST’s National Cybersecurity Center of Excellence (NCCoE) mobile driver’s license initiative, are proving these models ready for financial use.

What’s missing is regulatory recognition: the clarity that these trusted credentials, when properly verified, fulfill legal identity verification and reporting obligations under the BSA.

The Identity Trust Model

The Identity Trust model offers a blueprint for modernizing compliance without the need for new legislation. It allows regulated entities, such as banks or state- or nationally chartered trusts, to issue and rely on pseudonymous, cryptographically verifiable credentials that prove required attributes (such as sanctions screening status or citizenship) without disclosing unnecessary personal data.

The framework operates in four stages:

Identifying: A regulated entity (the Identity Trust, of which there can be many) verifies an individual’s identity using digital and physical methods, based on modern best practices such as NIST SP 800-63-4A for identity proofing. Once verified, the trust issues a pseudonymous credential to the individual and encrypts their personal information. Conceptually, the unlocking key is split into three parts: one held by the individual, one by the Trust, and one by the courts, with any two sufficient to unlock the record (roughly, a “two-of-three key threshold”).

Transacting: When the individual conducts financial activity, they present their pseudonymous credential. Transactions are then tagged with unique one-time-use identifiers that prevent linking activity across contexts, even if collusion were attempted. Each identifier carries a cryptographically protected payload that can only be “unlocked” with the conceptual two-of-three key threshold. Entities and decentralized finance protocols processing the identifiers can cryptographically verify that each identifier was correctly issued by an Identity Trust and remains valid.

Investigating: If law enforcement or regulators demonstrate lawful cause, the court and the Identity Trust together operate their keys to reach the two-of-three threshold, authorizing access to the specific, limited data justified by the circumstances. The Identity Trust must maintain a robust governance framework that balances privacy and due process rights with law enforcement needs through judicial orders. Once the two keys are combined, the vault containing the relevant identity information can be decrypted (if it exists), revealing the individual’s information in a controlled and auditable manner, including correlating other transactions depending on the level of access granted by the lawful request. Alternatively, the individual can combine their own key with the Identity Trust’s key to see their entire audit log and to create cryptographic proofs of their actions across their transactions.

Monitoring: The Identity Trust performs continuous checks against suspicious-actor and sanctions lists in a privacy-preserving manner, under approved policies for method and interval, with the auditable logs protected and encrypted such that only the individual or duly authorized investigators can work with the Identity Trust to access the plaintext. Individuals may also request attribute attestations from the Identity Trust, for example that they are not on a suspicious-actor or sanctions list, or attestations for credit checks.

This structure embeds accountability and due process into the architecture itself. It enables lawful access when required and prevents unauthorized surveillance when not. Crucially, the model fits within existing AML authority, leveraging the same legal and supervisory frameworks that already govern banks, trust companies, and credential service providers. 
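The conceptual “two-of-three key threshold” maps naturally onto threshold secret sharing. As an illustrative sketch (the comment letter stays at the conceptual level, so this is one possible realization, not SpruceID’s actual construction), Shamir’s scheme over a prime field evaluates a random line at three points; any two points recover the secret, while any single point reveals nothing:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is done modulo P

def split_2_of_3(secret: int) -> list[tuple[int, int]]:
    """Split `secret` into 3 shares by evaluating a random line
    f(x) = secret + a1*x at x = 1, 2, 3. Any two points determine
    the line; a single share is uniformly random and leaks nothing."""
    a1 = secrets.randbelow(P)  # random slope, discarded after splitting
    return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

def recover(share_a: tuple[int, int], share_b: tuple[int, int]) -> int:
    """Reconstruct f(0) by Lagrange interpolation from any two shares."""
    (x1, y1), (x2, y2) = share_a, share_b
    inv = pow(x2 - x1, -1, P)  # modular inverse of (x2 - x1)
    return (y1 * x2 * inv - y2 * x1 * inv) % P
```

In the model described above, the secret would be the key that decrypts the encrypted identity record: the individual, the Identity Trust, and the courts each hold one share, and any two of them can jointly reconstruct the key while no single party can.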

Policy Recommendations for Treasury

SpruceID’s recommendations to Treasury and FinCEN focus on aligning policy with existing technology, ensuring that the U.S. remains a global leader in both compliance and digital trust.

Request for Consideration

Reasoning and Impact

1. Recognize verifiable digital credentials (VDCs) issued by many acceptable sources as valid evidence under Customer Identification Program (CIP) and Customer Due Diligence (CDD) obligations, including as “documentary” verification methods when appropriate.

Treasury and FinCEN should interpret 31 CFR § 1020.220 (and corresponding CIP rules and guidance) to include verifiable digital credentials if they can meet industry standards, such as a baseline of National Institute of Standards and Technology (NIST) SP 800-63-4 Identity Assurance Level 2 (IAL2) identity verification or higher, issued directly from government authorities, or through reliance upon approved institutions or identity trusts.

These verifiable digital credentials (VDCs), such as those issued pursuant to State-Endorsed Digital Identity (SEDI) approaches, should be treated as “documentary” evidence where appropriate. The principle of data minimization should become a pillar of financial compliance, with VDC-enabled attribute verification encouraged over requiring the sharing of unnecessary personally identifiable information (PII), such as static identity documents, where possible.


Current CIP programs largely presume physical IDs, limiting innovation and remote onboarding, even though the statute does not prescribe any particular medium or security mechanism.

Verifiable digital credentials issued by trusted authorities provide cryptographically proven authenticity and higher assurance against forgery or impersonation, to better fulfill the aims of risk-based compliance management programs.

Recognizing VDCs as documentary evidence would enhance verification accuracy, reduce compliance costs, and align U.S. practice with FATF Digital ID Guidance (2023) and EU eIDAS 2.0, promoting global interoperability.

Attribute-based approaches to AML, such as “not-on-sanctions-list” or “US-person,” should be preferred whenever possible as they can effectively manage risks without the overcollection of PII data, avoiding a “checkpoint society” riddled with unnecessary ID requirements.
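An attribute attestation of this kind can be sketched in a few lines. This is a hypothetical illustration only: the names are invented, and an HMAC stands in for the asymmetric signatures that real verifiable credentials (e.g. W3C VCs or ISO mDLs) use, so the sketch stays standard-library-only. The point is data minimization: the issuer attests to one boolean attribute rather than releasing the full identity record.

```python
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical; a real issuer holds an asymmetric key pair

def issue_attestation(subject_pseudonym: str, attribute: str, ttl_s: int = 3600) -> dict:
    """Attest to a single attribute (e.g. 'not_on_sanctions_list').
    No other PII is included in the attestation."""
    claim = {"sub": subject_pseudonym, "attr": attribute, "exp": int(time.time()) + ttl_s}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["mac"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_attestation(att: dict) -> bool:
    """Check the issuer's tag over the claimed fields, then the expiry."""
    claim = {k: att[k] for k in ("sub", "attr", "exp")}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(att["mac"], expected) and att["exp"] > time.time()
```

A relying institution verifies only that a trusted issuer vouches for the attribute; if any field is tampered with, verification fails, and the subject’s underlying identity record is never transmitted.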

2. Permit financial institutions to rely on VDCs issued by other regulated entities, identity trusts, or accredited sources via verified real-time APIs for AML/CFT compliance.

Treasury and FinCEN should authorize institutions to accept credentials and attestations from peer financial institutions or identity trust networks when those issuers meet assurance and audit standards.

Congress should further consider the addition of a new § 201(d) to the Digital Asset Market Structure Discussion Draft (Sept. 2025) clarifying Treasury’s authority to recognize and accredit digital-identity and privacy-enhancing compliance frameworks.

While current CIP programs still assume physical ID presentation, the underlying statute is technology neutral and does not mandate any specific medium or security mechanism. Recognizing VDCs can modernize onboarding by reducing costs and friction, improving AML data quality and transparency, and enabling faster, more collaborative investigations across institutions and borders—all while minimizing data-collection risk.

Statutory clarity ensures that Treasury’s modernization efforts rest on a durable, technology-neutral foundation. This amendment would future-proof the U.S. AML/CFT regime, align it with G7 digital-identity roadmaps, and strengthen U.S. leadership in global digital-asset regulation.

3. Permit privacy-enhancing technologies (PETs) to meet verification and monitoring obligations.

Treasury should issue interpretive guidance or rulemaking confirming that zero-knowledge proofs, pseudonymous identifiers, and multi-party computation may be used for CIP, CDD, and Travel-Rule compliance if equivalent assurance and auditability are maintained.


PETs enable institutions to prove AML/CFT compliance without exposing underlying PII, minimizing data breach and insider risk exposure while maintaining verifiable oversight.

Recognizing PETs would modernize compliance architecture, lower data-handling costs, and encourage innovation consistent with global privacy and financial-integrity standards.

4. Modernize the Travel Rule to enable verifiable digital credential-based information transfer.

Treasury should amend 31 CFR § 1010.410(f) or issue guidance allowing originator/beneficiary data to be transmitted via cryptographically verifiable credentials or proofs instead of plaintext PII.

The current Travel Rule framework was built for wire transfers, not blockchain systems. Verifiable digital credentials can carry or attest to required information with integrity, selective disclosure, and traceability.

This approach preserves law-enforcement visibility while protecting privacy, ensuring interoperability with FATF Recommendation 16 and global Virtual Asset Service Providers (VASPs).

5. Establish exceptive relief for good-faith reliance on accredited identity trust, VDC, and Privacy-Enhancing Technology (PET) systems.

Treasury should use its § 1020.220(b) rulemaking authority to provide exceptive relief deeming institutions compliant when they rely on Treasury-accredited credentials or PET frameworks meeting defined assurance standards.

Institutions adopting accredited compliance tools should not face enforcement liability for third-party system errors beyond their control. Exceptive relief would provide regulatory certainty and clear boundaries of accountability.

Exceptive relief incentivizes the adoption of privacy-preserving identity systems such as identity trusts, reducing costs while strengthening overall compliance integrity.

6. Leverage NIST NCCoE collaboration for technical pilots and standards.

Treasury and FinCEN should partner with NIST’s National Cybersecurity Center of Excellence (NCCoE) Digital Identities project to pilot mDLs, VDCs, and interoperable trust registries for CIP and CDD testing.

The NCCoE provides standards-based prototypes (e.g., NIST SP 800-63-4 and ISO/IEC 18013-5/-7 mDL) that validate real-world feasibility and assurance equivalence.

Collaboration ensures technical soundness, interagency alignment, and rapid deployment of privacy-preserving digital-identity frameworks.

7. Direct FinCEN to engage proactively with industry on the adoption of advanced technologies that enhance AML compliance, investigations, and privacy protection.

Treasury should issue formal direction or guidance requiring FinCEN to establish an ongoing public-private technical working group with industry, academia, states, and standards bodies to pilot and evaluate advanced compliance technologies.

Continuous engagement with the private sector ensures that FinCEN’s rules keep pace with innovation and that compliance tools remain effective, privacy-preserving, and economically efficient.

This collaboration would strengthen AML/CFT investigations, reduce false positives, and alleviate the compliance burden on financial institutions while upholding privacy and data-protection standards.

The Path Forward

Time and again, regulatory compliance challenges have sparked the next generation of financial infrastructure. EMV chips transformed fraud detection; tokenization improved payment security; now, verifiable identity can redefine AML/CFT compliance.

By replacing static data collection with cryptographic proofs of compliance, regulators gain better visibility, institutions reduce cost, and individuals retain control over their personal information. The transformation is not solely technological—it’s institutional: from data collection to trust verification.

SpruceID’s aim is to build open digital identity frameworks that empower trust—not just between users and apps, but between citizens and institutions. Our experience powering government-issued credentials demonstrates that strong identity assurance and privacy can coexist. In our response to the Treasury, we’ve shown how those same principles can reshape AML/CFT for the digital age. But the work is far from finished.

Over the coming months, SpruceID will release additional thought pieces on how public agencies and private institutions can collaborate to advance trustworthy digital identity, from privacy-preserving regulatory reporting to unified standards.

We invite policymakers, regulators, technologists, and financial leaders to join us in dialogue and in action. Together, we can build a compliance framework that is lawful, auditable, and worthy of public trust.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ockto

Customer management with source data: from obligatory routine to valuable check-in

In episode 14 of the Data Sharing Podcast, host Caressa Kuk talks with Gert-Jan van Dijke and Jeroen van Winden (Ockto) about customer management in the financial sector. Because once a customer is on board, the real work only begins.



Kin AI

The Kinside Scoop 👀 #16

Kin's biggest update yet

Hey folks 👋

Big one today.

Like we hinted in the last edition, Kin 0.6 is rolling out. This is Kin’s biggest update ever, and it’s packed.

Full rollout begins tomorrow (Tuesday, October 21, 2025), but we’ve got some sneak peeks for you.

We also have a super prompt based around making the most out of the new opportunities this update provides - so make sure you read to the end.

What’s New With Kin 🚀

Meet your advisory board 🧠

You’ve probably seen them drifting into chat recently - little hints of Harmony, Sage, and the rest. Now the full board of five arrives, each with expertise in advising on a particular topic.

Sage: Career & Work

Aura: Values & Meaning

Harmony (Premium only): Relationships

Pulse (Premium only): Sleep & Energy

Ember (Premium only): Social

Each one brings a different lens on your life, but all pull insight from your Journal entries, conversations, and memories.

Conversation Starters 💬

Every advisor’s chat screen now includes personalized, context-aware starters - not just to make starting a conversation easier, but to make remembering the things you wanted to talk about as effortless as possible.

Memory, re-engineered (finally) 📂

It feels like we’re always alluding to this - but now it’s here.

Kin’s Memory is now 5× more accurate when recognizing and extracting memories from conversations.

Advisors can also now search across all of your memories, Journal entries, and conversations, so they can build an understanding of context quickly.

All of this means that no matter which advisor you speak with, Kin is much more able to pull the relevant information from its improved memory structure - so you get better, smarter, more relevant advice from every advisor.

We’ve also beefed up the Memory UI. On top of the classic memory graph, you can now see what Kin knows about you - as well as the organizations and people you’re connected to.

And each of these people/organizations/places now have their own Entity pages, where you can see, edit, and add to what Kin has collected about them from your conversations.

You can even finally search memories for key words and associations!

See your progress 📊

There’s a brand new Stats page that visualizes your growth with Kin.

You can see a breakdown of usage stats and Memory types, so you can see what you’re talking about a lot, and where you and your Kin might have some blind spots.

Journaling, cleaned up 📝

Based on all your feedback, we’ve finished rebuilding the Journal from the ground up.

There’s a brand-new, simplified UI to make daily journaling easier than ever.

Premium users also unlock custom journal templates, perfect for capturing anything from gratitude logs to tough feedback moments.

New onboarding (for everyone) 🔐

Next time you open Kin, you’ll be prompted to sign in with Apple, Google, or Email.


This makes onboarding smoother and syncing easier (rumours of a desktop version abound), and lays the groundwork for future features.

But don’t worry: your data hasn’t moved an inch.

It still lives securely with you, on your device.

We’ll share a detailed write-up soon (as promised), but the short version is: simpler sign-in, same privacy-first design.

Premium (by request!) ⭐

You asked. We built it.

Premium unlocks the full Kin experience, and extends existing Free features so you can make the most of your Kin.

If you join Premium, you’ll get:

All 5 advisors (Harmony, Pulse, Ember + the two free advisors, Sage and Aura)

Unlimited text messages

1 hour of voice per day

Custom journal templates

Premium is currently $20/month - and there’s a discount if you go for 3 months.

If you don’t want to upgrade though, don’t fret. The Free tier is going nowhere: Premium is for power users who want the full advisor board and voice time.

When? 🗓️

Rollout starts Tuesday, October 21, 2025. That’s tomorrow, if you’re reading this as it goes out!


Expect updates over the following week as we make sure everything runs smoothly. Speaking of…

Talk to us!🗣

This is the biggest change Kin has ever gone through. It’s our largest step toward a 1.0 release yet - and we want to make sure we’re heading in the right direction before we get too far.

The KIN team can be reached at hello@mykin.ai for anything, from feedback on the app to a bit of tech talk (though support@mykin.ai is better placed to help with any issues).

You can also share your feedback in-app. Just screenshot to trigger the feedback form.

But if you really want to get involved, the official Kin Discord is the best place to talk to the Kin development team (as well as other users) about anything AI.

We have dedicated channels for Kin’s tech, networking users, sharing support tips, and for hanging out.

We also run three casual calls every week - and we’d love for you to join them:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

We updated Kin so that it can better help you - help us make sure that’s what it does!

Our current reads 📚

No new Slack screenshot this edition - we’ve been too busy to share new articles recently!

Article: Google announced DeepSomatic, an open-source cancer research AI
READ - blog.google

Article: Meta AI glasses fuel Ray-Ban maker’s best quarterly performance ever
READ - reuters.com

Article: Google launches Gemini Enterprise to make AI accessible to employees
READ - cloud.google.com

Article: How are MIT entrepreneurs using AI?
READ - MIT News

This edition’s super prompt 🤖

This time, your chosen advisor can help you answer the question:

“How do I make the most of new opportunities?”

Try prompt in Kin

Once the update comes out tomorrow, try hitting the link with different advisors selected, and get a few different viewpoints!

You are Kin 0.6 (and beyond) 👥

Without you, Kin wouldn’t be anything. We want to make sure you don’t just know that, but feel it.

So, please: get involved. Chat in our Discord, email us, or even just shake the app to get in contact with anything and everything you have to say about Kin.

Most importantly: enjoy the update!

With love,

The KIN Team


Ockto

Customer management with source data: from obligatory routine to valuable check-in

In recent years, banks have taken major steps in digitizing onboarding. Bringing new customers on board keeps getting easier. But when it comes to customer management - keeping customer data up to date over the life of the relationship - the sector lags behind. And this is precisely where pressure is mounting, from regulators and from the duty of care.



auth0

Introducing CheckMate for Auth0: A New Auth0 Security Tool

Announcing CheckMate for Auth0, a new, open-source tool to proactively assess and improve your Auth0 security. Analyze your tenant configuration against best practices.

FastID

3 Costly Mistakes in App and API Security and How to Avoid Them

Avoid costly app and API security mistakes. Learn how to streamline WAF evaluation, estimate TCO, and embrace agile development for optimal security.

Sunday, 19. October 2025

Matterium

THE DIGITAL CRISIS — TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE

THE DIGITAL CRISIS

TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE

Artificial inflation of real estate prices for decades caused the global financial crisis.

We propose converting the global system from a debt-backed to an equity-backed model to solve it.

We propose using AI to manage the diligence work, and using the blockchain to handle the share registers and other obligations.

By Vinay Gupta (CEO Mattereum)
with economics and policy support from Matthew Latham of Bradshaw Advisory
and art and capitalism analysis from A.P. Clarke

THE BAR CODE STARTED IT

Back when I was in primary school in the 1970s, they suddenly started teaching us binary arithmetic; why?

Well, because they could see that computers were a coming thing and, of course, we’d all have to know binary to program them. So, The Digital has been a rolling wave for pretty much all my life — that was an early ripple, but it continued to build relentlessly, and when it truly started surging forwards in the 1990s, it began to transform everything in its path. Some things it washed away entirely, some things floated on it, but everywhere it went, digital technology transformed things. Sometimes the result is amazing, sometimes it is disastrous.

The wave was sometimes visible, but sometimes invisible. You could see barcodes arrive, and replace price stickers.

I’m just a bit too young to remember when UK money was pounds, shillings and pence. In 1970 there were two hundred and forty pence to the pound.

Through the 1970s and 1980s the introduction of barcodes on goods was a fundamental change in retail, not just because it changed how prices were communicated in stores, but because it enabled a flow of real-time information about the sale of goods to the central computer systems managing logistics and money in big stores’ supply chains.

Before the bar code, every store used to put the price on every object with a little sticker gun. Changing prices meant redoing all the stickers. Pricing was analogue.

In many ways decimalization and barcoding marked the end of the British medieval period. We still buy and sell real estate pretty much the same way we did in 1970.

Monty Python and the Holy Grail, 1975

The sword has two edges

When you get digitization wrong, the downsides tend to be much larger than the upsides. It’s all very well to “move fast and break things” but the hard work is in replacing the broken thing with something that works better. It’s not a given that better systems will emerge by smashing up the old order, but the digital pioneers were young, and it seems obvious to young people that literally anything would be better than the old people’s systems. This is particularly true in America, which, being founded by revolutionaries, lacks a truly conservative tradition: in America, what is conserved, what people have nostalgia for, is revolution itself.

That makes change a constant, and this is both America’s greatest strength, and weakness. The only thing you can get people interested in is a revolution. Nobody cares about incrementally improving business-as-usual. Everybody acts like they have nothing to lose at all times.

This winner-takes-all, nothing-held-back attitude exemplified by “move fast and break things” has become the house style of digitization.

But as a result a lot of things are broken these days.

Jonathan Taplin - Move Fast and Break Things

Wikipedia turned out pretty well

You can get a decent enough answer for most purposes from Wikipedia: it’s free, it’s community generated, there are no ads, it doesn’t enshittify (yet), and you do not need to spend a fortune on a shelf full of instantly out-of-date paper encyclopedias. Most people would agree this is “digitization done right.”

Spotify, not so much: it wrecked musicians’ livelihoods, turned music listening into a coercive hellscape of ‘curated’ playlists, and is on course to overwhelm actual human-created music with AI-produced digital soundalike slop that will do its best to kill imaginative new leaps in music (no AI without centuries of history and culture built up in its digital soul could come up with anything like the extraordinary stuff Uganda’s Nyege Nyege Tapes label finds, for example). And you pay for it, or get hosed with ads. It was, after all, always intended to be an advertising platform that tempted audiences in with streamed music.

Nobody ever stopped to ask how the musicians were doing.

Of course for the listeners — the consumers of music-content-product — the experience was initially utopian. People used to talk about the celestial jukebox, the everything-machine, and for a while $9.99 a month got you that. The first phase was utopian, for everybody except the musicians, the creators of music: they had a better financial deal when they got their own CDs pressed and sold them at shows. Seriously, musicians look back at that time as the good old days. Digitization went badly wrong for music.

How Spotify is stealing from small indie artists, why it matters, and what to do about it

It’s not just the software that can be disastrous; take massive data centres: not only do they cover vast areas and consume ludicrous amounts of energy, they are extremely vulnerable to disaster. The Korean government just lost the best part of a petabyte of data when one went up in smoke; the backup, it seems, was in the same place.

Then there’s a contagious Bluetooth hack of humanoid robots that has just come to light. You can infect one of the robots, and then over Bluetooth, it can infect the other robots, until you have a swarm of compromised humanoid robots, and Elon Musk says he’s going to produce something like 500,000 of these things.

We always thought Skynet would be some sinister defence company AI, but it turns out that basically it’s just going to be 4Chan’s version of ChatGPT — and it’s not like there isn’t plenty of dodgy, abrasive internet culture in the training data already!

This is the digital crisis, it inevitably hits field after field, but whether what emerges at the end is a winner or a disaster is completely unpredictable.

Will it lead to a Wikipedia, or a Spotify, something that’s just sort of OK, like Netflix, or something deeply weird and sinister like those hacked robots? Did Linux save the world? Will it?

Why is there such a range in outcomes from a process whose arrival is so predictable? That is because the Powers-That-Be that might steer the transition, that could come up with an adequate response, the nation states, are really poor at digital. Nation States move too slowly, they fundamentally fail to understand the digital, and their mechanisms just haven’t caught up; they suck at digital at a very primordial level, so the result of any digital crisis requiring state intervention is a worse crisis.

That’s not to say that any of the possible alternatives to nation states show any sign of doing this better — that’s part of the problem.

Whoever is doing the digitizing during the critical period for each industry has outsized freedom to shape how the digitization process plays out.

4chan faces UK ban after refusing to pay 'stupid' fine

Move fast and break democracy

Eventually “move fast and break things” took over in California and beyond, and crypto (the political faction, the industry, the ideology!) identified the fastest moving object in the political space (Trump-style MAGA Republicanism) and backed it to the hilt.

The American Libertarian branch of the crypto world is now trying to build out the rest of their new political model without a real grasp of how politics worked before they got interested in it. The crypto SuperPACs and associated movements threw money at electing a team who would accommodate them, in the process destroying the old American political mode without, perhaps, much concern about what else they might do once in power.

There’s a whole bunch of “break things phase” activity emerging from this right now.

Unprecedented Big Money Surge for Super PAC Tied to Trump

The “break things” part of “move fast and break things” has a very constrained downside in a corporation. Governments are a lot more dangerous to tinker with.

The Silicon Valley venture capital ecosystem itself is a relic. It is itself a legacy system. Dating back to the 1950s American boom times, Silicon Valley is having an increasingly hard time generating revenue, and today its insularity and short sightedness are legendary. There is a lot of need for innovation, and there’s no good way to fund most of it. Keeping innovation running needs a new generation of financial instruments (remember the 2018 ICO craze?) but instead we’re stuck with Series funding models.

Funding the future is now a legacy industry.

Series A, B, C, D, and E Funding: How It Works

It still isn’t fully appreciated that today’s political crisis exists, to a significant extent, because Silicon Valley could not integrate into the old American political mode. For decades Silicon Valley struggled to find a voice in Washington, or to figure out whether the right wing or the left wing was its natural home. Meanwhile life got worse and worse in California because of a set of frozen political conflicts and bad compromises nobody seemed to be able to fix. The situation slowly escalated, but the problem in Silicon Valley was always real estate.

How Proposition 13 Broke California Housing Politics

Digital real estate is a huge global gamble

The digital crisis is just about to collide with one of society’s other major crises — the housing crisis.

We have problems, globally, with real estate. We don’t seem to be able to build enough of it, nobody seems to be able to afford it, largely because it’s being used as an asset class by finance instead of being treated as a basic human need.

Real estate availability and real estate bubbles are horrendous problems.

The U.S. Financial Crisis

Now the hedge funds are moving in to further financialize the sector at the same time as people seem not to be able to buy enough housing to have kids in.

This has been steadily getting worse since Thatcher and Reagan in the late 70s/early 80s. Once, one person in work could comfortably buy a house and support a family, then it became necessary for two people to work to do that, now it’s slipping beyond the grasp of even two people, and renting is no cheaper; renters are just people who haven’t got a deposit together for a mortgage, so are paying someone else’s and coming out the end with nothing to show for it. It’s a mess, and then we’re going to come along and we’re going to digitize real estate. What could possibly go wrong?

Well, if we don’t deal with this as being an aspect of a much larger crisis, we will be rolling the dice on whether we like the outcome we get from digitization of real estate. Things are really bad already, and bad digitization could make them so much worse. But, as is the nature of the digital crisis, it could also make them better, and it is up to us, while things are still up in the air, to make sure that this is what happens.

The initial skirmishes around the digitization of real estate have mostly been messy: the poster children are Airbnb and Booking, both of which enjoy near-monopoly status and externalize a range of costs onto the general public, while usually offering seamless convenience to renters and guests. But when things go wrong and an apartment gets trashed or a hotel is severely substandard, people are often left out in the cold, dealing with a corporation so large it might as well be a government; and this is, indeed, usually how the Nation State as an institution has handled the digital.

Corporations the size of governments negotiate using mechanisms that look more like international treaties than contracts, and they increasingly wield powers previously reserved to the State itself. It’s not a great way to handle a customer service dispute on an apartment.

Neoreaction (NRx) and all the rest of it simply want to ratify this arrangement and create a permanent digital aristocracy as a layer above the democracy of the (post-)industrial nation states: the directors and owners of corporations treated as above the law.

Inside the New Right, Where Peter Thiel Is Placing His Biggest Bets

Economic stratification and political complexity

One reason we aren’t dealing adequately with these crises is that the very existence of many of them is buried by an increase in the variance of outcomes. It used to be that people operated within a fairly narrow bandwidth. The standard deviation of your life expectations was relatively narrow, barring things like wars. Now, what we have is this incredibly broad bimodal, even trimodal, distribution. A chunk of people manage to stay in the average, a tiny number of people wind up as billionaires, and then maybe 20% of the society gets shoved into various kinds of gutters. In America, it’s medical bankruptcy, it’s homelessness, it’s the opioid epidemic, it’s being abducted by ICE, those kinds of things.

What we’ve done is create a much wider range of possible outcomes, and a lot of those outcomes are bad, but the average still looks kind of acceptable — the people at the top end of that spectrum are throwing off the averages for the entire rest of the thing.

Ten facts about wealth inequality in the USA - LSE Inequalities

In fact, generally speaking, on the streets things repeatedly approach the point of revolution as various groups boil over. If they all boil over at the same time, that’s it, game over, new regime.

We’re in a position where we’ve managed to create a much more free society with a much wider range of possible outcomes, however, the bad outcomes are very severe and often masked by the glitzy media circus around the people enjoying the good outcomes. Good outcomes are being disproportionately controlled by a tiny politically dangerous minority at the top, but as these are the ones making the rules, trying to correct the balance is super difficult.

Democracy as we knew it was rooted in economic democracy, and nothing is further from economic democracy than robots, AI, and massive underemployment. Political democracy without economic democracy is unstable and only gives the lucky rich short term benefits; they are gambling on being able to constantly surf the instabilities to keep ahead of the game, continuing to reap those benefits while palming the externalities off on everyone else. But that can’t be done; eventually someone gets something wrong and the whole lot hits the wall in financial crashes, riot, revolution, and no one gets a good outcome. It all ends up like Brazil if you’re lucky and Haiti if you’re not.

The combination of extreme wealth gaps and democracy cannot be stabilized, and increasingly the rich are looking at democracy as a problem to be solved, rather than the solution it once was. I cannot tell you how bad this is.

Yet the benefits of technology are all around us, increasingly so. Democracy tends towards the constant redistribution of those benefits through taxation-and-subsidy. To fight against being redistributed, the billionaires are rapidly moving towards a post-democratic model of political power. The general human need for access to a safe and stable future seems to be less and less a stated goal for any political faction. This is getting messy.

Today, middle of the road democratic redistribution sounds like communism, but it’s not; it just sounds like that because the current version of capitalism is so distorted and out of whack. American capitalism used to function much more like Scandinavian capitalism, a version of capitalism that gives everyone a reasonable bit of the pie, with a strong focus on social cohesion. Within that model, the slice may vary considerably in size, but it should allow even those at the lower end safe and dignified lives. Weirdly enough the only large country running a successful 1950s/1960s “rapid economic growth with reasonable redistribution of wealth” model of capitalism today is China.

Breakneck

Fractocrises and magic bullets

In 2016 there was a little dog with a cup of coffee who reflected back the feeling that the world had gone out of control and nobody cared.

In 2016. Sixteen. No covid. Not much AI. Little war. But still the pressure.

https://www.nytimes.com/2016/08/06/arts/this-is-fine-meme-dog-fire.html

Understandably some very smart people are pursuing the concept of polycrisis as a response to the many arms of chaos.

https://x.com/70sBachchan/status/1723103050116763804

Deal with the crises in silos and this mess is the result.

The impulse towards polycrisis as a model is understandable, but it’s a path we know leads to a very particular kind of nowhere. It leads to Powerpoint.

https://www.nytimes.com/2010/04/27/world/27powerpoint.html

In truth, crises are fractal. They are self-similar across levels. The chains of cause-and-effect which spider across the policy landscape in impenetrable webs are produced by a relatively small number of repeating patterns.

“Follow the money”, for example, almost always cuts through polycrisis and replaces the complexity of the situation with a small number of actors who are above the law.

To use a medical analogy, a patient can present with a devastating array of systemic failures driven by a single root cause. Consider someone suffering from dehydration: blood pressure is way down, kidneys are failing, maybe 40 different systems are going seriously wrong. Treat them individually and the patient will just die.

Step back and realise “Oh, this patient is dehydrated!”, give them water and rehydration salts and appropriate care and all the problems are solved at once.

Or maybe it’s reintroducing wolves to Yellowstone Park; suddenly the rivers work better, there are more trees, insect pests decline, because one big key change ramifies through the system and brings about a whole load of unanticipated benefits downstream. Systems have systemic health. Systems also have systemic decline. But the complex systems / “polycrisis” analysts focus entirely on how failing systems interact to produce faster failure in other failing systems, effectively documenting decline, and carry around phrases like “there is no magic bullet.”

There is. The magic bullet for dehydration is water.

Finding the magic bullets is medicine; documenting systemic collapses is merely biology.

REHYDRATING THE AMERICAN DREAM

The dollar is a dead man walking — there is no way to stabilise the dollar in the current climate. It is holed below the waterline but the general public has only the very earliest awareness of this problem today. By the time they all know there will be no more dollar. Perhaps the entire fiat system is in terminal decline as a result: if the dollar hyperinflates, or dies in some other way, will it take the Pound and the Euro and the Yen with it? Who could have foreseen this?

Frankly, in 2008, following the Great Financial Crisis, everybody knew.

https://theconversation.com/as-uk-inflation-falls-to-2-3-heres-what-it-could-mean-for-wages-230563

There is a long term macro trend of fiat devaluation. There is also the acute fallout of the 2008 catastrophe. We have a fundamental problem: the 1971 adoption of the fiat currency system (over the gold standard) is not working. The dysfunction of the fiat system detonated in 2008. We have now had 17 years of negotiations with the facts of the matter, but so far, no solutions.

Well, other than this one…

All the fiat economies are carrying tons of debt, crazy unsustainable amounts of debt, both personal and national. It could well be that a lot of smart people are thinking that a “great reset” of some kind would solve a lot of problems simultaneously.

The Jubilee Report: A Blueprint for Tackling the Debt and Development Crises and Creating the Financial Foundations for a Sustainable People-Centered Global Economy

The nature of that “great reset” is going to determine whether your children live as slaves, or live at all.

So the global approach to currency needs overhauling as part of a more general effort to make the political economy stabilize in an age of exponential change.

It is not the first time that it has been done even within living memory.

Bowie released “The Man Who Sold The World” while the dollar was still backed by physical gold. This is not ancient history. It’s not at all irrational to think that 6000 year old human norms about handing over shiny bits of metal for food might need to be updated for the world we are in today. But it’s also not too late to adjust our models and fine-tune the experiment.

Globally issued non-state fiat, like Bitcoin, is just not going to get you the society that you want, unless the society you want is an aristocratic oligarchy. Bitcoin is just a different kind of fiat — money that only exists because someone says it’s money and enough people go along with it, rather than money based on something that has intrinsic value itself. It has the same problem as fiat currency has: there is no way to accurately vary the amount of money to meet the demand for money to keep the price of money stable. Purchasing power is always going to be unpredictable and that makes long term economic forecasting difficult for workers and governments alike.

Governments print too much. Bitcoin prints too little, particularly this late in the Halving Cycle.

Understanding the Bitcoin Halving Cycle and Its Impact on 2025 Market Trends
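The inelasticity of Bitcoin issuance can be sketched numerically. This is a minimal illustration using the publicly documented consensus parameters (a 50 BTC initial block subsidy, halved every 210,000 blocks): the schedule is fixed in advance and sums to a hard cap just under 21 million coins, with no mechanism to expand supply when demand for money rises.

```python
# Sketch of Bitcoin's fixed issuance schedule: the block subsidy halves
# every 210,000 blocks, so the money supply cannot expand to meet demand.
# Integer satoshi arithmetic, mirroring the consensus rule's integer halving.

HALVING_INTERVAL = 210_000          # blocks per halving era
INITIAL_SUBSIDY = 50 * 100_000_000  # 50 BTC, expressed in satoshis

def total_supply_sats() -> int:
    """Sum the subsidy over every era until it rounds down to zero."""
    subsidy, total = INITIAL_SUBSIDY, 0
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy >>= 1  # integer halving: fractions of a satoshi are dropped
    return total

if __name__ == "__main__":
    sats = total_supply_sats()
    print(f"Hard cap: {sats / 100_000_000:,.4f} BTC")  # just under 21,000,000
```

Contrast this with a fiat central bank, which can vary issuance in either direction; the point in the text is that neither a fixed schedule nor discretionary printing reliably keeps the purchasing power of money stable.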

The problems that purchasing power fluctuations cause for estimating long-term infrastructure project economics have huge impacts too: if you can’t accurately predict the future, you can’t finance infrastructure. You can’t plan for pensions. The great wheel of civilization grinds to a halt as short-termism eats the seed corn of society. Nobody wants to make a 30 year bet because of robots and AI and all the rest, and so we wind up ruled quarter by quarter with occasional 4 year elections.

Not dollars, not Bitcoin

The debates about what money should be are not new.

Broadly, there are three models for currency:

(1) government fiat/national fiat — fine in principle but, in practice, in nearly all highly democratic societies the governments wind up inflating away their own currencies over time

(2) global fiat issued on blockchains — Ethereum, Bitcoin, all the rest of those things

(3) resource-backed currencies — conventionally that means gold but it can also apply to things like Timebanking and various mutual credit systems

Gold is already massively liquid. You cannot solve a global crisis by making gold 40 times more valuable than it currently is, which is what would happen if it became the backing for all currencies again. Gold is also very unequally distributed: Asian women famously collect the stuff in the form of jewellery, and a shift to a new gold standard could easily make India one of the wealthiest countries in the world again, women first. Much as this sounds like a delightful outcome, it’s hard to imagine a new economic order ruled by now-very-wealthy-indeed middle class Indian housewives who had a couple of generations to build up a solid pile of bangles.

This, by the way, is the same argument against hyperbitcoinization — being on the Silk Road in 2011 and buying illegal substances using bitcoin is not the same thing as being good at productive business or being a skilled capital allocator: windfalls based on a social choice about currency systems are not a sensible way to allocate wealth, although it does often happen.

Hyperbitcoinization Explained - Bitcoin Magazine

You can argue that bitcoin mining requires a ton of expertise and technological capacity, and that this is worthy of economic reward, but there is a fundamental limit to how many kilograms of gold you can rationally expect to pull out of a data center running 15 year old open source software.

Similarly, the areas which were geographically blessed (or is that cursed?) by gold would wind up with a huge economic uplift. It becomes a question of geological roulette whether you have gold or not, and unlike the coal and oil and iron and uranium lotteries, nobody can build anything using gold as an energy source or a tool. Gold is just money. It’s like an inheritance.

Resource curse - Wikipedia

So what’s the alternative? Bitcoin scarcity, gold scarcity, these are all models in which early owners of the asset do very well when the asset class is selected for the backing of the new system. Needless to say those asset owners are locked in a very significant geostrategic power struggle for the right to define the next system of the world. They are all bastards.

Strange women lying in ponds distributing swords is no basis for a system of government...

But what if we move to something that is genuinely, fundamentally useful? Well, what about land? You’re much more likely to get a world that works if you rebase the global currencies on real estate in a way that causes homes to get built, than if you rebase the world’s currencies on non-state fiat.

Both sides of this equation must balance. If we simply lock the amount of real estate in the game, then (figure out how to) use it as currency, we wind up with another inflexible monetary supply problem. Might as well use Bitcoin or Gold. We’ve been down this track: we did not like it, and in 1971 we changed course permanently.

Real estate could be “the new gold” but real estate has flexible supply because you can always build more housing.

If the law permits.

And if we can solve that problem, the incentives align in a new way: building housing increases the money supply. If house prices are rising too fast, build more housing.

Bryan's Cross of Gold and the Partisan Battle over Economic Policy | Miller Center

Artificially scarce real estate is the gold of today

We’ve been manipulating real estate prices for a few generations.

The data is screamingly clear, and it is evidence of pervasive market manipulation: housing is not hard to physically build, but there has been a massive concerted effort to keep it expensive through bureaucratic limitations on supply. There are entire nation states dedicated to this cause.

The exceptions to this rule look like revolutionary actions.

Consider Austin, Texas which saw its real economic growth and potential status as The Next Great Californian City threatened by a San Francisco style house price explosion. Austin responded with a massive building wave, and managed to rapidly stabilize house prices at a more sustainable level.

Some reports say that >50% of Silicon Valley investor money eventually winds up in the pockets of landlords.

Peter Thiel: Majority of capital poured into SV startups goes to 'urban slumlords'

The way out is to build housing, and a lot of it.

But not like this.

Digital finance has to build more real estate to win

At the root of everything is that the digitization of real estate has to build more real estate. If the next system does not work for average people to get them a better outcome than the current system, there is going to be real trouble: state failures or the violent end of capitalism.

First and above all, this means we need to build more real estate.

Building has been artificially restricted because to make it work as an investment that increases in value, there needs to be scarcity; if you build more its investment value goes down, but its utility value increases.

YIMBY - Wikipedia

One way to digitize real estate is to create currencies backed by real estate, but the logical outcome of this is to make real estate as scarce as possible to protect the value of the currency, which is a disaster for the people who actually need to live somewhere. It would be like a society where mining gold is illegal because the value of the gold supply has to be protected, except we are doing this for homes. We are here now, and we could make this disaster worse.

In truth, if we take that path, we are fucked beyond all human belief. We will have literally immanentized the eschaton. You basically wind up with the economic siege of the young by the old, and that is a powder keg waiting to blow. State failures and violent revolutions.

The 2008 crisis was triggered by over-valuing real estate (underpricing the risk, to be precise) on gigantic over-simplified financial instruments like mortgage-backed securities, literally gigantic bundles of mortgages with a fake estimate about how many of the people taking out those mortgages could afford them in the long run. The global economic slowdown triggered by the US-led invasion of Afghanistan and Iraq (don’t even get me started) hit the mortgage payers, and the risk concentrated in markets like “subprime mortgages” and the credit default swaps which were being used to hedge those (and other) risks.

Credit default swap - Wikipedia

The digital crisis, when it hits real estate, could make 2008 look like the boom of the early 90s. However we choose to tokenize real estate, it has to result in more homes getting built.


You cannot use real estate as the backend for stablecoins, then limit the supply of real estate in a way that causes prices to continually go up. That paradigm is what has caused the current real estate crisis. It’s been destroying our societies in America and Europe for decades, so it’s not going to solve the crisis it has caused.

This is largely downwind of Thatcher and Reagan and financial deregulation on one hand, paired with promises to control inflation over the long run (we’re talking decades). This was the core promise made by the Conservatives: inflation will stay low forever. We will not print money.

Once that promise was in place it was possible to have low interest rates and long mortgages, meaning the working class could afford to buy housing. They called this model the Ownership Society.

Ownership society - Wikipedia

The ownership society (and associated models) was an attempt to change the incentives for poor voters so they would not use democracy to take control of the government and vote money from the rich into their own pockets.

What we’ve done is we’ve basically bribed an entire generation (the boomers) with that model, and now we’re at the point where they have no grandchildren and the entire thing is collapsing because housing is a much worse kind of bitcoin than bitcoin. Expensive bitcoin makes bitcoin hard to buy. Expensive housing devastates entire societies. And that’s where we are today.

The solution to all of these ills is to solve these crises at a fundamental level. The patient is dehydrated. The patient needs water. Affordable housing.

This is why you’ve got to focus on outcomes for average people: in any crisis you can find a minority of people who are thriving. Those people are useless for diagnosing the cause of the crisis. You have to look at the losers to understand why the system is broken.

The rent is too damn high.

The patient needs water not antibiotics

If we fix the housing part of this digitization crisis correctly, the results are going to be amazing. That could be the one big change that propagates through the entire financial system and brings back the balance.

Essentially, what works is not backing a currency with real estate, then manipulating the real estate supply to prop it up. What works, we believe, is being able to use land directly as a kind of currency. This is not in the current sense of taking out a loan with the land as collateral, but instead using it directly as money without ever having to dip into any kind of fiat; no need to turn anything into dollars to be able to trade things. Why would I pay interest on a loan against my collateral if I can simply pay for something using my collateral directly?

If we digitize real estate properly, the reward is that we could potentially use tokenized real estate to stabilize the financial system. Regulatory friction is keeping real estate, by far the world’s largest asset class, illiquid in a world which desperately needs liquidity. But there is also a very hard problem in automating the valuation of real estate, and that is going to need AI.

When something is digitized it is inevitably an approximation, and the consequences of that approximation are much larger in some areas than others. With real estate, when we buy and sell we’re constantly in a position where we are dealing with the gap between the written documentation of the real estate and the actual value of the asset. As a result, you wind up with another kind of digitization crisis, one caused by the gap between the digital representation of the object and the object itself.

Using current systems, the liability pathways attached to misleading information in a data set being used to value assets would normally be revealed during legal discovery. If the problem is worth less than tens of millions it’s never going to be found out. If the problem is worth tens or hundreds of billions, it’s now too late. A lot slips through the gaps, historically speaking. And this is only going to get worse now that sellers have started to fake listings using AI.

Realtors Are Using AI Images of Homes They're Selling. Comparing Them to the Real Thing Will Make You Mad as Hell
Real estate listing gaffe exposes widespread use of AI in Australian industry - and potential risks

This information-valuation-risk nexus creates friction; to get real estate digitization to work we need to eradicate that friction, and keep fake listings out of the system. This challenge is only going to get harder.

Total Value of Global Real Estate: Property remains the world's biggest store of wealth | Savills Impacts

Real estate is a safer fix for the currency crisis

“Without revolution” is a feature, not a bug.

Vitally, unlike gold or bitcoin, the distribution of land and real estate ownership closely tracks current estimates of people’s wealth: a shift to a real-estate-based economic model would not have the same gigantic and disruptive impacts as moving to gold or bitcoin or both. There is enough value there too: $400 trillion of real estate, versus $30 trillion of gold or only $38 trillion of US national debt. Global GDP is a bit over $100 trillion.

There is enough real estate, correctly deployed, to create a stable global medium of exchange.

The valuation problem has meant that previously the transactional costs of pricing real estate as collateral were insane. Instead of doing the hard work, pricing the real estate, financial institutions priced the mortgages on the real estate using simplistic models. The 2008-era financial system simply treated the mortgage as a promise to pay, without evaluating whether the person who was supposed to pay had a job, or if anybody was willing to buy the underlying asset which was meant to be backing the mortgage. A thing is worth what you can sell it for!

Shocking Headlines of the 2008 Financial Crisis - CLIPPING CHAINS

You would think somebody was minding the store, but you only need to look at the post-2008 shambles to realize not only is there nobody minding the store, the store itself burned down some time ago. In fact the global financial system is a system in name only: it’s more like a hybrid of a memecoin economy, a Schelling point, a set of real economic flows of oil and machine tools and microprocessors, and big old books of nuclear strategy and doctrine. The global “system” is a bunch of bolted-together game boards with innumerable weird pieces held together by massive external pressures, gradually collapsing because the stress points between different complex systems are beyond human comprehension. Environmental economics, for example. Or the energy policy / national security interface. AI and everything. The complexity overwhelms understanding and the system degrades.

It does not have to be this way.

When you have AI to price complex collateral like real estate (or running businesses), you can do things with that collateral that you couldn’t do previously. Of course that AI system needs trustworthy inputs. If the information coming into the system is factual, and the AI is an objective analyst, various parties can use their own AI systems to do the pricing without human intervention, so the trade friction plummets. Remember too these are competitive systems: players with better AI pricing models will beat out players with less effective price estimation, and that continuous competition will keep the markets honest, at least for a while.

Mattereum Asset Passports can provide the trustworthy inputs, again based on an extremely competitive model to price the risk of bad information getting into the Mattereum Asset Passport which is being used by the AI system to price the asset. The economic model we use was built from the ground up to price every substantial asset in the world even in an environment with the pervasive use of AI systems to manufacture fake documents and perpetrate fraud. We literally built it for these times, but we started in 2017. That’s futurism for you!

The economic mechanism of the Mattereum Asset Passport is a thing of beauty. The way that it works is that data about an asset is broken up into a series of claims. For example, for a gold bar, weight, purity, provenance, vaulting, and delivery details are likely enough to price the bar. For an apartment there might be 70 claims, including video walk-throughs of the space and third-party insurances covering issues like title or flood. Every piece of information in the portfolio is tied to a competitively-priced warranty: buyers will rationally select the least expensive adequate warranty on each piece of data. This keeps warranty prices down. This process is a strain with humans in the loop for every single decision, but in an agentic AI economy this competitive “Product Information Market” model is by far the best way of arriving at a stable on-chain truth about matters of objective fact.
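The "least expensive adequate warranty" selection can be sketched in a few lines. This is purely illustrative and not Mattereum's actual data model: the claim names, offer structure, and coverage threshold below are invented for the example.

```python
# Illustrative sketch: each claim about an asset attracts competing warranty
# offers; the buyer picks the cheapest offer whose coverage meets the
# requirement for that claim. (Hypothetical data model, not Mattereum's.)

from dataclasses import dataclass

@dataclass
class WarrantyOffer:
    underwriter: str
    annual_premium: float   # cost of warranting this claim
    coverage: float         # payout cap if the claim proves false

def cheapest_adequate(offers, required_coverage):
    """Return the least expensive offer whose coverage meets the requirement."""
    adequate = [o for o in offers if o.coverage >= required_coverage]
    return min(adequate, key=lambda o: o.annual_premium) if adequate else None

# Two example claims about one property, each with competing offers and a
# required coverage equal to the asset's value.
claims = {
    "title_is_clear": ([WarrantyOffer("A", 120.0, 500_000),
                        WarrantyOffer("B", 90.0, 400_000),
                        WarrantyOffer("C", 60.0, 100_000)], 380_000),
    "no_flood_history": ([WarrantyOffer("A", 45.0, 380_000),
                          WarrantyOffer("B", 70.0, 380_000)], 380_000),
}

for claim, (offers, required) in claims.items():
    pick = cheapest_adequate(offers, required)
    print(claim, "->", pick.underwriter, pick.annual_premium)
```

Note that underwriter C's cheap but inadequate offer on the title claim is skipped: competition pushes premiums down only among offers that actually cover the risk.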

It’s not that the system drives out error: it does, but the point is that it accurately prices the risk of error which is a much more fundamental economic process. This is a subtle point.

Bringing Truth to Market with Trust Communities & Product Information Markets

The combination of AI to commit real estate fraud and Zcash and similar technologies to launder the money stolen in those frauds is going to be unstoppable without really good, competitive models for pricing and then eliminating risk on transactions. The alternatives are pretty unthinkable.

In this new model, if I come to you with a token that says, based on the Mattereum Asset Passport data, that this property is worth $380,000, then you can say “I will pay you 20% of this property in return for a car,” and there’s the transaction. You take 20% of a piece of real estate, I take an SUV. Maybe you can require me to buy back a chunk of that equity every month (a put option). Maybe the equity is pulled into an enormous sovereign wealth fund type apparatus which uses the pool to back standard stable tokens backed by fractions of all the real estate in the country. The story may begin with correctly priced collateral, but it does not end with correctly priced collateral. This is the anchor, but it is only a part of a system.
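The arithmetic of that trade is simple enough to work through. The monthly buyback figure below is an invented assumption for illustration, not anything from the source.

```python
# Hypothetical arithmetic for the equity-for-goods trade described above:
# a property valued at $380,000 (per its asset passport) pays for an SUV
# with 20% of its equity, and the owner then repurchases that equity
# monthly (the put option). The $1,900/month figure is assumed.

PROPERTY_VALUE = 380_000
EQUITY_FRACTION = 0.20
equity_transferred = PROPERTY_VALUE * EQUITY_FRACTION  # value handed over for the SUV

MONTHLY_BUYBACK = 1_900
months_to_full_buyback = equity_transferred / MONTHLY_BUYBACK

print(f"Equity exchanged for SUV: ${equity_transferred:,.0f}")
print(f"Months to repurchase at ${MONTHLY_BUYBACK:,}/month: {months_to_full_buyback:.0f}")
```

At those assumed numbers the owner hands over $76,000 of equity and, like an interest-free amortization, buys it back over roughly forty months.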

If we get it right — and it’s a lot of moving parts — we could get out of the awful shadow of not only 2008’s financial crisis, but the calamitous changes to the global system which emerged from 1971.

WTF Happened in 1971?

The pragmatics of making real estate liquid

As long as you’ve got the ability to do relative pricing based on AI analysis, you don’t need to convert everything into currency to use it in trade. If you have an AI that can do the relative valuations, including risk and uncertainty, you can reach a position where you don’t have to use fiat money to make a fair exchange between different items, like land and cars, or apartments and antique furniture, or factories and farms; there are a whole set of AI-based value-estimation mechanisms that can be used for doing that and produce a fair outcome.

This cuts down or eliminates the valuation problems which can be caused by any kind of fiat — be it government fiat like the dollar, or private fiat like bitcoin — making it possible to operate on tokenized land: tokens based on an asset that is dramatically more stable and inherently non-volatile. Solid assets back transactions. Closer to gold, but more widely distributed.

It’s a big story. But at its simplest, what if we just said… “look, this is a global currency crisis. And the reason we’re in that crisis is artificial inflation of real estate prices. Take the inflated real estate and the debt associated with it, transform that debt into equity, you know, a debt-to-equity transformation…” and we restart the game on a sounder basis.

Who can follow along with that tune?

If you tokenize half, or even a third, of the real estate, what that provides is a staggeringly enormous pool of assets which move from being illiquid to liquid and that liquidity — widely distributed in the hands of ordinary people by virtue of them already owning these properties — then bails out the rest of the system. The conversion of mortgage debt into shared ownership arrangements, as mortgage lenders take equity rather than facing huge waves of defaults (again), balances the books without requiring huge government bailouts and money printing as in 2008. Homeowners do not hit the sheer logistical nightmares of moving house (particularly in old age) nor do they have to borrow money from lenders by remortgaging, creating more debt.

Rather than attaching debt to the real estate, we simply add a cap table to the real estate as if it was a tiny little company, and then let the owners sell or exchange some of that equity for whatever they want.
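The "cap table on a house" idea can be sketched as a tiny share register. This is a minimal illustration under invented names, not a description of any real title or token system.

```python
# Minimal sketch of a title deed treated as a tiny company's cap table:
# a mapping from holder to fractional ownership, with a transfer operation.
# Holder names and fractions are invented for illustration.

def transfer_equity(cap_table, seller, buyer, fraction):
    """Move `fraction` of the whole property from seller to buyer."""
    if cap_table.get(seller, 0.0) + 1e-12 < fraction:
        raise ValueError("seller does not hold that much equity")
    cap_table[seller] -= fraction
    cap_table[buyer] = cap_table.get(buyer, 0.0) + fraction
    return cap_table

house = {"homeowner": 1.0}                           # starts wholly owned
transfer_equity(house, "homeowner", "dealer", 0.20)  # 20% traded away
print(house)
```

The point of the structure is its familiarity: this is exactly the share-register bookkeeping corporate finance already does, just scaled down to a single dwelling.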

It’s a relatively small change to established norms, with massive, outsized benefits.

The key benefit of this approach is precisely that it is non-revolutionary. Compare the social stresses between this approach and doing that rescue process by massively pumping the price of (say) Bitcoin. In the hyperbitcoinization model you wind up with massive, massive, massive class war because you have people that were cryptocurrency nerds who are now worth half a trillion. You can’t have that kind of transfer of power without the system trying to engineer around it. Same thing happens with gold at $38,000 an ounce. The shift in wealth distribution is too violent for society to survive the transitional processes.

But making real estate truly liquid gives the economy the flexibility it desperately needs, probably without wrecking the world in the process.

Turning real estate debt into real estate equity and then making the equity tradable is not a new trick in finance: large scale real estate finance projects do things like this all the time. We’re just using established techniques from corporate finance at a much smaller scale, on a house-by-house basis, to safely manage the otherwise unmanageable real estate bubble. If every piece of real estate in America had the ability to do tokenized equity release built into the title deeds, America would not have solvency problems.

Pricing debt which does not default is relatively easy, and prior to 2008 the global system sought stability by pricing debt as if it would not default. This looks like a joke now. But pricing defaults on debt is very, very hard because the global economy is just a part of a much larger unitary interlinked system, and factors beyond the view of spreadsheets can cause the world to move: COVID, most recently. Such correlated risks change everything and are inherently unpredictable. Debt-based economies carry such risks poorly. Equity is a much better instrument for handling risk, but we have over-restricted its use, and are paying the price (literally) for this societal-scale error of judgement.

Debt cannot do what equity can, and we have too much debt and not enough equity.

Pricing complex and diverse assets like real estate is orders of magnitude harder than pricing good debt. Fortunately we now have the compute.

Flipping us from a debt world to an equity world needs a competitive AI environment to value the assets, and the blockchain to make issuing and transferring equity in those assets manageable.

That’s what’s needed to start clearing up the gridlocked debt obligation nightmare.

It’s not that hard to imagine: if you can tokenize one house, you can tokenize all of them. Think of it as the debt-to-equity transformation for all of the mortgage debt: pull the mortgage debt back out of the American system by turning it into equity, allocate that equity to the banks, and you could actually make America liquid again much faster.

It is an extreme manoeuvre, but the question is, as always, “compared to what?”

At the end of that we’d be left with a very different real estate ownership model, more like the Australian strata title or English commonhold model. In both of these instances, aspects of a real estate title deed are split between multiple owners (the “freehold” is fractional), forming what amounts to an implicit corporate structure within every real estate title deed.

Imagine that, but scaled.

Strata title - Wikipedia
Commonhold - Wikipedia

So practical government fought dirty for years

Business is pretty good at change once government gets out of the way.

Once tokenized equity is clearly regulated in America, business will figure out real estate tokenization very fast. We could see 5,000 companies in America that are capable of doing real estate tokenization five years after the SEC says it’s okay to do it.

Business will create competing industrial machines that will effect the transformation, and get huge numbers of people out of the debt. Shared equity arrangements for housing could rebalance the economy without crashing society. The speed at which society can get the assets on chain is equal to how quickly finance can satisfactorily document them and fractionalize them.

What is a plausible documentation standard for a real world asset on chain that you could use an AI system to create? That’s a Mattereum Asset Passport.

Mattereum aims to get real estate through the digitization crisis in a healthy and productive way. Specifically, a decentralized, networked way which is kept honest by ruthless competition to honestly price risk in fair and free (AI powered) markets.

A business model which is the best of capitalism.

The alternatives are not attractive.

But there is reason for hope.

Once you tell the Americans what the rules are, the Americans will go there and do it. The only way that the SEC could hold back mass adoption of crypto was by refusing to tell people the rules. It doesn’t matter how onerous the regulatory burden was, if the SEC had told people the rules, they would have crawled up that regulatory tree a branch at a time, and we would have had mass tokenization six months after the rules were set, whatever the rules were.

The long delay was only possible because of an aggressive use of ambiguity, I’m going to say charitably, to protect Wall Street. Maybe it was to keep Silicon Valley out of the banking business, but however you want to think about it, the SEC had a very strong commitment under previous administrations that there was not going to be mass tokenization.

We can take this further — as the digitization wave washes inevitably over everything, if we continue to use this model we can finally be done with the age of the Digital Crisis and all its chaoses, replaced with far more stable, and predictably advantageous outcomes. For example, if everybody is using an AI to put a price tag on anything they look at, and all I have to do is hold that up in the air and say, does anybody want this? Then what you could get is effectively a spot market in everything, because the AIs do pricing. In that environment is anybody going to get a destructive permanent lock-in? What makes most of the big digitization disasters into disasters is the formation of wicked monopolies, after all.

Spot markets today are for things like gold, oil or foreign exchange, anything where there’s so much volume in the marketplace that the prices are basically set by magic. With a vast number of participants in a global marketplace, all you need to do is hold up an asset, then everybody uses their AI to price the asset, resulting in a market that has a spot price for everything. Add the tokens to effect the title transfer. When you have a market that has a spot price on everything, all assets are in some way equivalent to gold — the thing that makes gold, gold, is that you can get a spot price on it. So if we have spot pricing for basically everything, based on AI agents, what you wind up with is being able to use almost any asset in the world as if it was gold. Everything is capital, completing the de Soto vision of the future.
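One way to picture how many independent AI pricers could converge on a spot price: aggregate their quotes with a robust statistic like the median. The agent behavior below is simulated noise around an assumed underlying value, not a real pricing model.

```python
# Sketch of the "spot price for everything" idea: many independent AI pricing
# agents quote the same asset, and a robust aggregate (the median) serves as
# the spot price. Agents are simulated with random error; nothing here is a
# real valuation model.

import random
import statistics

def simulated_agent_quote(underlying_value, rng):
    # Each agent's model errs within +/-5% of the asset's underlying value.
    return underlying_value * rng.uniform(0.95, 1.05)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
quotes = [simulated_agent_quote(380_000, rng) for _ in range(1_000)]
spot = statistics.median(quotes)
print(f"spot price from {len(quotes)} agent quotes: ${spot:,.0f}")
```

Using the median rather than the mean makes the aggregate resistant to a minority of wildly wrong (or dishonest) quotes, which is the same property that keeps the competitive pricing described above honest.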

Finance and Development

In this future, all assets are equivalent to gold because you can price them accurately and cheaply, and can verify the data about them. It changes the entire nature of global finance, because that finally removes the friction from transacting assets. Then, if you’ve got near-zero friction transactions in assets, why use money? No need for dollars, no need for bitcoin; instead, a new financial system creating itself out of whole cloth on the fly, one that is stable and shows every sign of being rational because it is diverse and not tied to any single asset that can distort the market through exuberance and crashes. Diversification is the only stability.

Now that would be a paradigm worthy of the name “stablecoins”!

Anyone got a better plan for saving the world?

In a world that has blockchains, artificial intelligence, and a global currency crisis, we need big ideas and big reach to get to a preferable future. It’s an alignment problem, not just AI alignment but capital alignment. We don’t strive against 2008’s AAA bonds backed by mouldy sheds alone, but against future, Nick Land-style failures of AI alignment.

Through the lens of AI, we can start looking at all the world’s real estate as an anchor for the rest of the economy. When we put the diligence package for a piece of real estate on chain in the form of a Mattereum Asset Passport, then over time 50 or 70 or 90 or 95 or 99.9% of the diligence could be done by competing networks of AIs, striving to correctly value property and price risk in competitive markets which reliably punish corruption with (for example) shorts. With those tools, we could rapidly tokenize the world and use the resulting liquidity to keep the wheels from falling off the global economy.

This is, at least in potential, a positive way of solving the next financial crisis before it really starts and ensuring that the digitization of real estate does not create another digital disaster.

CONCLUSION

Artificial inflation of real estate prices for decades caused the global financial crisis.

We propose converting the global system from a debt-backed to an equity-backed model to solve it.

We propose using AI to manage the diligence work, and using the blockchain to handle the share registers and other obligations.

THE DIGITAL CRISIS — TOKENS, AI, REAL ESTATE, AND THE FUTURE OF FINANCE was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 17. October 2025

Shyft Network

G20’s Crypto Dilemma: Regulation Without Coordination

The Financial Stability Board (FSB) — the G20’s global risk watchdog — released a sobering statement: there remain “significant gaps” in global crypto regulation.

It wasn’t the typical bureaucratic warning. It was a clear signal that the world’s financial governance structures are lagging behind the speed and fluidity of decentralized systems. For an industry built on cross-border code and borderless capital, national rulebooks no longer suffice.

But the FSB’s concern reaches beyond oversight. It exposes an unresolved paradox at the heart of digital finance: how to regulate what was designed to resist regulation.

Fragmented Governance, Unified Risk

The FSB’s assessment underscores a growing structural mismatch. The world’s regulatory responses to crypto have been disparate, reactive, and jurisdictionally fragmented.

The United States continues to rely on enforcement-driven oversight, led by the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), each defining “crypto assets” through its own lens. The European Union is pursuing harmonization through the Markets in Crypto-Assets Regulation (MiCA), creating the first comprehensive regional rulebook for digital assets. Asia remains diverse: Japan and Singapore operate under established licensing regimes, while India and China take more restrictive, state-centric approaches.

To the FSB, this regulatory pluralism is not innovation — it’s exposure. The lack of standardized frameworks for risk management, consumer protection, and cross-border enforcement creates vulnerabilities that can spill over into the traditional financial system.

In a market where blockchain transactions flow without borders, inconsistent regulation becomes the new systemic risk.

Regulatory Arbitrage: The Silent Threat

This fragmented environment fuels what the FSB calls “regulatory arbitrage” — the quiet migration of capital, operations, and data to jurisdictions with the weakest oversight.

Stablecoin issuers, decentralized finance (DeFi) platforms, and digital asset exchanges can relocate at the speed of software. For regulators, national boundaries have become lines on a digital map that capital simply ignores.

The result is a patchwork of supervision. Entities can appear compliant in one jurisdiction while operating opaque structures in another. Risk becomes mobile, and accountability becomes ambiguous.

Ironically, this dynamic mirrors the early years of global banking — before coordinated frameworks like Basel III sought to standardize capital rules. Crypto now faces the same evolution: a system outgrowing its regulatory perimeter.

Privacy as a Barrier and a Battleground

One of the FSB’s most striking observations concerns privacy laws. Regulations originally designed to protect individual data are now obstructing global financial oversight.

Cross-border supervision depends on data sharing — but privacy regimes like the EU’s General Data Protection Regulation (GDPR) and similar frameworks in Asia restrict what can be exchanged between authorities.

This creates a paradox:

To monitor crypto markets effectively, regulators need visibility. To protect users’ rights, privacy laws impose opacity.

The collision of these principles reveals a deeper tension between financial transparency and digital sovereignty.

For blockchain advocates, this friction isn’t a flaw — it’s the point. Privacy, pseudonymity, and autonomy were not accidental features of decentralized systems; they were foundational responses to surveillance-based finance.

Now, as regulators push for traceability “from wallet to wallet,” the original ethos of blockchain — self-sovereignty over data and identity — faces its greatest institutional test.

The Expanding Regulatory Perimeter

The FSB’s report marks a turning point: the global regulatory community no longer debates whether crypto needs rules, but how far those rules should reach.

Stablecoins have become the front line. The Bank of England (BoE) recently stated it will not lift planned caps on individual stablecoin holdings until it is confident such assets pose no systemic threat. Meanwhile, the U.S. Federal Reserve has warned that the growth of privately backed digital currencies could undermine monetary policy if left unchecked.

These positions signal that regulators see crypto not as a niche market, but as a parallel financial infrastructure that must be integrated or contained.

Yet, as oversight expands, so does the distance from decentralization’s original promise. The drive to institutionalize crypto — through licensing, capital controls, and compliance standards — risks turning decentralized finance into regulated middleware for the existing system.

The innovation remains, but the autonomy fades.

From Innovation to Integration

What the FSB implicitly acknowledges is that crypto’s mainstreaming is no longer hypothetical. Tokenized assets, on-chain settlement, and programmable money are being adopted by major banks and financial institutions.

However, this adoption often comes with a trade-off: decentralized architecture operated under centralized control.

The example of AMINA Bank — which recently conducted regulated staking of Polygon (POL) under the Swiss Financial Market Supervisory Authority (FINMA) — illustrates this trajectory. The blockchain may remain decentralized in code, but its operation is now filtered through institutional risk, compliance, and prudential oversight.

Crypto is entering a phase of institutional assimilation, where its tools survive but its principles are moderated.

The Ethical Undercurrent: Control vs. Autonomy

At its core, the FSB’s warning is not only about risk but about control. Global regulators see the same infrastructure that enables open, peer-to-peer exchange also enabling opaque, borderless financial activity that escapes accountability.

Their response — standardization and supervision — is rational from a stability standpoint. But it introduces a new ethical question: who governs digital value?

If every decentralized protocol must operate through regulated entities, if every wallet must be traceable, and if every transaction must comply with jurisdictional mandates, then blockchain’s promise of financial self-determination becomes conditional — granted by regulators, not coded by design.

This doesn’t make regulation wrong. It makes it philosophically consequential.

A Call for Coordination, Not Convergence

The FSB’s call for tighter global alignment does not mean a single, monolithic framework. True coordination will require mutual recognition, data interoperability, and respect for jurisdictional privacy laws, not their erosion.

Without this nuance, global harmonization risks turning into regulatory homogenization, where innovation bends entirely to institutional comfort.

A sustainable balance will depend on how regulators treat decentralization:

As a risk to be mitigated, or
As an architecture to be understood and integrated responsibly.

The distinction is subtle but defining.

The Architecture of Financial Sovereignty

The G20’s warning marks a pivotal moment. It is a reminder that the future of digital finance will not be decided by code alone, but by the alignment — or collision — of regulatory philosophies.

Crypto began as a rejection of centralized financial power. It now faces regulation not as an external force, but as an inevitable layer of the system it helped create.

The question ahead is not whether crypto will be regulated. It already is.
The real question is whose definition of sovereignty will prevail — that of the individual, or that of the institution.

About Shyft Network

Shyft Network powers trust on the blockchain and economies of trust. It is a public protocol designed to drive data discoverability and compliance into blockchain while preserving privacy and sovereignty. SHFT is its native token and fuel of the network.

Shyft Network facilitates the transfer of verifiable data between centralized and decentralized ecosystems. It sets the highest crypto compliance standard and provides the only frictionless Crypto Travel Rule compliance solution while protecting user data.

Visit our website to read more, and follow us on X (formerly Twitter), GitHub, LinkedIn, Telegram, Medium, and YouTube. Sign up for our newsletter to keep up-to-date on all things privacy and compliance.

Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network

G20’s Crypto Dilemma: Regulation Without Coordination was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Elliptic’s Typologies Report: Detecting the money flows behind the global pig butchering ecosystem

In recent years, the growing scale and profitability of so-called pig butchering scams has sparked increasing concern among law enforcement and regulatory agencies around the world. 


auth0

Auth0 FGA Logging API: A Complete Audit Trail for Authorization

Discover the new Auth0 Fine-Grained Authorization (FGA) Logging API. Programmatically retrieve a complete audit trail of authorization logs to debug access issues, monitor threats, and ensure compliance.

FastID

DDoS in September

Fastly's September 2025 DDoS report details modern application attacks. Get insights and guidance to strengthen your security initiatives.

Thursday, 16. October 2025

auth0

September 2025 in Auth0: Advanced Security Controls and Auth0 for AI Agents

Explore Auth0's September 2025 product updates, featuring Auth0 for AI Agents, Tenant Access Control List in GA, Dry Run for Auth0 Deploy CLI, and more.

1Kosmos BlockID

What Is Digital Identity Management & How to Do It Right

The post What Is Digital Identity Management & How to Do It Right appeared first on 1Kosmos.

Spruce Systems

Designing Digital Guardianship for Modern Identity Systems

Considerations for how states can responsibly represent parental, custodial, and delegated authority without compromising privacy.

In the move toward more inclusive and privacy-respecting digital government services, guardianship (when one person is legally authorized to act on behalf of another) is a core, but often overlooked, component.

Today, guardianship processes are fragmented across probate court, family court, and agency-level determinations, with no clear mechanism for digital verifications. Without clarity, agencies risk legal challenges if they inadvertently allow the wrong person to act on behalf of a dependent.

Rather than treating guardianship as an abstract capability, we believe states should identify a non-exhaustive list of key use cases they want to enable. For example, a parent accessing school records on behalf of a minor, a guardian applying for healthcare or social services on behalf of a dependent senior adult, or a foster parent temporarily authorized to pick a child up. Each of these may require a different level of assurance, auditability, and inter-agency coordination.

Why Legal Infrastructure Falls Short

Several legal and regulatory barriers may affect the implementation of a state digital identity. At the state level, existing statutes were drafted for physical credentials and may not clearly authorize digital equivalents in all contexts. Without explicit recognition of state digital identity as a legally valid proof of identity, agencies may be constrained in adopting digital credentials for remote service delivery.

This legal ambiguity creates friction for both agencies and residents, limiting the full potential of digital identity solutions.

Mapping Authority: Who Can Issue What, and When

Guardianship in digital identity is a complex and, as yet, unsolved problem. A guardianship solution should accept decisions from the entities legally empowered to make them, represent those decisions in credentials rather than recreating them, and keep endorsements current as circumstances change.

The first step is to enumerate today’s pathways to establishing guardianship and to identify which entities are authorized to issue evidence. This mapping enables cohesive implementation and prevents confusion about who can issue what.

In parallel, a program should also clarify which agencies authorize which actions and what evidence each verifier needs. Where authorities differ, the state can allow agencies to issue guardianship credentials that reflect their scope while still unifying common steps to reduce friction.

A Taxonomy for Real-World Guardianship Scenarios

We believe that states should define a clear guardianship credential taxonomy.

There are multiple ways to define guardianship depending on legal and operational context, such as parental authority, foster care, medical consent, or financial guardianship. This will naturally lead to multiple guardianship credential types, tailored to definitions, use cases, and issuing agencies.
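To make the taxonomy idea concrete, here is a hedged sketch of two distinct guardianship credential types. The field names, issuer identifiers, and scopes below are invented for illustration and are not any published state schema; they only show how different legal contexts could map to different credential types with different scopes and expirations.

```python
# Illustrative only: hypothetical guardianship credential payloads showing how
# distinct credential types could encode different scopes and issuers.
# All field names and DIDs are assumptions, not a published state schema.

PARENTAL = {
    "type": ["VerifiableCredential", "ParentalAuthorityCredential"],
    "issuer": "did:example:state-vital-records",       # hypothetical issuer
    "credentialSubject": {
        "guardian": "did:example:parent",
        "dependent": "did:example:minor",
        "scope": ["school-records", "medical-consent"],
        "expires": "2031-06-01",                       # e.g., age of majority
    },
}

FOSTER = {
    "type": ["VerifiableCredential", "FosterPlacementCredential"],
    "issuer": "did:example:child-welfare-agency",
    "credentialSubject": {
        "guardian": "did:example:foster-parent",
        "dependent": "did:example:minor",
        "scope": ["school-pickup"],                    # narrow, temporary scope
        "expires": "2026-01-15",
    },
}

def authorizes(credential: dict, action: str) -> bool:
    """Check whether a credential's declared scope covers a requested action."""
    return action in credential["credentialSubject"]["scope"]

print(authorizes(PARENTAL, "school-records"))  # True
print(authorizes(FOSTER, "school-records"))    # False
```

A verifier would of course also check signatures, revocation status, and expiration; the point here is only that scoped credential types let each agency authorize exactly the actions within its authority.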

Design for Flexibility and Change

Digital delivery introduces several challenges that the program should address up front. Endorsements need to change cleanly at the age of majority or when a court modifies an order, including a clear transfer of control to the individual. Reissuance and backstops should be specified for lost devices or keys and calibrated to the chosen technical models. 

The design should remain flexible enough to accommodate emerging topics, including AI agent-based interactions, without locking in assumptions that are likely to shift.

Support Human Judgment and Prevent Abuse

The overall system for guardianship should maximize the appropriate, contextualized exercise of human judgment by responsible individuals. Even when protected by cryptography, security measures, and fraud detection, these systems will sometimes fail. They should be designed to prioritize humans and their wellbeing, even in the presence of failures and fraud.

A state digital identity framework should require that as much credential validity information as is appropriate and necessary be made available to the relying party, and that clear indicators of the credential’s current status be available to holders.

It is equally important to prevent abuse of the system. A state must ensure that guardianship credentials cannot be issued or accumulated in ways that could enable fraud, such as one person holding dozens of guardian endorsements to unlawfully access benefits or facilitate trafficking.

The Future of Digital Guardianship

Guardianship in digital identity is not a future problem; it is a present-day requirement. A successful state digital identity framework must support these relationships with clarity, flexibility, and privacy at its core.

SpruceID helps states design systems that reduce the risk of fraud without sacrificing individual autonomy. Contact us to learn more.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Thales Group

Thales Celebrates 60 Years in Mexico, driving technological innovation and local development

Thu, 10/16/2025 - 16:00 | Mexico
16 Oct 2025

Thales, a global leader in advanced technologies, marks 60 years in Mexico, supporting the country’s technological development with solutions in Defence, Aerospace, Cybersecurity, and Digital. With more than 1,300 employees, the company has established a strong industrial footprint, spearheading key strategic projects for national growth. In this milestone year, Thales has proudly received the official “Hecho en México” label from the Mexican government, recognizing products and services that are designed and manufactured locally.

Mexico City, October 15, 2025 – Since 1965, Thales has been part of Mexico’s technological transformation. Today, with over 1,300 employees, it maintains a strong industrial presence that includes two production and personalization centers for payment cards and SIM/eSIM, an Air Traffic Management Service and Integration Center, and a Cyber Academy that trains professionals in cybersecurity. These operations serve not only the domestic market but also customers around the world, positioning Mexico as a strategic hub for the Group.

Over the past six decades, Thales has become an integral part of the daily lives of millions of Mexicans—from every phone call or mobile connection, every card or digital payment transaction, to the safety of their air travel and national defense. Thales’ radars and control centers manage 100% of Mexico’s airspace traffic. Additionally, the Mexican Navy’s Long-Range Oceanic Patrol Vessel (POLA) is equipped with Thales combat systems and sensors.

Thales is present wherever defence, security, and technological innovation are essential to advancing and safeguarding society. This journey has been made possible thanks to the trust of government entities, private companies, institutions, and cities that, for six decades, have chosen Thales as a strategic partner to face critical moments and explore new frontiers with confidence in an increasingly interconnected and complex world. In the face of every challenge, we reaffirm our commitment to building a future we can all trust.

This year, Thales proudly received the “Hecho en México” designation, awarded by the Ministry of Economy, recognizing not only the local origin of its production, but also its ongoing commitment to innovation, job creation, and specialized talent development in the country. This recognition underscores the company’s dedication to Mexico’s growth and global competitiveness.

"We look to the future with the same enthusiasm that marked the beginning of our journey 60 years ago, ready to remain a driver of change and progress in Mexico’s strategic sectors. And what better way to celebrate 60 years in the country than by honoring our people, strengthening national innovation, and reaffirming our commitment to this nation. At Thales, we proudly carry the 'Hecho en Mexico' label, because behind every project, client, and solution, there are Mexican engineers, researchers, and professionals making world-class technological advancements possible," said Analicia García, Country Director of Thales in Mexico.

Thales plays a key role in strengthening Mexico’s defence and security, contributing advanced systems and technologies that help safeguard national sovereignty, protect citizens, and secure critical infrastructure. It is also the leading provider of air traffic management systems in Mexico and a key player in the financial sector, where its cybersecurity and digital identity solutions protect the transactions and sensitive information of millions of citizens, promote trust in the national financial ecosystem, and enhance the country’s resilience against emerging digital threats.

With pride in its legacy and eyes firmly on the future, Thales in Mexico will continue to expand its talent pool, investing in Mexican engineers whose high level of expertise and ability to excel on the international stage are undeniable. The company remains committed to promoting local talent, innovation, and research—solidifying its role as a strategic partner in building a safer, more competitive, and globally connected Mexico.

About Thales in Latin America

With six decades of presence in Latin America, Thales is a global tech leader for the Defence, Aerospace, Cyber & Digital sectors. The Group is investing in digital and “deep tech” innovations – Big Data, artificial intelligence, connectivity, cybersecurity and quantum technology – to build a future we can all trust.

The company has 2,500 employees in the region, across 7 countries - Argentina, Bolivia, Brazil, Chile, Colombia, Mexico and Panama - with ten offices, five manufacturing plants, and engineering and service centres in all the sectors in which it operates.

Through strategic partnerships and innovative projects, Thales in Latin America drives sustainable growth and strengthens its ties with governments, public and private institutions, as well as airports, airlines, banks, telecommunications and technology companies.


LISNR

4 Ways Ultrasonic Proximity Solves the Security-Friction Trade-Off

The Payments Paradox:

The financial services landscape is defined by a relentless drive for frictionless commerce. Yet, the industry remains trapped in a payments paradox: increasing convenience often comes at the expense of security and reliability. The current generation of low-friction solutions, primarily QR codes, are highly susceptible to spoofing and fraud. Conversely, secure methods like NFC are costly, hardware-dependent, and struggle with mass deployment.

This trade-off is untenable.

LISNR has introduced the definitive answer: Radius. By utilizing ultrasonic data-over-sound, Radius provides the industry with the missing link—a secure, hardware-agnostic, and offline-reliable method for token exchange and proximity verification. This technology is not an iteration; it is the strategic shift required to future-proof mobile payments.


The Current Vulnerability and Reliability Gaps

For financial institutions and payment processors, the challenge lies in securing high-value transactions across a fractured ecosystem:

QR Code Spoofing: QR code payments are vulnerable to “quishing” (QR code phishing). A fraudster can easily overlay a malicious code onto a legitimate one, hijacking payments or stealing credentials. This simplicity is its greatest security flaw.

Offline Transaction Liability: In environments with poor connectivity (e.g., transit, emerging markets), most digital wallets revert to a hybrid system where transactions are batched. This exposes merchants to greater fraud liability and introduces a dangerous delay in payment certainty.

Deployment Bottlenecks: Scaling a tap-to-pay solution quickly requires high capital expenditure. The mandatory, dedicated hardware required for NFC makes global deployment slow and expensive, hindering financial inclusion.

Radius: The Strategic Imperative for Payment Modernization

LISNR’s Radius SDK addresses these strategic deficiencies by decoupling transactional security from reliance on hardware and the network. It transforms every device with a speaker and microphone into a secure payment endpoint.

Here are the four non-negotiable benefits of adopting Radius for your payments platform:

1. Absolute Security 

LISNR eliminates the core vulnerability of open-source payment modalities by building security directly into the data transfer protocol.
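To make the idea of keyed payload protection concrete, here is a toy sketch. The actual ToneLock and AES-256 mechanisms are proprietary and are not shown here; this stdlib HMAC-based keystream with an integrity tag merely illustrates why only a receiver holding the shared key can recover, and authenticate, a token.

```python
import hashlib
import hmac
import os

# Toy stand-in for payload obfuscation before acoustic transmission.
# NOT LISNR's actual ToneLock or AES-256 scheme; an HMAC-derived keystream
# plus an integrity tag illustrates the keyed-protection concept only.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream of `length` bytes from key and nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def lock(key: bytes, token: bytes) -> bytes:
    """Obfuscate a token and append an authentication tag."""
    nonce = os.urandom(16)
    body = bytes(a ^ b for a, b in zip(token, _keystream(key, nonce, len(token))))
    tag = hmac.new(key, nonce + body, hashlib.sha256).digest()[:16]
    return nonce + body + tag

def unlock(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then recover the token; fail on tampering or wrong key."""
    nonce, body, tag = blob[:16], blob[16:-16], blob[-16:]
    expected = hmac.new(key, nonce + body, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered payload or wrong key")
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))

key = os.urandom(32)
blob = lock(key, b"payment-token-123")
assert unlock(key, blob) == b"payment-token-123"
```

In the real SDK the locked bytes would be modulated into an ultrasonic tone; here they are simply returned so the round trip can be inspected.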

Spoofing Elimination: ToneLock® uses a proprietary security precaution to obfuscate the payload before transmission. Only receivers with the correct, authorized key can demodulate the tone, making it impossible for unauthorized apps to read or spoof the payment data.

End-to-End Encryption: For the highest security standards, the SDK offers optional, built-in AES 256 Encryption for all payloads, ensuring data remains unreadable.

2. Unrivaled Offline Transaction Certainty

Radius is engineered for mission-critical reliability, ensuring transactions are secure and auditable even when the network fails.

Network Agnostic Reliability: The entire ToneLock and AES 256 Encryption/Decryption process can occur offline. This enables the secure exchange and validation of payment tokens without requiring an active internet connection. Radius ensures instant transaction certainty and lowers merchant liability in disconnected environments.

Bi-Directional Exchange: The SDK supports bidirectional transactions, allowing two devices (e.g., customer wallet and merchant terminal) to simultaneously transmit and receive tones on separate channels. This two-way handshake initiates payment instantly while simultaneously delivering a merchant record to the consumer device.

3. High-Velocity, Zero-Friction Commerce

The speed of a transaction directly correlates with consumer satisfaction and throughput in high-volume settings. Radius accelerates the process with specialized tone profiles.

Rapid High-Throughput: For point-of-sale environments, LISNR offers Point 1000 and Point 2000 tone profiles. These are optimized for sub-1 meter range and engineered for high throughput, enabling near-instantaneous credential exchange for rapid checkout and self-service kiosks.

Seamless User Experience: The process can be almost entirely automated: the user simply opens the app, and the transaction is initiated and verified by proximity, eliminating manual input, scanning, or tapping.

4. Low-Cost, Universal Deployment

Radius is a software-only solution that democratizes access to secure, contactless payment infrastructure.

Hardware-Agnostic: The SDK is integrated into existing applications and requires only a device’s standard speaker and microphone. This removes the need for costly upgrades to POS hardware, dramatically reducing the capital expenditure barrier for global payment modernization.

Scalability: As a software solution, upgrading the entire payment infrastructure is as easy as updating the app. Because there is no new hardware to manage, payment providers can achieve unparalleled scale and speed in deploying secure payment functionality across millions of endpoints instantly.

LISNR is the worldwide leader in proximity verification because our software-first approach delivers the security and reliability the payments industry demands, without sacrificing the frictionless experience consumers expect.

Want to Learn more?

We’d love to learn more about your payment solution and discuss how data-over-sound can help improve your consumer experience. Learn more about our solutions in finance on our website or contact us to set up a meeting. 


The post 4 Ways Ultrasonic Proximity Solves the Security-Friction Trade-Off appeared first on LISNR.


Ockto

More efficient assessments without the hassle: document-free is the new standard


The days when you needed stacks of documents to properly assess a customer are coming to an end. In a world where speed, compliance, and customer satisfaction matter more than ever, working with PDFs, attachments, and manual checks is no longer sustainable. In the credit management sector in particular, the old process leads to delays, errors, and frustration, for the customer and the organization alike.


Ontology

Building What Matters

The Future of Web3 Communities

Everyone in Web3 talks about community. It is the word every project uses. The badge everyone wears. But what does it actually mean?

Too often, “community” becomes a checkbox. A Telegram channel. A Discord server with NFT giveaways. Some quick incentives to drive engagement. It looks alive, but it is often built on borrowed attention. When the rewards stop, so does the activity.

That is not a community. That is marketing.

Real community building is slower. It is harder. It is the process of aligning people who build with people who use what is built. It is finding the point where incentives and intention meet. Because incentives bring people in, but intention keeps them there.

Ontology has been working at this intersection for years. Its ecosystem, Ontology Network, ONT ID, ONTO Wallet, and Orange Protocol, is designed to make digital identity, reputation, and ownership usable. The mission is not to promise a new world. It is to build the tools that make that world functional.

The challenge, and the opportunity, lies in connection. How do we connect the builders who create new infrastructure with the users who actually need it? How do we make sure that what gets built is not only possible, but wanted?

The Two Paths to Community

There are two basic ways to grow a Web3 community.

The first is bottom-up. Builders and users start together, often from an open-source idea or shared need. Growth is organic. The intent is pure. It can lead to real innovation, but it often lacks structure. Without incentives or direction, momentum slows. Projects fade before reaching scale.

The second is top-down. A project defines the mission, creates incentives, and drives participation. This works in the short term. It brings clear goals and resources. But it risks becoming transactional. When participation is driven only by reward, genuine buy-in disappears.

Ontology’s view is that neither path works alone. Bottom-up builds belief. Top-down brings clarity. The right approach mixes both. You need intent to guide action, and incentives to accelerate it.

Incentives Are Not the Enemy

Incentives get a bad reputation in Web3, mostly because they are often misused. Too much focus on token rewards can distort priorities. But incentives are not the problem. Misalignment is.

Used correctly, incentives can do what they are meant to do: attract attention, reward effort, and encourage collaboration. They should not replace purpose. They should amplify it.

A healthy Web3 community does not reward speculation. It rewards contribution. The best projects find ways to recognize value that is created, not just traded. That is where Ontology’s focus on verifiable identity and reputation becomes powerful.

Through tools like ONT ID and Orange Protocol, participants can prove who they are and what they have done. This makes contribution measurable. It lets communities recognize real participation, not just noise. Builders can see who their users are. Users can trust who they are working with.

That is how you turn incentives from a gimmick into a growth engine.

What People Need vs. What People Want

Every product in Web3 faces a simple question: do people need it, or do they want it?

The truth is that need alone is not enough. People need security, privacy, and control of their data, but they rarely act on those needs until they want the solution. Want drives action.

At the same time, want without need leads to hype. Short-term excitement, no lasting value.

The strongest projects meet both. They make people want what they need. That is the balance Ontology’s tools aim to strike. Identity and reputation are not new ideas, but in Web3 they become essential. Users are learning that decentralized identity is not just a feature. It is freedom. It is usability.

When developers build with that in mind, they create products that solve real problems. ONTO Wallet gives users control of their assets and identity in one place. Orange Protocol turns reputation into a building block for trust. ONT ID lets applications integrate secure, verifiable identity without friction.

These are not abstract innovations. They are the foundation for the next generation of apps, games, and communities.

The Bridge Between Builders and Users

Community building in Web3 is not just about size. It is about structure. Builders and users need to meet in the middle.

That is where Ontology wants to focus: creating spaces and systems where developers and users can collaborate directly. Builders should understand what users need before they design. Users should influence what gets built. The result is not just adoption, but alignment.

How that happens can vary. Incubators can bring early projects into focus. Incentives can reward experimentation. Retrospective funding can support what already works. The structure is flexible. The principle is constant. Connect intent with incentive.

Ontology’s ecosystem gives that structure a home. It already supports tools for identity, data, and trust. The next step is bringing those who build with those who use. Because Web3 only scales when both sides grow together.

From Incentives to Intent

The early years of Web3 were about speculation. The next phase is about utility. The projects that last will be the ones that shift from short-term incentives to long-term intent.

That means building for real people, not just wallets. It means communities where participation has meaning, and contribution has visibility. It means giving users a reason to stay even when rewards change.

Ontology’s technology is ready for that shift. But technology alone is not enough. It needs people. Builders who see the value of decentralized identity and reputation. Users who want control and trust. Contributors who believe in open collaboration.

The future of Web3 will not be built by one group or the other. It will be built by both, together.

The Next Step

If the goal of Web3 is freedom, then community is the mechanism that gets us there. Not through marketing or speculation, but through shared purpose.

Ontology is ready to help build that future. To connect the developers who create with the users who validate. To make collaboration not just possible, but natural.

It starts by asking the right question: what do people need, and what do they want? Then building where those answers overlap.

Let us bring you together. Builders, meet your users. Users, meet your builders. The next phase of Web3 begins with both.

Building What Matters was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


ComplyCube

19 Virtual Asset Providers Fined up to $163,000 by Dubai Regulators


Nineteen Virtual Asset firms in Dubai have been charged with penalties amounting to $163,000. These firms were fined for operating without a Virtual Assets Regulatory Authority (VARA) license and breaching Dubai's marketing rules.

The post 19 Virtual Asset Providers Fined up to $163,000 by Dubai Regulators first appeared on ComplyCube.


Recognito Vision

Why ID Verification Services Are the Smart Choice for Businesses Verifying Customers


You know that moment when a new app asks for your ID and selfie before letting you in? You sigh, snap the photo, and in seconds it says “You’re verified!” It feels simple, but behind that small step sits an advanced system called ID verification services that keeps businesses safe and fraudsters out.

In today’s digital world, identity verification isn’t a luxury. It’s a necessity. Without it, online platforms would be a playground for scammers. That’s why more companies are turning to digital ID verification to secure their platforms while keeping user experiences smooth and fast.


How ID Verification Evolved into a Digital Superpower

Not too long ago, verifying someone’s identity meant visiting a bank, filling out forms, and waiting days for approval. It was slow and painful. Today, online identity verification has turned that ordeal into a 10-second selfie check.

| Feature | Traditional ID Checks | Digital ID Verification |
|---|---|---|
| Time | Days or weeks | Seconds or minutes |
| Accuracy | Prone to human error | AI-powered precision |
| Accessibility | In-person only | Anywhere, anytime |
| Security | Paper-based | Encrypted and biometric |

According to a Juniper Research 2024 report, businesses using digital identity checks have reduced onboarding times by 55% and cut fraud by nearly 40%. That’s not an upgrade, that’s a revolution.


How ID Verification Services Actually Work

It looks easy on your screen, but behind the scenes, it’s like a full orchestra performing perfectly in sync. When you upload your ID, OCR technology instantly extracts your details. Then, facial recognition compares your selfie to the photo on your document, while an ID verification check cross-references the data with secure global databases.
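The three checks described above can be sketched as a pipeline. The helper functions below are hypothetical placeholders invented for illustration; a real service would call an OCR engine, a face-matching model, and issuing-authority databases instead of returning canned values.

```python
# Skeleton of an ID verification pipeline: OCR extraction, face matching,
# and a database cross-check. All three engines are stubbed out; the function
# names and return values are assumptions for illustration only.

def extract_fields(id_image: bytes) -> dict:
    # Stub: a real implementation would run OCR on the document image.
    return {"name": "Jane Doe", "doc_number": "X1234567"}

def match_face(selfie: bytes, id_image: bytes) -> float:
    # Stub: a real implementation would return a biometric similarity score.
    return 0.97

def check_registry(fields: dict) -> bool:
    # Stub: a real implementation would query secure external databases.
    return bool(fields.get("doc_number"))

def verify(selfie: bytes, id_image: bytes, threshold: float = 0.9) -> bool:
    """Pass only if the face match clears the threshold AND the document checks out."""
    fields = extract_fields(id_image)
    return match_face(selfie, id_image) >= threshold and check_registry(fields)

print(verify(b"selfie-bytes", b"id-bytes"))  # True with these stubbed scores
```

The design point is that the checks are independent: a strong selfie match cannot compensate for a document that fails the registry check, and vice versa.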

All this happens faster than your coffee order at Starbucks. And yes, it’s fully encrypted from start to finish.

If you want to see how global accuracy standards are tested, visit the NIST Face Recognition Vendor Test (FRVT). This benchmark helps developers measure the precision of their facial recognition algorithms.


Why Businesses Are Making the Shift

Let’s be honest, no one likes waiting days to get verified. Businesses know that, and users expect speed. So, they’re shifting from manual checks to identity verification solutions that deliver results in real time.

ID verification software gives businesses an edge by:

Cutting down on manual reviews

Reducing fraud risks through AI analysis

Staying compliant with rules like GDPR

Enhancing global accessibility

A McKinsey & Company study found that businesses using automated ID verification checks experienced up to 70% fewer fraudulent sign-ups. Another Gartner analysis (2023) reported that automation in verification reduces onboarding costs by over 50%.

So, businesses aren’t just going digital for fun; they’re doing it to stay alive in a market where users expect instant trust.


The Technology Making It All Possible

Every smooth verification hides some serious tech genius. Artificial intelligence detects tampered IDs or fake lighting, while machine learning improves detection accuracy over time. Facial recognition compares live selfies to document photos, even if your hair color or background lighting changes.

The FRVT 1:1 results show that today’s best facial recognition models are over 20 times more accurate than they were a decade ago, according to NIST.

Optical Character Recognition (OCR) handles the text on IDs, and encryption ensures data privacy. It’s these small but powerful innovations that make modern ID document verification fast, secure, and scalable.

Want to explore real-world tech examples? Visit the Recognito Vision GitHub, where you can see how advanced verification systems are built from the ground up.


Why It’s a Smart Investment

Investing in reliable ID verification solutions isn’t just about compliance, it’s about building customer trust. When users feel safe, they’re more likely to finish sign-ups and come back.

According to Statista’s 2024 Digital Trust Report, companies using digital identity verification saw conversion rates increase by 30–35%. That’s because users today value both speed and security.

So, when you invest in this technology, you’re not just protecting your business. You’re giving users the confidence to engage without hesitation.

Where ID Verification Shines

The beauty of user ID verification is that it works across every industry. It’s not just for banks or fintech startups.

In finance, it prevents money laundering and fraud.

In healthcare, it confirms patient identities for telemedicine.

In eCommerce, it helps fight fake orders and stolen cards.

In gaming, it enforces age restrictions.

In ridesharing and rentals, it keeps both parties safe.

According to a 2022 IBM Security Study, 82% of users say they trust companies more when those companies use digital identity checks. That’s how powerful this technology is; it builds credibility while keeping everyone safe.


Recognito Vision’s Role in Modern Verification

For businesses ready to step into the future, Recognito Vision makes it simple. Their ID document recognition SDK helps developers integrate verification directly into apps, while the ID document verification playground lets anyone test the process firsthand.

Recognito’s platform blends AI accuracy, fast processing, and user-friendly design. The result? Businesses verify customers securely while users hardly notice it’s happening. That’s efficiency at its best.


Challenges to Consider

Of course, nothing’s perfect. Some users hesitate to share IDs online, and global documents come in thousands of formats. Integrating verification tools into older systems can also feel tricky.

However, choosing a trustworthy ID verification provider can solve most of these issues. As Gartner’s 2024 Cybersecurity Trends Report points out, companies that adopt verified digital identity frameworks see significantly fewer data breaches than those using manual checks.

So while there are challenges, the benefits easily outweigh them.


The Road Ahead

The next phase of digital identity verification is all about control and privacy. Imagine verifying yourself without even sharing your ID. That’s what decentralized identity systems and zero-knowledge proofs are bringing to life.
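To make that idea concrete, here is a vastly simplified selective-disclosure sketch using a salted hash commitment. This is not a zero-knowledge proof and not any production protocol; it only shows the building-block idea of a verifier checking one attribute without ever seeing the full ID document.

```python
import hashlib
import os

# Vastly simplified sketch: a salted hash commitment lets a user later reveal
# a single attribute ("over_18": true) without sharing the full ID.
# Real decentralized-identity systems use signatures and zero-knowledge
# proofs; this only illustrates the selective-disclosure concept.

def commit(attribute: str, value: str, salt: bytes) -> str:
    """Bind an attribute/value pair to a digest using a secret salt."""
    return hashlib.sha256(salt + f"{attribute}={value}".encode()).hexdigest()

# At issuance: the issuer commits to the attribute and publishes the digest.
salt = os.urandom(16)
digest = commit("over_18", "true", salt)

# At verification: the user reveals only this attribute plus its salt;
# the verifier recomputes the digest and compares.
assert commit("over_18", "true", salt) == digest    # accepted
assert commit("over_18", "false", salt) != digest   # rejected
```

The salt prevents a verifier from guessing values by brute force over a small attribute space; a true zero-knowledge proof would go further and avoid revealing even the salt and value.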

According to the PwC Global Economic Crime Report 2024, widespread digital ID verification could save over $1 trillion in fraud losses by 2030. That’s not science fiction, it’s happening right now.

The world is heading toward frictionless, instant trust. And businesses that adopt early will lead the pack.


Final Thoughts

At its core, ID verification services aren’t just about checking who someone is. They’re about creating confidence for users, for businesses, and for the digital world as a whole.

If you’re a company ready to modernize and protect your platform, explore Recognito Vision’s identity verification solutions. Because in an era of deepfakes, scams, and cyber tricks, the smartest move is simply knowing who you’re dealing with safely, quickly, and confidently.


Frequently Asked Questions


1. What are ID verification services and how do they work?

ID verification services confirm a person’s identity by analyzing official ID documents and matching them with facial or biometric data using AI technology.


2. Why are ID verification services important for businesses?

They help businesses prevent fraud, comply with KYC regulations, and build customer trust through secure and fast verification processes.


3. Is digital ID verification secure for users?

Yes, digital ID verification is highly secure because it uses encryption, biometric checks, and data protection standards to keep user information safe.


4. How do ID verification services help reduce fraud?

They detect fake or stolen IDs, verify real users instantly, and prevent unauthorized access, reducing fraud risk significantly.


5. What should businesses look for in an ID verification provider?

Businesses should look for providers that offer fast results, global document support, strong data security, and full regulatory compliance.

Wednesday, 15. October 2025

Anonym

DVAM 2025: MySudo discount for survivors of domestic violence


October is National Domestic Violence Awareness Month (DVAM), an annual event dedicated to shedding light on the devastating impact of domestic violence and advocating for those affected. 

The theme for DVAM 2025 is With Survivors, Always, which explores what it means to be in partnership with survivors towards safety, support, and solidarity.

Anonyome Labs stands #WithSurvivors this National Domestic Violence Awareness Month and every day—and is proud to help empower safety through privacy for survivors of domestic violence via our Sudo Safe Initiative.

What is the Sudo Safe Initiative?

The Sudo Safe Initiative is a program developed to bring privacy to those at higher risk of verbal harassment or physical violence.

Sudo Safe offers introductory discounts on the MySudo privacy app, to help people to keep their personally identifiable information private.

You can get a special introductory discount to try MySudo by becoming a Sudo Safe Advocate.

Here’s how it works:

1. Visit our website at anonyome.com.
2. Sign up to be a Sudo Safe Advocate — it’s quick and easy.
3. Once you’re signed up, you’ll receive details on how to access your exclusive discount and start using MySudo.

In addition to survivors of domestic violence, the Sudo Safe Initiative also empowers safety through privacy for:

Healthcare professionals
Teachers
Foster care workers
Volunteers
Survivors of violence, bullying, or stalking.

How can MySudo help survivors of domestic violence?

MySudo allows people to communicate with others without using their own phone number and email address, to reduce the risk of that information being used for tracking or stalking.

With MySudo, a user creates secure digital profiles called Sudos. Each Sudo has a unique phone number, handle, and email address for communicating privately and securely.

The user can avoid making calls and sending texts and emails from their personal phone line and email inbox by using the secure alternative contact details in their Sudos.

No personal information is required to create an account with MySudo through the app stores. 

Download MySudo

Four other ways to help survivors of domestic violence

Educate yourself and others

Learn and share the different types of abuse (physical, emotional, sexual, financial, and technology-facilitated) and how to find local resources and support services. 

Listen without judgment

One of the most powerful things you can offer a domestic violence survivor is support, by doing things like:

Creating a safe space for them to share their experiences without fear of judgment or blame
Letting them express their feelings while validating their emotions
Being willing to listen
Helping them create a safety plan.

Encourage professional support

Encourage your friend or family experiencing domestic violence to seek help from counselors, therapists, or support groups that specialize in trauma and abuse. You can assist by researching local resources, offering to accompany them to appointments, or helping them find online support communities. Professional guidance can provide victims with the tools they need to rebuild their lives.

Raise awareness and advocate for change

Support survivors not just during DVAM, but year-round. Find ideas here and learn about the National Domestic Violence Awareness Project.

Become a Sudo Safe Advocate

If your organization can help us spread the word about how MySudo allows at-risk people to interact with others without giving away their phone number, email address, and other personal details, we invite you to become a Sudo Safe Advocate.

As an advocate, you’ll receive:

A toolkit of shareable privacy resources
A guide to safer communication
Special MySudo promotions
Your own digital badge.

Become a Sudo Safe Advocate today.

More information

Contact the National Domestic Violence Hotline.

Learn about the National Domestic Violence Awareness Project.

Learn more about Sudo Safe Initiative and Anonyome Labs.

Anonyome Labs is also a proud partner of the Coalition Against Stalkerware.

The post DVAM 2025: MySudo discount for survivors of domestic violence appeared first on Anonyome Labs.


HYPR

HYPR Delivers the First True Enterprise Passkey for Microsoft Entra ID


For years, the promise of a truly passwordless enterprise has felt just out of reach. We’ve had passwordless for web apps, but the desktop remained a stubborn holdout. We’ve seen the consumer world embrace passkeys, but the solutions were built for convenience, not the rigorous security and compliance demands of the enterprise. This created a dangerous gap, a world where employees could access a sensitive cloud application with a phishing-resistant passkey, only to log in to their workstation with a phishable password.

That gap closes today.

HYPR is proud to announce our partnership with Microsoft to deliver the industry's first true enterprise-grade passkey solution. By integrating HYPR’s non-syncable, FIDO2 passkeys directly with Microsoft Entra ID, we are finally eliminating the last password and providing a unified, phishing-resistant authentication experience from the desktop to the cloud.

What is the Difference Between Enterprise and Other Passkeys?

The term "passkey" has become a buzzword, but not all passkeys are created equal. The synced, consumer-grade passkeys offered by large tech providers are a fantastic step forward for the public, but they present significant challenges for the enterprise:

Loss of Control: Synced passkeys are stored in third-party consumer cloud accounts, outside of enterprise control and visibility.
Security Gaps: They are designed to be shared and synced by users, which can break the chain of trust required for corporate assets.
The Workstation Problem: They do not natively support passwordless login for enterprise workstations (Windows/macOS), leaving the most critical entry point vulnerable.

For the enterprise, you need more than convenience. You need control, visibility, and end-to-end security. You need an enterprise passkey.

Introducing HYPR Enterprise Passkeys for Microsoft Entra ID

HYPR’s partnership with Microsoft directly addresses the enterprise passkey gap. Our solution is purpose-built for the demands of large-scale, complex IT environments that rely on Microsoft for their identity infrastructure.

This isn't a retrofitted consumer product. It's a FIDO2-based, non-syncable passkey that is stored on the user's device, not in a third-party cloud. This ensures that your organization retains full ownership and control over the credential lifecycle.

With a single, fast registration, your employees can use one phishing-resistant credential to unlock everything they need:

Passwordless Desktop Login: Users log in to their Entra ID-joined Windows workstations using the HYPR Enterprise Passkey on their phone. No password, no phishing, no push-bombing.
Seamless SSO and App Access: That same secure login event grants them a Primary Refresh Token (PRT), seamlessly signing them into all their Entra ID-protected applications without needing to authenticate again.

Why Is This a Game-Changer for Microsoft Environments?

This partnership isn't just about adding another MFA option; it's about fundamentally upgrading the security posture of your entire Microsoft ecosystem.

Effortless Deployment: Go Passwordless in Days, Not Quarters

You’ve invested heavily in the Microsoft ecosystem. Now, you can finally maximize that investment by eliminating the #1 cause of breaches: the password. The HYPR and Microsoft partnership makes true, end-to-end passwordless authentication a reality.

There are no complex federation requirements, no painful certificate management, and no AD dependencies. It's a simple, lightweight deployment that allows you to roll out phishing-resistant MFA across your entire workforce in days, not quarters.

Empower your employees with fast, frictionless access that works everywhere they do. And empower your security team with the control and assurance that only a true enterprise passkey can provide.

Ready to bring enterprise-grade passkeys to your Microsoft environment? Schedule your personalized demo today.

Enterprise Passkey FAQ

Q: What is a "non-syncable" passkey?

A:  A non-syncable passkey is a FIDO2 credential that is bound to the user's physical device and cannot be copied, shared, or backed up to a third-party cloud. This provides a higher level of security and assurance because the enterprise maintains control over where the credential resides.

Q: How is this different from using an authenticator app for MFA?

A: Authenticator apps that use OTPs or push notifications are still susceptible to phishing and push-bombing attacks. HYPR Enterprise Passkeys are based on the FIDO2 standard, which is cryptographically resistant to phishing, man-in-the-middle, and other credential theft attacks.
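The key difference is that a FIDO2 response is bound to the origin it was created for, so a relayed challenge from a look-alike site verifies against the wrong origin and fails. Here is a toy sketch of that idea; HMAC stands in for the device’s public-key signature, and all names are illustrative rather than HYPR’s or the WebAuthn API:

```python
import hmac, hashlib

# Stand-in for the device-bound credential; real FIDO2 uses asymmetric keys
DEVICE_KEY = b"device-bound-secret"

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    # The authenticator bakes the origin it is actually talking to into the response
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, assertion: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = b"random-server-challenge"

# Legitimate login: origins match, verification succeeds
assert verify(challenge, "https://bank.example",
              sign_assertion(challenge, "https://bank.example"))

# Phishing site relays the same challenge, but the origin in the response differs
assert not verify(challenge, "https://bank.example",
                  sign_assertion(challenge, "https://evil.example"))
```

An OTP, by contrast, carries no origin information, which is why it can be relayed by a phishing page in real time.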

Q: What does the deployment process look like?

A: Deployment is designed to be fast and lightweight. It involves deploying the HYPR client to workstations and configuring the integration within your Microsoft Entra ID tenant. Because there are no federation servers or complex certificate requirements, many organizations can go from proof-of-concept to production rollout in a matter of days.

Q: Does this support Bring-Your-Own-Device (BYOD) scenarios?

A: Yes. The solution is vendor-agnostic and supports both corporate-managed and employee-owned (BYOD) devices, providing a simple, IT-approved self-service recovery flow that keeps users productive without compromising security.


Ocean Protocol

CivicLens: Building the First Structured Dataset of EU Parliamentary Speeches

CivicLens: Building the First Structured Dataset of EU Parliamentary Speeches

A new Annotators Hub challenge

The European Parliament generates thousands of speeches, covering everything from local affairs to international diplomacy. These speeches shape policies that impact millions across Europe and beyond. Yet, much of this discourse remains unstructured, hard to track, and difficult to analyze at scale.

CivicLens, the second and latest task in the Annotators Hub, invites contributors to help change that. Together with Lunor, Ocean is building a structured, research-grade dataset based on real EU plenary speeches. Your annotations will support civic tech, media explainers, and political AI, and will give you the chance to earn a share of the $10,000 USDC prize pool.

What you’ll do

You’ll read short excerpts from speeches and answer a small set of targeted questions:

Vote Intent: Does the speaker explicitly state how they will vote (yes/no/abstain/unclear)?
Tone: Is the rhetoric cooperative, neutral, or confrontational?
Scope of Focus: Is the emphasis on the EU, the speaker’s country, or both?
Verifiable Claims: Does the excerpt contain a factual, checkable claim (flag and highlight the span)?
Topics (multi-label): e.g., economy, fairness/rights, security/defense, environment/energy, governance/procedure, health/education, technology/industry.
Ideological Signal (if any): Is there an inferable stance or framing (e.g., pro-integration, national interest first, market-oriented, social welfare-oriented), or no clear signal?

Each task follows a consistent schema with clear tooltips and examples. Quality is ensured through overlap assignments, consensus checks, and spot audits.
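A single annotation under this schema might look like the record below. The field names and values are illustrative only, not the official Lunor task schema:

```python
# Hypothetical shape of one validated annotation record
annotation = {
    "vote_intent": "unclear",            # yes | no | abstain | unclear
    "tone": "confrontational",           # cooperative | neutral | confrontational
    "scope": "eu",                       # eu | country | both
    "verifiable_claim": {"present": True, "span": [14, 62]},  # character offsets
    "topics": ["economy", "environment/energy"],              # multi-label
    "ideological_signal": "pro-integration",                  # or None
}

# Basic sanity checks of the kind consensus/audit pipelines might run
assert annotation["vote_intent"] in {"yes", "no", "abstain", "unclear"}
assert annotation["tone"] in {"cooperative", "neutral", "confrontational"}
```

Keeping every record in one consistent shape is what makes overlap assignments and consensus checks possible downstream.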

Requirements

Good command of written English (reading comprehension and vocabulary)
Ability to recognize when political or ideological arguments are being made
Basic understanding of common political dimensions (e.g., left vs. right, authoritarian vs. libertarian)
Minimum knowledge of international organizations and relations (e.g., what the EU is, roles of member states)
Awareness of what parliamentary speeches are and their general purpose in the context of EU roll call votes

Why it matters

Your contributions will help researchers and civic organizations better understand political debates, predict voting behavior, and make parliamentary discussions more transparent and accessible.

The resulting dataset isn’t just for political analysis; it has broad, real-world applications:

Fact-checking automation: AI models trained on this data can learn to distinguish checkable assertions from opinions or vague claims, helping organizations like PolitiFact, Snopes, or Full Fact prioritize their verification workload
Compliance and policy tracking: Financial compliance platforms, watchdog groups, and regtech firms can detect and monitor predictive or market-moving statements in political and economic discourse
Content understanding and education: News aggregators, summarization tools, and AI assistants (like Feedly or Artifact) can better tag and summarize political content. The same methods can also power educational apps that teach critical thinking and media literacy

Rewards

A total prize pool of $10,000 USDC is available for contributors.

Rewards are distributed linearly based on validated submissions, using the formula:

Your Reward = (Your Score ÷ Sum of All Scores) × Total Prize Pool

The higher the quality and volume of your accepted annotations, the higher your share.
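The linear payout rule above is simple enough to sketch directly. The contributor names and scores below are made up for illustration:

```python
def linear_rewards(scores: dict, pool: float) -> dict:
    """Split a prize pool proportionally to each contributor's validated score."""
    total = sum(scores.values())
    return {name: pool * score / total for name, score in scores.items()}

# Hypothetical validated scores for three contributors and a $10,000 pool
payouts = linear_rewards({"ann": 300, "ben": 150, "cai": 50}, 10_000)
# ann receives 6000.0, ben 3000.0, cai 1000.0
```

Because the split is proportional, doubling your validated score relative to everyone else doubles your share of the pool.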

For full participation details, submission rules, and instructions, visit the quest details page on Lunor Quest.

CivicLens: Building the First Structured Dataset of EU Parliamentary Speeches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Infocert

VPN for lawyers, labour consultants, accountants

Lawyers, labour consultants, accountants: 5 practical ways in which a Business VPN can protect your work and data

 

Are you a lawyer working away? A smartworking accountant? Do you provide consulting services at clients’ premises? If so, read this article to learn why you should use a Business VPN to connect to a network other than your own. A VPN for business is a valuable professional ally because it helps protect highly sensitive information while guaranteeing secure remote access to professional content wherever you are, even abroad.

 

So, what exactly can a VPN – a virtual private network – do for you when you work remotely? Here are five practical ways in which using a VPN for remote work can make a difference to professionals and small businesses.

Work from home security

You are working from home and, as always, you have to access business management systems, dashboards, customer and supplier databases. You may also need to consult or send confidential documents like balance sheets, contracts and court procedures. You even have crucial calls and meetings on your agenda to finalise agreements or submit reports. To do all this, you rely on your home router and perhaps use your own laptop or smartphone. Without a VPN to protect your connection, your home network can become a point of vulnerability – a potential entry point for eavesdropping and data breaches. Have you ever thought what would happen if all the information you work with were to fall into the wrong hands? Your clients’ confidentiality, the security of your work and your own professional reputation would be severely compromised.

 

A VPN creates an encrypted and therefore secure tunnel between your device and company servers, ensuring cybersecurity and protecting resources and internal communications. In this way, even remotely sharing files with co-workers or customers is absolutely secure. Many premium VPNs also offer additional security tools that protect you from malware, intrusive advertisements, dangerous sites and trackers, and warn you in case of data leaks.

 

Public Wi-Fi security

On a business trip, you are highly likely to use hotel or airport lounge Wi-Fi to complete a presentation or access your corporate cloud. What could happen without a VPN? Imagine you are waiting for your flight and want to check your email. The moment you connect to the public network and access your mail server, a hacker intercepts your traffic, reads your email and steals your login credentials. You don’t know it, but you have just suffered what is called a man-in-the-middle attack. With a virtual private network, no hacker can see what you do online, even on open Wi-Fi networks.

Accessing national services and portals, even abroad

If you are abroad and need to access essential national websites, portals and services like National Insurance, Inland Revenue, or corporate intranets, you may encounter access limitations and geo-blocking. This is because, for security reasons, some public portals and corporate networks choose to restrict access from foreign IPs. In some cases, the site may not function properly or may not show certain sections.

 

In these cases, a VPN is absolutely indispensable. Irrespective of where you are physically located, all you need to do is connect to a server in another country to simulate a presence there, bypass geo-blocking and gain access to the content you want, while still enjoying an encrypted and protected connection.

Privacy and data security

This aspect is often overlooked. Surfing online without adequate protection endangers the security not only of your own information but also that of your employees, collaborators, suppliers and customers, risking potentially enormous economic and reputational damage.

 

If you think data breaches only concern big tech companies like Meta, Amazon and Google, you are wrong. Very often hackers and cybercriminals choose to target professional firms or small businesses that fail to pay attention to IT security, underestimating the need for proper tools and protective infrastructures to prevent data breaches.

 

When dealing with sensitive data, health, legal or financial information on a daily basis, keeping it secure is not just common sense in today’s fully digitalised world, but a legal duty.

 

Data privacy is as crucial for individuals as it is for companies, because it represents a key element of protection, trust and accountability. It means maintaining control over your personal information and protecting yourself against abuse or misuse that may damage brand reputation or personal security.

 

Using a VPN for business travel is one of the tools that cybersecurity experts recommend to protect privacy and client data, since, as we have seen, VPNs change your IP address and encrypt your Internet connection, preventing potential intrusions.

Access to international websites and content

If you work with international customers or suppliers, a virtual private network is indispensable. As we have seen, for security reasons, some institutional and professional sites and portals restrict access based on your geographical location. With a VPN, you can simulate your presence in a country other than the one in which you are physically located.

For instance, do you ever need to consult public registers or legal databases in non-EU countries, access tax or customs portals, use SaaS software for foreign markets or monitor the pricing strategies of foreign competitors by accessing local versions of their sites? With a VPN you only need to connect to the server of the country or geographical area you are interested in to bypass geo-blocking and access the resources you need.

 

Whatever your profession, whatever the size of your company, and wherever you are, a VPN is indispensable to the security and privacy of your work.

The post VPN for lawyers, labour consultants, accountants appeared first on Tinexta Infocert international website.


VPN: a non-technical guide for professionals

What is a VPN? A non-technical guide for professionals

We have been living in a vast digital workplace for some time now, a permanently connected environment that transcends the boundaries of the traditional office to include the sofa at home, airport lounges, hotel rooms, coffee shops and train carriages. In this fluid and constantly evolving digital space, you read the news, shop online, download apps, participate in calls and meetings, answer emails, access sensitive data, perform banking transactions, and more besides, on a daily basis. But do you ever wonder what happens to your data while you are online? Are you really in control of the information you share, the sites you visit, and the actions you take? Spoiler: a large number of others can see what you do during your daily visits to the Internet. Unless, of course, you use a VPN – a Virtual Private Network to protect your Internet connection and online privacy. So, how does a VPN work? A VPN acts as a vigilant and attentive guardian to protect you from prying eyes and malicious attacks.

Who can see what you do online?

Though it might seem so, surfing online is by no means private. Every click you make leaves a trace. These traces form what is called a “digital shadow” or fingerprint. Every time you “touch” something online, many actors monitor, collect or intercept what you do. Who are these people?

 

1. Your Internet Service Provider (ISP): your provider can track all the sites you visit, when you visit them, and for how long. Not only that, but your provider may store and share certain information with third parties (not only the police and judicial authorities, but even advertisers) for a variable period of time, depending on the type of content, the consent you have given, internal policies and legislation (national and European). In Italy, for example, Internet service providers may retain certain data for up to 10 years.

 

2. Network administrators: if you connect to corporate or public Wi-Fi, e.g. a hotel network, the network administrator can monitor traffic on that network and thus gain access to information about your online activities.

 

3. Websites and online platforms: many sites collect browsing data, including through cookies (just think of all those pop-ups that constantly interrupt your browsing), pixels and trackers. This allows them to profile you in order to show you personalised advertisements or sell your data to third parties.

 

4. Search engines: if you use a traditional search engine like Google, Bing or Yahoo, everything you do is traceable – even if you use “Incognito mode”. If you want to keep your searches private, we suggest using non-traceable search engines such as DuckDuckGo, Qwant, Startpage or Swisscows.

 

5. Hackers and criminals: surfing online exposes you to daily risks, especially when you choose to connect to unprotected public Wi-Fi networks or surf without the use of security tools like antivirus software, VPNs or anti-malware tools. Credentials, emails, bank details, even your identity, are valuable commodities.

The Internet is not a private house; it is a public square.

Every time you connect to the Internet, your device uses an Internet Protocol (IP) address, which can reveal not only your online identity, but also the location from which you connect. Technically, an IP address is a numerical label assigned by the Internet service provider. Because it is used to identify individual devices among billions of others, it can be regarded as a postal address in the digital world.

 

When you enter the name of a website (example.com) in your browser’s address bar, your computer has to perform certain operations because it cannot actually read words, only numbers. First of all, the browser locates the IP address corresponding to the site you want (example.com = 192.168.1.1), then, once the location is found, it loads the site onto the screen. An IP address functions like a home address, ensuring that data sent over the Internet always reaches the correct destination.
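You can watch this name-to-number lookup happen with a couple of lines of Python. The example uses "localhost" so it works without network access; a real site resolves to its own public address rather than 192.168.1.1, which was just an illustrative label above:

```python
import socket

# Resolve a hostname to its numeric IP address, as the browser does behind the scenes
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1, the loopback address
```

Swapping "localhost" for any public domain (with a network connection) returns that site’s current IP address instead.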

 

This identifier is visible to all the subjects listed above.

 

Not only that, but the information you routinely exchange online – passwords, emails, documents and sensitive data – often travel in “plaintext” i.e. without being encrypted. This means that anyone who manages to intercept them on their way through the network can read or copy them. Think of sending a postcard: anyone intercepting it on the way can read its contents, your name, the recipient’s address and so on. The same happens with your online data. Not using adequate protection systems, like a VPN, is like leaving your front door open. Would you ever do that?

How does a VPN work?

Typically, when you attempt to access a website, your Internet provider receives the request and directs it straight to the desired destination. A VPN, however, directs your Internet traffic through a remote server before sending it on to its destination, creating an encrypted tunnel between your device and the Internet. This tunnel not only secures the data you send and receive, but also hides it from outside eyes, providing you with greater privacy and online security. A VPN also changes your real IP address (i.e. your digital location), e.g. Milan, and replaces it with that of the remote server you have chosen to connect to, e.g. Tokyo. In this way, no one – neither your Internet provider, nor the sites you visit, nor any malicious attackers – can know where you are really connecting from.

 

It is as if the virtual public square, where everyone sees and listens, turns into a closed room, invisible to those outside, at the click of a button.

 

This, in brief, is how a virtual private network works:

 

1. First, the VPN server identifies you by authenticating your client.

2. The VPN server applies an encryption protocol to all the data you send and receive, making it unreadable to anyone trying to intercept it.

3. The VPN creates a virtual, secure “tunnel” through which your data travels to its destination, so that no one can access it without authorisation.

4. The VPN wraps each data packet inside an external packet (an “envelope”) which is encrypted by encapsulation. The envelope is the essential element of the VPN tunnel that keeps your data safe during transfer.

5. When the data reaches the server, the external packet is removed through a decryption process.
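The encrypt-then-encapsulate idea in steps 2–5 can be sketched with stdlib Python. This is a deliberately simplified toy (a hash-based keystream instead of a vetted cipher such as AES-GCM, and no authentication or key exchange), meant only to show data being sealed inside an outer "envelope" and unwrapped at the far end:

```python
import base64, hashlib, json

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream for illustration only; real VPN protocols use vetted ciphers
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def encapsulate(payload: bytes, key: bytes, nonce: bytes) -> str:
    # Step 2: encrypt the inner packet; step 4: wrap it in an outer envelope
    ciphertext = xor(payload, keystream(key, nonce, len(payload)))
    envelope = {"nonce": base64.b64encode(nonce).decode(),
                "data": base64.b64encode(ciphertext).decode()}
    return json.dumps(envelope)  # this envelope is what travels through the tunnel

def decapsulate(packet: str, key: bytes) -> bytes:
    # Step 5: remove the envelope and decrypt at the far end of the tunnel
    env = json.loads(packet)
    nonce = base64.b64decode(env["nonce"])
    ciphertext = base64.b64decode(env["data"])
    return xor(ciphertext, keystream(key, nonce, len(ciphertext)))

key, nonce = b"shared-secret", b"unique-nonce"
pkt = encapsulate(b"GET /mail HTTP/1.1", key, nonce)
assert decapsulate(pkt, key) == b"GET /mail HTTP/1.1"
```

Anyone intercepting `pkt` without the key sees only the opaque envelope, which is the property the postcard analogy above is getting at.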

Using a VPN should be part of your digital hygiene

Every professional should use a VPN, not only when working remotely or using public Wi-Fi, but as an essential tool to surf more securely, privately and responsibly, day after day. You can think of a VPN as a habit of digital hygiene that provides greater privacy and an additional layer of protection against potential online threats.

A VPN:

 

● encrypts your data, protecting you from prying eyes
● changes your real IP, protecting your identity
● routes your data through remote servers, creating a secure and private tunnel
● stops your Internet provider and other third parties tracking your data.

 

To sum up, a VPN is not just a tool for special situations, like using public Wi-Fi or accessing restricted content. Neither is it only for experienced users and cybersecurity enthusiasts. On the contrary, it is an essential tool — a “must-have” — for all professionals and individuals who want to inhabit the digital space that surrounds us with greater awareness and less fear.

The post VPN: a non-technical guide for professionals appeared first on Tinexta Infocert international website.


auth0

Understanding ReBAC and ABAC Through OpenFGA and Cedar

In this blog post, we’ll explore the differences between ReBAC and ABAC with an in-depth comparison of OpenFGA and Cedar

Tuesday, 14. October 2025

Spruce Systems

Digital Identity Policy Momentum

This article is the second installment of our series: The Future of Digital Identity in America.

Read the first installment in our series on The Future of Digital Identity in America here.

Technology alone doesn’t change societies; policy does. Every leap forward in digital infrastructure (whether electrification, the internet, or mobile payments) has been accelerated or slowed by policy. The same is true for verifiable digital identity. The question today isn’t whether the technology works; it does. The question is whether policy frameworks will make it accessible, trusted, and interoperable across industries and borders.

Momentum is building quickly. State legislatures, federal agencies, and international bodies are beginning to treat verifiable digital identity not as a niche experiment, but as critical public infrastructure. In this post, we’ll explore how policy is shaping digital identity, from U.S. state laws to European regulations, and why governments care now more than ever.

States Leading the Way: Laboratories of Democracy

In the U.S., states have become the proving ground for verifiable digital identity. Seventeen states, including California, New York, and Georgia, already issue mobile driver’s licenses (mDLs) that are accepted at more than 250 TSA checkpoints. By 2026, that number is expected to double, with projections of 143 million mDL holders by 2030, according to ABI Research forecasts.

Seventeen states now issue mobile driver’s licenses accepted at more than 250 TSA checkpoints - digital ID is already real, growing faster than many expected.

California’s DMV Wallet offers one of the most comprehensive examples. In less than two years, over two million Californians have provisioned mobile driver’s licenses, which can be used at TSA checkpoints, in convenience stores for age-restricted purchases, and even online to access government services—real, everyday transactions that people recognize. In addition to the digital licenses, more than thirty million vehicle titles have been digitized using blockchain, making it easier for people to transfer ownership, register cars, or prove title history without mountains of paperwork. Businesses can verify credentials directly, residents can present them online or in person, and the system is designed to work across states and industries. In other words, this program demonstrates proof that digital identity can scale to millions of people and millions of records while solving real problems.

California’s DMV Wallet has issued over two million mDLs and has digitized over 42 million vehicle titles using blockchain - demonstrating trustworthiness at scale.

Utah took a different approach by legislating principles before widespread deployment. SB 260, passed in 2025, lays down a bill of rights for digital identity. Citizens cannot be forced to unlock their phones to present a digital ID. Verifiers cannot track or build profiles from ID use. Selective disclosure must be supported, allowing people to prove an attribute, like being over 21, without revealing unnecessary details. Digital IDs remain optional, and physical IDs must continue to be accepted. Utah’s framework shows how policy can proactively protect civil liberties while enabling innovation.

Utah’s SB 260 doesn’t just pilot identity tech - it builds in privacy and choice from day one, naming those values as rights.
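Selective disclosure, the mechanism SB 260 mandates, can be illustrated with a salted-hash sketch in the spirit of SD-JWT-style credentials. This is a minimal illustration only, not Utah's or any vendor's actual implementation; the claim names and the `salted_digest` helper are invented for the example, and a real credential would also carry an issuer signature over the digests.

```python
import hashlib
import json
import secrets

def salted_digest(name, value, salt):
    # Hash a (salt, name, value) triple; only this digest appears in the credential.
    return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

# Issuer: commit to every claim without embedding raw values in the credential body.
claims = {"name": "Alex Doe", "birth_year": 1990, "over_21": True}
salts = {k: secrets.token_hex(16) for k in claims}
credential = {k: salted_digest(k, v, salts[k]) for k, v in claims.items()}

# Holder: to prove age, reveal only the over_21 attribute together with its salt.
disclosure = {"name": "over_21", "value": claims["over_21"], "salt": salts["over_21"]}

# Verifier: recompute the digest and match it against the credential,
# learning nothing about the undisclosed name or birth year.
recomputed = salted_digest(disclosure["name"], disclosure["value"], disclosure["salt"])
assert recomputed == credential["over_21"]
```

A verifier running this check learns that the holder is over 21 but sees only opaque digests for the name and birth year, which is exactly the property SB 260 requires.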

Together, California and Utah illustrate a spectrum of policymaking. One demonstrates what’s possible with rapid deployment at scale - how quickly millions of people can adopt new credentials when the technology is made practical and widely available. The other shows how legislation can proactively embed privacy and choice into the foundations of digital identity, creating durable protections that guard against misuse as adoption grows. Both approaches are valuable: California proves the model can work in practice, while Utah ensures it works on terms that respect civil liberties. Taken together, they show that speed and safeguards are not opposing forces, but complementary strategies that, if aligned, can accelerate trust and adoption nationwide.

Federal Engagement: Trust, Security, and Compliance

Federal agencies are also stepping in, linking digital identity to national security and resilience. The Department of Homeland Security (DHS) is piloting verifiable digital credentials for immigration—a use case where both accuracy and accessibility are essential.

Meanwhile, the National Institute of Standards and Technology (NIST), through its National Cybersecurity Center of Excellence (NCCoE), has launched a hands-on mDL initiative. In collaboration with banks, state agencies, and technology vendors (including 1Password, Capital One, Microsoft, and SpruceID, among others), the project is building a reference architecture demonstrating how mobile driver’s licenses and verifiable credentials can be applied in real-world use cases: CIP/KYC onboarding, federated credential service providers, and healthcare/e-prescribing workflows. The NCCoE has already published draft CIP/KYC use-case criteria, wireframe flows, and a sample bank mDL information page to show how a financial institution might integrate and present mDLs to customers—bringing theory into usable models for regulation and deployment. 

Why the urgency? Centralized identity systems are prime targets for adversaries. Breach one large database, and millions of people’s information is compromised. Decentralized approaches change that risk equation by sharding and encrypting user data, reducing the value of any single “crown jewel” target.

Decentralized identity reshapes the risk equation—no single crown jewel database for adversaries to breach.
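The "no single crown jewel" idea can be sketched with 2-of-2 secret sharing, one simple way of sharding data so that no single store is worth breaching. This is a conceptual sketch, not how any particular identity system stores data; `split_record` and the sample record are invented for illustration, and production systems layer encryption, key management, and threshold schemes on top.

```python
import secrets

def split_record(data: bytes):
    # 2-of-2 XOR secret sharing: each shard alone is indistinguishable from
    # random noise, so a breach of either store reveals nothing about the record.
    pad = secrets.token_bytes(len(data))
    return pad, bytes(a ^ b for a, b in zip(data, pad))

def recombine(shard_a: bytes, shard_b: bytes) -> bytes:
    # Only a party holding both shards can reconstruct the original bytes.
    return bytes(a ^ b for a, b in zip(shard_a, shard_b))

record = b"license:D1234567,dob:1990-01-01"
shard_1, shard_2 = split_record(record)
assert recombine(shard_1, shard_2) == record
```

An attacker who compromises only one store holds bytes indistinguishable from random noise, which is the risk-equation shift described above.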

Policy is also catching up to compliance challenges in financial services. In July 2025, Congress passed the Guiding and Establishing National Innovation for U.S. Stablecoins (GENIUS) Act, which, among other provisions, directs the U.S. Treasury to treat stablecoin issuers as financial institutions under the Bank Secrecy Act (BSA). Section 9 of the Act requires Treasury to solicit public comment on innovative methods to detect illicit finance in digital assets, including APIs, artificial intelligence, blockchain monitoring, and (critically) digital identity verification.

Treasury’s August 2025 Request for Comment (RFC) builds directly on this mandate. It seeks input on how portable, privacy-preserving digital identity credentials can support AML/CFT and sanctions compliance, reduce fraud, and lower compliance costs for financial institutions. Importantly, the RFC recognizes privacy as a design factor, asking specifically about risks from over-collection of personal data, the sensitivity of information reviewed, and how to implement safeguards alongside compliance.

This is a significant shift: digital identity is not only being framed as a user-rights issue or a convenience feature, but also as a national security and financial stability priority. By embedding identity into the GENIUS Act’s framework for stablecoins and BSA modernization, policymakers are effectively saying that modernized, cryptographically anchored identity is essential for the resilience of U.S. markets.

The European Example: eIDAS 2.0

While the U.S. pursues a patchwork of state pilots and federal engagement, Europe has opted for a coordinated regulatory approach. In May 2024, eIDAS 2.0 came into force, requiring every EU Member State to issue a European Digital Identity Wallet by 2026.

The regulation mandates acceptance across public services and major private sectors like banks, telecoms, and large online platforms. Privacy is baked into the requirements: wallets must be voluntary and free for citizens, support selective disclosure, and avoid central databases. Offline QR options are also mandated, ensuring usability even without connectivity.

Europe is treating digital identity as a right: free, voluntary, private, and accepted across borders.

Why does this matter? For citizens, it means one-click onboarding across borders. For businesses, it means lower compliance costs and reduced fraud. For the EU, it’s a step toward digital sovereignty, reducing dependency on foreign platforms and asserting leadership in global standards.

Identity as Infrastructure

Look closely, and a pattern emerges: policymakers are treating identity as infrastructure. Like roads, grids, or communications networks, identity is a shared resource that underpins everything else. Without it, markets stumble, governments waste resources, and citizens lose trust. With it, economies run smoother, fraud drops, and individuals gain autonomy.

Identity is infrastructure—like roads or grids, it underpins every modern economy and democracy.

This framing (identity as infrastructure) helps explain why governments care now. Fraud losses are staggering, trust in institutions is fragile, and AI is amplifying risks at unprecedented speed. Policy is not just reacting to technology; it’s shaping the conditions for decentralized identity to succeed.

Risks of Policy Done Wrong

Of course, not all policy is good policy. Poorly designed frameworks could centralize power, entrench surveillance, or create vendor lock-in. Imagine if a single state-issued wallet were mandatory for all services, or if verifiers were allowed to log every credential presentation. The result would be digital identity as a tool of control, not freedom.

That’s why principles matter. Utah’s SB 260 is instructive: user consent, no tracking, no profiling, open standards, and continued availability of physical IDs. These are not just policy features; they are guardrails to keep digital identity aligned with democratic values.

Privacy as Policy: Guardrails Before Growth

Alongside momentum in statehouses and federal pilots, civil liberties organizations have raised a critical warning: digital identity cannot scale without strong privacy guardrails. Groups like the ACLU, EFF, and EPIC have cautioned that mobile driver’s licenses (mDLs) and other digital ID systems risk entrenching surveillance if designed poorly.

The ACLU’s Digital ID State Legislative Recommendations outline twelve essential protections, from banning “phone-home” tracking and requiring selective disclosure to preserving the right to paper credentials and ensuring a private right of action for violations. EFF warns that without these safeguards, digital IDs could “normalize ID checks” and make identity presentation more frequent in American life.

The message is clear: technology alone isn’t enough. Policy must enshrine privacy-preserving features as requirements, not options. Utah’s SB 260 points in this direction by mandating selective disclosure and prohibiting tracking. But the broader U.S. landscape will need consistent frameworks if decentralized identity is to earn public trust.

We'll explore these principles in greater depth in a later post in this series, where we examine how civil liberties critiques shape the design of decentralized identity and why policy and technology must work together to prevent surveillance creep.

SpruceID’s Perspective

At SpruceID, we sit at the intersection of policy and technology. We’ve helped launch California’s DMV Wallet, partnered on Utah’s statewide verifiable digital credentialing framework, and collaborated with DHS on verifiable digital immigration credentials. We also contribute to global standards bodies, such as the W3C and the OpenID Foundation, ensuring interoperability across jurisdictions.

Our perspective is simple: decentralized identity must remain interoperable, privacy-preserving, and aligned with democratic principles. Policy can either accelerate this vision or derail it. The frameworks being shaped today will determine whether decentralized identity becomes a tool for empowerment or for surveillance.

Why Governments Care Now

The urgency comes down to four forces converging at once:

1. Fraud costs are exploding. In 2024, Americans reported record losses - $16.6 billion to internet crime (FBI IC3) and $12.5 billion to consumer fraud (FTC). On the institutional side, the average U.S. data breach cost hit $10.22 million in 2025, the highest ever recorded (IBM).
2. AI is raising the stakes. Synthetic identity fraud alone accounted for $35 billion in losses in 2023 (Federal Reserve). FinCEN has warned that criminals are now using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks and exploit financial systems at scale.
3. Global trade requires interoperability. Cross-border commerce depends on reliable, shared frameworks for verifying identity. Without them, compliance costs balloon and innovation slows.
4. Citizens expect both privacy and convenience. People want frictionless, consumer-grade experiences from digital services, but they will not tolerate surveillance or being forced into a single system.

Policymakers increasingly see decentralized identity as a way to respond to all four at once. By reducing fraud, strengthening democratic resilience, supporting global trade, and protecting privacy, decentralized identity offers governments both defensive and offensive advantages.

The Policy Frontier

We are standing at the frontier of decentralized identity. States are pioneering real deployments. Federal agencies are tying identity to national security and compliance. The EU is mandating wallets as infrastructure. Around the world, policymakers are realizing that identity is not just a product, it’s the scaffolding for digital trust.

The decisions made in statehouses, federal agencies, and international bodies over the next few years will shape how identity works for decades. Done right, verifiable digital identity can become the invisible infrastructure of freedom, convenience, and security. Done wrong, it risks becoming another layer of surveillance and control.

That’s why SpruceID is working to align policy with technology, ensuring that verifiable digital identity is built on open standards, privacy-first principles, and user control. Governments care now because the stakes have never been higher. And the time to act is now.

This article is part of SpruceID’s series on the future of digital identity in America.

Subscribe to be notified when we publish the next installment.


Elliptic

$15 billion seized by US originates from Iran/China bitcoin miner "theft"

The US Department of Justice (DOJ) today announced the seizure of bitcoin worth $15 billion from Prince Group's operation of forced-labor scam compounds across Cambodia. Elliptic’s analysis shows that these bitcoins were “stolen” in 2020 from LuBian, a bitcoin mining business with operations in China and Iran.  


Prince Group targeted with $15B crypto seizure and sanctions for pig butchering operations

New sanctions target the Prince Group Transnational Criminal Organization, for its involvement in online scams such as pig butchering.

Elliptic has identified crypto wallets associated with the newly-sanctioned entities, which have received transactions worth billions of dollars.

Prince Group chairman Chen Zhi was also indicted in a U.S. court today, and has had $15 billion in Bitcoin seized. These bitcoins were previously "stolen" from a Chinese bitcoin mining business.

 


Crypto regulatory affairs: UK lifts ban on crypto ETNs for retail investors as government makes digital asset innovation push


The UK’s Financial Conduct Authority (FCA) has formally lifted a ban on the offering of cryptoasset exchange traded notes (cETNs) to retail investors - an important indication that the UK is responding to changing market dynamics in an effort to boost innovation and growth. 


Ocean Protocol

DF159 Completes and DF160 Launches

Predictoor DF159 rewards available. DF160 runs October 16th — October 23rd, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via Predictoor.

Data Farming Round 159 (DF159) has completed.

DF160 is live as of October 16th and concludes on October 23rd. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF160 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting price predictions and staking OCEAN; accurate predictions earn rewards, while stake on incorrect predictions is slashed.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF160

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean and DF Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF159 Completes and DF160 Launches was originally published in Ocean Protocol on Medium.


Spherical Cow Consulting

Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet


I had one of those chance airplane conversations recently—the kind that sticks in your mind longer than the flight itself.

My seatmate was reading a book about artificial intelligence, and at one point, they described the idea of an “infinitely growing AI.” I couldn’t help but giggle a bit. Not at them, but at the premise.

An AI cannot be infinite. Computers are not infinite. We don’t live in a world where matter and energy are limitless. There aren’t enough chips, fabs, minerals, power plants, or trained engineers to sustain an infinite anything.

This isn’t just a nitpicky detail about science fiction. It gets at something I’ve written about before:

- In Who Really Pays When AI Agents Run Wild?, I noted that scaling AI systems isn’t just about clever protocols or smarter algorithms. Every prompt, every model run, every inference carries a cost in water, energy, and hardware cycles.
- In The End of the Global Internet, I argued that we are already moving toward a fractured network where national and regional policies shape what’s possible online.

The “infinite AI” conversation is an example that ties both threads together. We may dream about global systems that grow without end, but the reality is that technology is built on finite supply chains. It’s those supply chains that are turning out to be the real bottleneck for the future of the Internet.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The real limits aren’t protocols

When people in the identity and Internet standards space talk about limits, we often point to protocols. Can the protocol scale? Will a new protocol successfully replace cookies? Can we use existing protocols to manage delegation across ecosystems?

These are important questions, but they are not the limiting factor. Protocols, after all, are words in documents and lines of code. They can be revised, extended, and reinvented. The hard limits come from the physical world.

- Chips and fabs. Advanced semiconductors require fabrication plants that cost tens of billions of dollars and take years to build. Extreme ultraviolet lithography machines (say that five times, fast) are produced (as of 2023) by exactly one company in the Netherlands—ASML—and delivery schedules are measured in years.
- Minerals and materials. Every computer depends on a handful of rare inputs: lithium for batteries, cobalt for electrodes, rare earth elements for magnets, neon for chipmaking lasers, high-purity quartz for wafers. These are not evenly distributed across the globe. China dominates rare earth refining, while Ukraine has been a critical source of neon. And there is no substitute for water in semiconductor production.
- Power and cooling. Training a frontier AI model consumes gigawatt-hours of electricity. Running hyperscale data centers requires water for cooling that rivals the consumption of entire towns. When power grids are strained, there’s no protocol that can fix it.
- People. None of this runs itself. Chip designers, process engineers, cleanroom technicians, miners, metallurgists—these are highly specialized roles. Many countries face aging workforces, immigration restrictions in today’s tech hubs, and uneven education in the regions where populations are booming.

You can’t standardize your way out of these shortages. You can only manage, redistribute, or adapt to them.

Geopolitics and demographics

The Internet was often described as “borderless,” but the hardware that makes it run is anything but. Supply chains for semiconductors, network equipment, and the minerals that feed them are deeply entangled with geopolitics and demographics.

No region has a fully independent pipeline:

- The US leads in chip design but depends on the Indo-Pacific region for chip manufacturing.
- China dominates rare earth refining but relies on imports of high-end chipmaking tools it cannot yet build domestically.
- Europe has niche strengths in lithography and specialty equipment but lacks the scale for end-to-end independence.
- Countries like Japan, India, and Australia supply critical inputs—from silicon wafers to rare earth ores—but not the whole stack.

This interdependence is not an accident. Globalization optimized supply chains for efficiency, not resilience. Each region specialized in the step where it had a comparative advantage, creating a finely tuned but fragile web.

Demographics add another layer. Many of the most skilled engineers in chip design and manufacturing are reaching retirement age. The same is true for technical standards architects; they are an aging group. Training replacements takes years, not months. Immigration restrictions in key economies further shrink the talent pool. Even if we had the minerals and the fabs, we might not have the people to keep the pipelines running.

The illusion of global resilience

For decades, efficiency reigned supreme. Tech companies embraced just-in-time supply chains. Manufacturers outsourced to the cheapest reliable suppliers. Investors punished redundancy as waste.

That efficiency gave us cheap smartphones, affordable cloud services, and rapid AI innovation. But it also created a brittle system. When one link in the chain breaks, the effects cascade:

- A tsunami in Japan or a drought in Taiwan can disrupt global chip supply.
- A geopolitical dispute can halt exports of critical minerals overnight.
- A labor strike at a port can ripple through shipping networks for months.

We saw this during the 2020–2023 global chip shortage. A pandemic-driven demand spike collided with supply chain shocks: a fire at a Japanese chip plant, drought in Taiwan, and war in Ukraine cutting off neon supplies. Automakers idled plants. Consumer electronics prices rose. Lead times stretched into years.

AI at scale only magnifies the problem. Training one large model requires thousands of specialized GPUs. If one upstream material is constrained—say, the gallium used in semiconductors—it doesn’t matter how advanced your algorithms are. The model doesn’t get trained.

Cross-border dependencies never vanish

This is where the conversation loops back to the idea of a “global Internet.” Even if the Internet fragments into national or regional spheres—the “splinternet” scenario—supply chains remain irreducibly cross-border.

You can build your own national identity system. You can wall off your data flows. But you cannot build advanced technology entirely within your own borders without enormous tradeoffs.

- A U.S. data center may run on American-designed chips, but those chips likely contain rare earths refined in China.
- A Chinese smartphone may use domestically assembled components, but the photolithography machine that patterned its chips came from Europe.
- An EU-based AI startup may host its models on European servers, but the GPUs were packaged and tested in Southeast Asia.

Fragmentation at the protocol and governance level doesn’t erase these dependencies. It only adds new layers of complexity as governments try to manage who trades with whom, under what terms, and with what safeguards.

The myth of “digital sovereignty” often ignores the material foundations of technology. Sovereignty over protocols does not equal sovereignty over minerals, fabs, or skilled labor.

Opportunities in regional diversity

If infinite AI is impossible and total independence is unrealistic, what’s left? One answer is regional diversity.

Instead of assuming we can build one perfectly resilient global supply chain, we can design multiple overlapping regional ones. Each may not be fully independent, but together they reduce the risk of “one failure breaks all.”

Examples already in motion:

- United States. The CHIPS and Science Act is pouring billions into domestic semiconductor manufacturing (though how long that act will be in place is in question). The U.S. is also investing in rare earth mining and processing, though environmental and permitting challenges remain.
- European Union. The EU Raw Materials Alliance is working to secure critical mineral supply and recycling. European firms already lead in certain high-end equipment niches.
- Japan and South Korea. Both countries are investing in duplicating supply chain segments currently dominated by China, such as battery materials.
- India. The country has ambitious plans to build local chip fabs and become a global assembly hub.
- Australia and Canada. Positioned as suppliers of critical minerals, both are working to move beyond extraction to refining.

Regional chains come with tradeoffs: higher costs, slower rollout, and sometimes redundant investments. But they create buffers. If one region falters, others can pick up slack.

They also open the door to more design diversity. Different regions may approach problems in distinct ways, leading to innovation not just in technology but in governance, regulation, and labor practices.

Reframing the narrative

So let’s come back to that airplane conversation. The myth of infinite AI (or infinite cloud computing, for that matter) isn’t just bad science fiction. It’s a misunderstanding of how technology actually grows.

AI, like the Internet itself, is bounded by the real world. Protocols matter, but they are only the top layer. Beneath them are the chips, the minerals, the power, and the people. Those are the constraints that will shape the next decade.

Which leads us to the current irony in all of this: even as the Internet fragments along political and regulatory lines, the supply chains that support it remain irreducibly global. We can argue about governance models and sovereignty all we like and target tariffs at a whim, but a smartphone or a GPU is still a planetary collaboration.

The challenge, then, isn’t to pretend we can achieve total independence. It’s to design supply chains—local, regional, and global—that acknowledge these limits and build resilience into them.

Looking ahead

When I wrote about The End of the Global Internet, I wanted to show that fragmentation is not just possible, but already happening. But fragmentation doesn’t erase interdependence. It just makes it messier.

When I wrote about Who Pays When AI Agents Run Wild? I wanted to point out that scaling computation is not a free lunch. It comes with bills measured in electricity, water, and silicon.

This post ties both threads together: the real bottlenecks in technology are not the protocols we argue about in standards meetings. They are the supply chains that determine whether the chips, power, minerals, and people exist in the first place.

AI is a vivid example because its appetite is so enormous. But the lesson applies more broadly. The Internet is fracturing into spheres of influence, but those spheres will remain bound by the physical pipelines that crisscross borders.

So the next time someone suggests an infinite AI, or a fully sovereign domestic Internet, remember: computers aren’t infinite. Supply chains aren’t sovereign. The real question isn’t whether we can break free of those facts, it’s how we design systems that can thrive within them.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:29] Welcome back to The Digital Identity Digest. I’m Heather Flanagan, and today, we’re going to dig into one of those invisible but very real limits on our digital future — supply chains.

[00:00:42] Now, I know supply chains don’t sound nearly as exciting as AI agents or new Internet protocols. But stay with me — because without the physical stuff (chips, minerals, power, and people), all of those clever protocols and powerful algorithms don’t amount to much.

[00:01:00] This episode builds on two earlier posts:

- Who Really Pays for AI? — exploring how AI comes with a bill in water, electricity, and silicon.
- The End of the Global Internet — examining how fragmentation is reshaping the network itself.

Both lead us here: the supply chain is one of the biggest constraints on how far both AI and the Internet can actually go.

[00:01:27] So, if you really want to understand the future of technology, you can’t just look at the code or the protocols.

[00:01:35] You have to look at the supply chains.

The Reality Check: Technology Needs Stuff

[00:01:38] Let’s start with a story. On a recent flight, my seatmate was reading a book about artificial intelligence. Go him.

[00:01:49] At one point, he leaned over and described an idea of an infinitely growing AI.

[00:01:56] I couldn’t help but laugh a little — because computers are not infinite.

[00:02:04] There just aren’t enough chips, fabs, minerals, power plants, or trained people on the planet to sustain infinite anything. It’s not imagination — it’s physics, chemistry, and labor.

[00:02:20] That exchange captured something I keep seeing in conversations about AI, identity, and the Internet. We treat protocols as if they’re the bottleneck. But ultimately, it’s the supply chains underneath that constrain everything.

Chips, Fabs, and the Fragility of Progress

[00:02:38] Let’s break that down — starting with chips and fabricators, also known as fabs.

[00:02:44] The most advanced semiconductors come from fabrication plants that cost tens of billions of dollars to build — and take years, even a decade, to come online.

[00:02:56] And the entire process hinges on one company — ASML in the Netherlands.

[00:03:03] They’re the only supplier of extreme ultraviolet lithography machines. Without those, you simply can’t make the latest generation of chips. The backlog? Measured in years.

[00:03:21] Then there’s the issue of minerals and materials:

- Lithium for batteries
- Cobalt for electrodes
- Rare earth elements for magnets
- Neon for chipmaking lasers
- High-purity quartz for wafers

[00:03:44] These resources aren’t evenly distributed. China refines most rare earths. Ukraine supplies much of the world’s neon. And water — another critical input — is also unevenly available.

Power, People, and Production

[00:04:05] A frontier AI model doesn’t just use a lot of electricity — it uses gigawatt-hours of power.

[00:04:26] Running a hyperscale data center can consume as much water as a small city. And when power grids are strained, no clever standard can conjure new electrons out of thin air.

[00:04:26] Then there’s the people. None of this runs itself:

Chip designers
Process engineers
Clean room technicians
Miners and metallurgists

[00:04:57] These are highly specialized roles — and many experts are nearing retirement. Replacing them takes years, not months. Immigration limits compound the challenge.

[00:05:05] So yes, protocols matter — but the real limits come from the physical world.

Geopolitics and the Global Supply Web

[00:05:16] The Internet may feel borderless, but the hardware that makes it work is not.

[00:05:26] Every link in the supply chain is tangled in geopolitics:

The U.S. leads in chip design but depends on Taiwan and South Korea for manufacturing.
China dominates rare earth refining but still relies on imported chipmaking tools.
Europe has niche strengths in lithography but lacks materials for full independence.
Japan, India, and Australia provide key raw inputs but not the entire production stack.

[00:06:16] This global interdependence made systems efficient — but also fragile.

Demographics: The Aging Workforce

[00:06:21] There’s also a demographic angle. Skilled engineers and technicians are aging out.

[00:06:35] In about 15 years, we’ll see significant skill gaps. Even if minerals and fabs are available, we might not have the people to keep things running.

[00:06:58] The story isn’t just about where resources are — it’s about who can use them.

The Illusion of Resilience

[00:07:06] For decades, efficiency ruled. Tech companies built “just-in-time” supply chains, outsourcing to low-cost, reliable suppliers.

[00:07:21] That gave us cheap smartphones and rapid innovation — but also brittle systems.

[00:07:38] A few reminders of fragility:

2011: Tsunami in Japan disrupts semiconductor production.
2021: Drought in Taiwan forces fabs to truck in water.
2022: War in Ukraine cuts off neon supplies.
2020–2023: Global chip shortage reveals how fragile everything truly is.

[00:08:18] AI at scale only magnifies this fragility. Even one constrained resource, like gallium, can halt model training — regardless of how advanced the algorithms are.

The Splinternet Still Needs a Global Supply Chain

[00:08:48] Even as the Internet fragments into regional “Splinternets,” supply chains remain global.

[00:09:18] You can wall off your data, but you can’t build advanced tech entirely within one nation’s borders.

Examples include:

A U.S. data center using chips refined with Chinese minerals.
A Chinese smartphone using European lithography tools.
An EU startup running on GPUs packaged in Southeast Asia.

[00:09:46] Fragmentation adds complexity, not independence.

The Myth of Digital Sovereignty

[00:09:46] The idea of total “digital sovereignty” sounds empowering — but it’s misleading.

[00:10:07] You can control protocols, standards, and regulations.
But you can’t control:

Minerals you don’t have
Fabricators you can’t build
Workforces you can’t train

Designing Resilient Regional Systems

[00:10:14] So, what’s the alternative? Regional diversity.

Instead of one global, fragile chain, we can build multiple overlapping regional systems:

U.S.: The CHIPS and Science Act investing in domestic semiconductor manufacturing.
EU: The Raw Materials Alliance strengthening mineral supply and recycling.
Japan & South Korea: Building redundancy in battery and material supply.
India: Launching its “Semiconductor Mission.”
Australia & Canada: Expanding refining capacity for critical minerals.

[00:11:38] Yes, these efforts are costlier and slower — but they build buffers. If one region falters, another can pick up the slack.

The Takeaway: Infinite AI is a Myth

[00:12:06] That airplane conversation sums it up. The myth of infinite AI isn’t just science fiction — it’s a misunderstanding of how technology works.

[00:12:17] AI, like the Internet, is bounded by the real world — by chips, minerals, power, and people.

[00:12:45] Even as the Internet fragments, its supply chains remain irreducibly global.

[00:13:02] The challenge isn’t escaping these limits — it’s designing systems that thrive within them.

Closing Thoughts

[00:13:27] The real bottleneck in technology isn’t protocols — it’s supply chains.

[00:13:48] AI is just the most visible example of how finite our digital ambitions are.

[00:14:13] So, the next time you hear someone talk about “infinite AI” or a “sovereign Internet,” remember:

Computers are not infinite. Supply chains cannot be sovereign.

[00:14:19] The real question isn’t how to escape those facts — it’s how to build systems that can thrive within them.

Outro

[00:14:19] Thanks for listening to The Digital Identity Digest.

If you enjoyed the episode:

Share it with a colleague or friend.
Connect with me on LinkedIn @hlflanagan.
Subscribe and leave a rating wherever you listen to podcasts.

[00:15:02] You can also find the full written post at sphericalcowconsulting.com.

Stay curious, stay engaged — and let’s keep the conversation going.

The post Why Tech Supply Chains, Not Protocols, Set the Limits on AI and the Internet appeared first on Spherical Cow Consulting.


Recognito Vision

The Complete Guide to KYC Verification Online and How It Protects Your Identity


You’ve probably seen that pop-up asking you to verify your identity when signing up for a new banking app or wallet. That’s KYC, short for Know Your Customer. It helps businesses confirm that users are real, not digital impostors trying to pull a fast one.

In the old days, this meant long queues, forms, and signatures. Today, KYC verification online makes that process digital, instant, and painless.

Here’s how the two compare.

| Feature | Traditional KYC | Online KYC Verification |
| --- | --- | --- |
| Time Taken | Days or weeks | A few minutes |
| Method | Manual paperwork | Automated verification |
| Accuracy | Prone to error | AI-based precision |
| Accessibility | Branch visits required | Anywhere, anytime |
| Security | Paper-based | Encrypted and biometric |

According to Deloitte’s “Revolutionising Due Diligence for the Digital Age”, digital verification and automation can drastically improve compliance efficiency and customer experience, both of which are central to modern financial services.

That’s why KYC verification online has become the backbone of secure onboarding for fintechs, banks, and even government platforms.

 

How KYC Verification Online Actually Works

When you perform a KYC check online, it feels quick and effortless, but behind that simple process, powerful AI is doing the hard work. It matches your selfie with your ID, reads your details using OCR, and cross-checks everything with trusted databases, all in seconds.

Here’s what’s really happening:

You upload your ID (passport, driver’s license, or national ID).

You take a quick selfie using your phone camera.

The system compares your selfie to the photo on your ID using advanced facial recognition.

OCR (Optical Character Recognition) extracts the text from your ID to verify your name, address, and date of birth.

Data is validated against government or regulatory databases.

You get approved often in under two minutes.

That’s KYC authentication in action: fast, secure, and contact-free.
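The flow above can be sketched as a small decision pipeline. This is a minimal illustration, not any vendor's implementation: the embeddings, field names, registry, and the 0.6 threshold are all hypothetical stand-ins for what a production face-recognition model and government database lookup would provide.

```python
from dataclasses import dataclass

@dataclass
class KycResult:
    face_match: bool
    fields_valid: bool
    approved: bool

def match_faces(selfie_embedding, id_embedding, threshold=0.6) -> bool:
    # Cosine similarity between face embeddings; a real system would use
    # a trained face-recognition model to produce these vectors.
    dot = sum(a * b for a, b in zip(selfie_embedding, id_embedding))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(selfie_embedding) * norm(id_embedding)) >= threshold

def validate_fields(ocr_fields: dict, registry: dict) -> bool:
    # Cross-check OCR-extracted fields against a trusted registry record.
    return all(registry.get(k) == v for k, v in ocr_fields.items())

def run_kyc_check(selfie_emb, id_emb, ocr_fields, registry) -> KycResult:
    face_ok = match_faces(selfie_emb, id_emb)
    fields_ok = validate_fields(ocr_fields, registry)
    return KycResult(face_ok, fields_ok, face_ok and fields_ok)

# Toy data: identical embeddings and a matching registry record -> approved.
result = run_kyc_check(
    selfie_emb=[0.1, 0.9, 0.4],
    id_emb=[0.1, 0.9, 0.4],
    ocr_fields={"name": "Jane Doe", "dob": "1990-01-01"},
    registry={"name": "Jane Doe", "dob": "1990-01-01"},
)
print(result.approved)  # True
```

Approval requires both checks to pass, which mirrors why a stolen ID photo alone fails the selfie match and a forged ID fails the registry cross-check.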

According to the NIST Face Recognition Vendor Test (FRVT), today’s leading algorithms are over 20 times more accurate than those used just a decade ago. That leap in precision is one reason why eKYC verification is now trusted by global banks and fintech companies.

Why Businesses Are Switching to KYC Verification Online

No one enjoys filling out endless forms or waiting days for approvals. That’s why businesses everywhere are turning to KYC verify online systems; they make onboarding smoother for customers while cutting costs for organizations.

Some of the biggest reasons behind this shift include:

Faster onboarding times that enhance customer experience.

Greater accuracy from AI-powered checks.

Enhanced fraud detection through biometric validation.

Regulatory compliance with frameworks like GDPR.

Global accessibility for users to verify KYC online anytime, anywhere.

Research by Deloitte Insights notes that organizations automating due diligence and verification processes reduce manual costs while increasing compliance accuracy, a huge win for financial institutions managing high user volumes.

Simply put, online KYC check systems help companies onboard customers faster while minimizing human error and fraud.

Technology Behind Modern KYC Verification Solutions

Every smooth verification process is powered by some serious tech muscle.

Artificial Intelligence (AI) helps detect fraudulent IDs and spot manipulation patterns in photos.
Machine learning continuously improves accuracy by learning from new data.
Facial recognition verifies your selfie against your ID photo with pinpoint precision, tested under the NIST FRVT benchmark.

Meanwhile, Optical Character Recognition (OCR) pulls data from your documents instantly, and encryption technologies protect that data as it moves across systems.
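To make the OCR step concrete, here is a small sketch of turning raw recognized text into structured fields. The sample text and field labels are hypothetical; a real pipeline would also normalize dates and tolerate OCR misreads.

```python
import re

# Hypothetical raw text as an OCR engine might return it from an ID card.
raw = """NAME: JANE DOE
DOB: 01 JAN 1990
ID NO: X1234567"""

def parse_id_text(text: str) -> dict:
    # Pull labelled fields out of OCR output with simple patterns.
    patterns = {
        "name": r"NAME:\s*(.+)",
        "dob": r"DOB:\s*(.+)",
        "id_number": r"ID NO:\s*(.+)",
    }
    return {
        field: m.group(1).strip()
        for field, pattern in patterns.items()
        if (m := re.search(pattern, text))
    }

fields = parse_id_text(raw)
print(fields)  # {'name': 'JANE DOE', 'dob': '01 JAN 1990', 'id_number': 'X1234567'}
```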

For developers and organizations wanting to implement their own KYC verification solutions, Recognito’s face recognition SDK and ID document recognition SDK are reliable tools that simplify integration.

You can also explore Recognito’s GitHub repository to see how real-time AI verification systems evolve in practice.

 

How to Verify Your KYC Online Without the Hassle

If you haven’t tried KYC verification online yet, it’s simpler than you think. Just open the app, upload your ID, take a selfie, and let the system handle the rest.

Most platforms now allow you to check online KYC status in real time. You’ll see exactly when your verification moves from “in review” to “approved.”

Curious about how it all works behind the scenes? Try the ID Document Verification Playground. It’s an interactive way to see how modern KYC systems scan, process, and authenticate IDs, with no real data required.

According to Allied Market Research, the global eKYC verification market is expected to reach nearly $2.4 billion by 2030, growing at over 22% CAGR. That surge shows just how essential digital KYC has become to the future of online services.

The Future of KYC Authentication

The next generation of KYC authentication is going to feel almost invisible. Biometric technology and AI are merging to make verification instant; imagine unlocking your account just by looking at your camera.

In India, systems like UIDAI’s Aadhaar e-KYC have already transformed how millions of users open bank accounts and access government services. It’s fast, paperless, and secure.

Global research by PwC on Digital Identity predicts that the world is moving toward a unified digital identity model, one verified profile for all services, from banking to healthcare.

This is the future of KYC identity verification: a seamless, secure, and user-friendly process that builds trust without slowing you down.

 

Final Thoughts

In the end, KYC verification online is about more than compliance; it’s about confidence. It ensures that businesses and customers can interact safely in an increasingly digital world.

It eliminates paperwork, reduces fraud, and makes onboarding faster and smarter. That’s progress everyone can appreciate.

If you’re a business exploring modern KYC verification solutions, check out Recognito. Their AI-powered technology helps companies verify identities accurately, comply with regulations, and create frictionless user experiences.

 

Frequently Asked Questions

 

1. How does KYC verification online work?

You upload your ID, take a selfie, and the system checks both using AI. KYC verification online confirms your identity in just a few minutes.

 

2. Is eKYC verification safe to use?

Yes, eKYC verification is secure since it uses encryption and biometric checks. Your personal data stays protected throughout the process.

 

3. What do I need to verify my KYC online?

To verify KYC online, you only need a valid government ID and a selfie. The rest is handled automatically by the system.

 

4. Why are companies using online KYC checks now?

Businesses use online KYC check systems because they’re faster and help prevent fraud. It also makes onboarding easier for users.

 

5. What makes a good KYC verification solution?

A great KYC verification solution should be fast, accurate, and compliant with privacy laws. It should make KYC identity verification simple for both users and companies.

Monday, 13. October 2025

1Kosmos BlockID

FedRAMP Levels Explained & Compared (with Recommendations)

The post FedRAMP Levels Explained & Compared (with Recommendations) appeared first on 1Kosmos.

HYPR

The Salesforce Breach Is Every RevOps Leader’s Nightmare: How to Secure Connected Apps

The RevOps Tightrope: When "Just Connect It" Becomes a Breach Vector

If you're in Revenue Operations, Marketing Ops, or Sales Ops, your core mandate is velocity. Every week, someone needs to integrate a new tool: "Can we connect Drift to Salesforce?" "Can we push this data into HubSpot?" "Can you just give marketing API access?" You approve the OAuth tokens, you connect the "trusted" apps, and you enable the business to move fast. You assume the security team has your back.

But the ShinyHunters extortion spree that surfaced this year, targeting Salesforce customer data, exposed the deadly vulnerability built into that convenience-first trust model. This wasn't just a "cyber event" for the security team; it was a devastating wake-up call for every operator who relies on that data. Suddenly, every connected app looks like a ticking time bomb, filled with sensitive PII, contact records, and pipeline data.

Anatomy of the Attack: Hacking Authorization, Not Authentication

The success of the ShinyHunters campaign wasn't about a software bug or a cracked password. It was about trusting the wrong thing. The attackers strategically bypassed traditional MFA by exploiting two key vectors: OAuth consent and API token reuse.

Path 1: The Fake "Data Loader" That Wasn't (OAuth Phishing)

The most insidious vector involved manipulating human behavior through advanced vishing (voice phishing).

Attackers impersonated internal IT support, creating urgency to trick an administrator. Under the pretext of fixing an urgent issue, the victim was directed to approve a malicious Connected App—often disguised as a legitimate tool like a Data Loader.

The result was the same as a physical breach: the employee, under false pretenses, granted the attacker’s malicious app a valid, persistent OAuth access token. This token is the backstage pass—it gave the attacker free rein to pull vast amounts of CRM data via legitimate APIs, quietly and without triggering MFA or login-based alerts.

Path 2: Token Theft in the Shadows (API Credential Reuse)

The parallel vector targeted tokens from already integrated third-party applications, such as Drift or Salesloft.

Attackers compromised these services to steal their existing OAuth tokens or API keys used for the Salesforce integration.
These stolen tokens act like session cookies: they are valid, silent, and allow persistent access to Salesforce data without ever touching a login page.
Crucially, once stolen, these tokens can be reused until revoked, representing an open back door into your most valuable data.

Both paths point to a single conclusion: your digital ecosystem is built on convenience-first trust, and in the hands of sophisticated attackers, trust is the ultimate exploitable vulnerability.

The Trust Problem: Securing Logins, Not Logic

For years, security focused on enforcing strong MFA and password rotation. But the ShinyHunters campaign proved that this focus is too narrow.

You can enforce the best MFA, rotate passwords monthly, and check all your compliance boxes. But if an attacker can:

Convince an employee to approve a fake OAuth app, or
Steal a token that never expires from an integration

...then everything else is just window dressing.

The uncomfortable truth for RevOps is that attackers are not exploiting a zero-day; they are hacking how you work. The industry-wide shift now, led by NIST and CISA, is toward phishing-resistant authentication. Why? Because the weak spots exploited in this breach - reusable passwords and phishable MFA - are eliminated when you replace them with cryptographic, device-bound credentials.

Where HYPR Fits In: Making Identity Deterministic, Not Trust-Based

HYPR was built for moments like this—when the mantra "never trust, always verify" must transition from a slogan into an operational necessity. Our Identity Assurance platform delivers the deterministic certainty needed to stop both forms of token theft cold.

Here’s how HYPR's approach prevents these breach vectors:

Eliminating Shared Secrets: HYPR Authenticate uses FIDO2-certified passwordless authentication. There is no password or shared secret for attackers to steal, replay, or trick a user into approving. This automatically eliminates the phishable vector used in Path 1.
Domain Binding Stops OAuth Phishing: FIDO Passkeys are cryptographically bound to the specific URL of the service. If an attacker tries to trick a user into authenticating on a malicious domain (OAuth phishing), the key will not match the registered domain, and the authentication will fail instantly and silently.
Deterministic Identity Proofing for High-Risk Actions (HYPR Affirm): Granting new app privileges is a high-risk action. HYPR Affirm brings deterministic identity proofing—using live liveness checks, biometric verification, and document validation—before any credential or app authorization is granted. This stops social engineering attacks aimed at the help desk or an administrator because you ensure the person making the request is the rightful account owner.
No Unchecked Trust (HYPR Adapt): Every high-risk action - whether it’s a new device enrollment, a token reset, or a highly-privileged connected app approval - can trigger identity re-verification. If your HYPR Adapt risk engine detects anomalous API activity (Path 2), it can dynamically challenge the user to re-authenticate with a phishing-resistant passkey, immediately revoking the session/token until certainty is established.

This platform isn't about simply locking things down; it's about building secure, efficient systems that can verify who is on the other end with cryptographic certainty.

Next Steps for RevOps: Championing the Identity Perimeter

The Salesforce breach was about trust at scale. As RevOps leaders, you need to protect not just the data, but how that data is accessed and shared.

Here is what you must prioritize now:

Revisit Your Integrations: Know which connected apps have offline access and broad permissions (e.g., refresh_token, full) to your Salesforce data - and ruthlessly trim the list to only essential tools.
Automate Least Privilege: Implement a policy for temporary tokens and expiring scopes. Move away from permanent credentials where possible, forcing periodic re-consent.
Champion Phishing-Resistant MFA: Make FIDO2 Passkeys the minimum baseline for every high-value user and administrator. Anything less is a calculated risk you can’t afford.
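The first step above, trimming over-broad grants, can start with a simple scope audit. This is an illustrative sketch only: the app names and scope inventory are made up, and in practice you would pull this list from your Salesforce org's connected-app settings rather than hard-code it.

```python
# Scopes that grant broad or persistent access and deserve scrutiny.
HIGH_RISK_SCOPES = {"full", "refresh_token", "offline_access"}

# Hypothetical inventory of connected apps and their granted OAuth scopes.
connected_apps = [
    {"name": "Marketing Chat Tool", "scopes": {"api", "refresh_token"}},
    {"name": "Reporting Dashboard", "scopes": {"api"}},
    {"name": "Legacy Data Loader", "scopes": {"full", "refresh_token"}},
]

def flag_overprivileged(apps, risky=HIGH_RISK_SCOPES):
    """Return apps holding any high-risk scope, worst offenders first."""
    flagged = [
        (app["name"], sorted(app["scopes"] & risky))
        for app in apps
        if app["scopes"] & risky
    ]
    return sorted(flagged, key=lambda item: len(item[1]), reverse=True)

for name, scopes in flag_overprivileged(connected_apps):
    print(f"{name}: review scopes {scopes}")
```

Apps surfaced by a check like this are the candidates for expiring tokens, narrower scopes, or removal.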

The uncomfortable truth is: Attackers did not utilize brute force - they strategically weaponized OAuth consent and token theft. The good news is that passwordless, phishing-resistant authentication would have stopped both paths cold.

Unlock the pipeline velocity you need with the deterministic security you can trust.

👉 Request a Demo of the HYPR Identity Assurance Platform Today.


Holochain

Dev Pulse 152: Wind Tunnel Updates, Holo Edge Node Container

Dev Pulse 152
Wind Tunnel gets reports, automation, multiple conductors

All the hard work put into Wind Tunnel, our scale testing suite, is starting to become visible! We’re now collecting metrics from both the host OS and Holochain, in addition to the scenario metrics we’d already been collecting (where zome call time and arbitrary scenario-defined metrics could be measured). We’re also running scenarios on an automated schedule and generating reports from them. Our ultimate goals are to be able to:

monitor releases for performance improvements and regressions,
identify bottlenecks for improvement, and
turn report data into release-specific information you can use and act upon in your app development process.

Finally, Wind Tunnel is getting the ability to select a specific version of Holochain right from the test scenario, which will be useful for running tests on a network with a mix of different conductors. It also saves us some dev ops headaches, because the right version for a test can be downloaded automatically as needed.

Holochain 0.6: roughly two (ideal) weeks remaining

Our current estimates predict that Holochain 0.6’s first release will take about two team-weeks to complete. Some of the dev team is focused on Wind Tunnel and other tasks, so this may not mean two calendar weeks, but it’s getting closer. To recap what we’ve shared in past Dev Pulses, 0.6 will focus on:

Warrants — reporting validation failures to agent activity authorities, who collect and supply these warrants to anyone who asks for them. As soon as an agent sees and validates a warrant, they retain it and block the bad agent, even if they aren’t responsible for validating the agent’s data. If the warrant itself is invalid (that is, the warranted data is valid), the authority issuing the warrant will be blocked. Currently warrants are only sent in response to a get_agent_activity query; in the future, they’ll be sent in response to other DHT queries too.
Blocking — the kitsune2 networking layer will allow communication with remote agents to be blocked, and the Holochain layer will use this to block agents after a warrant against them is discovered.
Performance improvements — working with Unyt, we’ve discovered some performance issues with must_get_agent_activity and get_agent_activity which we’re working on improving.
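The warrant rule described above can be condensed into one branch: a warrant that holds blocks the warranted agent, while a bogus warrant blocks its issuer. This is an illustrative sketch of that logic, not Holochain's actual implementation; the dictionary keys and validity check are invented for the example.

```python
def handle_warrant(warrant, data_is_valid, blocked: set) -> set:
    """Apply the warrant rule to a set of blocked agent IDs."""
    if not data_is_valid(warrant["warranted_data"]):
        # Warrant holds: the warranted data really is invalid.
        blocked.add(warrant["against_agent"])
    else:
        # Warrant is itself invalid: block the issuing authority.
        blocked.add(warrant["issued_by"])
    return blocked

blocked = set()
handle_warrant(
    {"against_agent": "agent_a", "issued_by": "authority_1",
     "warranted_data": "tampered"},
    data_is_valid=lambda d: d != "tampered",
    blocked=blocked,
)
print(blocked)  # {'agent_a'}
```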
Open-source Holo Edge Node

You have probably already seen the recent announcements from Holochain and Holo (or the livestream), but if not, here’s the news from the org: Holo is open-sourcing its always-on node software in an OCI-compliant container called Edge Node.

This is going to do a couple things for hApp developers:

make it easier to spin up always-on nodes to provide data availability and redundancy for your hApp networks,
provide a base dockerfile for devs to add other services to — maybe an SMS, email, payment, or HTTP gateway for your hApp, and
allow more hosts to set up nodes, because Docker is a familiar distribution format

I think this new release connects Holo back to its roots — the decentralised, open-source values that gave birth to it — and we hope that’ll mean more innovation in the software that powers the Holo network. HoloPort owners will need to be handy with the command line, but a recent survey found that almost four fifths of them already are.

So if you want to get involved, either to bootstrap your own infrastructure or support other hApp creators and users, here’s what you can do:

Download the latest HolOS ISO for HoloPorts, other hardware, VMs, and cloud instances.
Download the Edge Node container for Docker, Kubernetes, etc.
Get in touch with Rob from Holo on the Holo Forum, the Holo Edge Node Support Telegram, Calendly, or the DEV.HC Discord (you’ll need to self-select Access to: Projects role in the #select-a-role channel, then go to the #always-on-nodes channel).
Join the regular online Holo Huddle calls for support (get access to these calls by getting in touch with Rob above).
Soon, there’ll be a series of Holo Forge calls for people who want to focus on building the ecosystem (testing, modifying the Edge Node container, etc).
Next Dev Office Hours: 15 Oct 2025

Join us on the DEV.HC Discord at 16:00 UTC for the next Dev Office Hours call — bring your ideas, questions, projects, bugs, and hApp development challenges to the dev team, where we’ll do our best to respond to them. See you there!


Dock

Introduction to Decentralized Identity [Video + Takeaways]


Decentralized identity is becoming the backbone of how organizations, governments, and individuals exchange trusted information.

In this live workshop, Agne Caunt (Product Owner, Dock Labs) and Richard Esplin (Head of Product, Dock Labs) guided learners through the foundations of decentralized identity: how digital identity models have evolved, the Trust Triangle that powers verifiable data exchange, and the technologies behind it: from verifiable credentials to DIDs, wallets, and biometric-bound credentials.

Below are the core takeaways from the session.


Kin AI

Kinside Scoop 👀 #15

Accounts, memory, and more upcoming features

Hey folks 👋

Following the rapid-fire releases in the last few newsletters, we have a quieter one for you this edition.

Everyone’s busy working on some bigger features and editions to Kin, meaning not much has gone out in the last two weeks.

So instead, this’ll be a sneak peek into what’s coming really soon - with the usual super prompt at the end for you.

What (will be) new with Kin 🕑 Your Kin, expanded 🌱

The biggest change coming up is our rollout of Kin Accounts. Don’t worry: these accounts won’t store any of your conversation data - just some minimal basics that we’ll keep secure.

We’ll be introducing Kin Accounts to lay the groundwork for multi-device sync (which inches closer!), more integrations into Kin, and eventually Kin memberships.

More information on Kin Accounts, and what we mean by “minimal basics”, will come out soon too, so you stay fully informed.

More personal advisors and notifications 🧩

Off the back of the positive feedback for the advisor updates covered in the last edition, we’re continuing to expand their personalities and push notification abilities.

Very soon, you’ll notice that each advisor feels even more unique, more understanding of you, and more suited to their role - both in chat and in push notifications.

And in case you missed it, you have full control over push notification frequency. If you want to hear from an advisor more often while outside Kin, you can turn it up in that advisor’s edit tab on the home screen - and if you want to hear from them less, you can turn it down.

Memory continues to grow 🧠

Memory appears in these updates almost every time - and that’s because we really are working on it almost every week.

The imminent update will continue to work toward our long-standing goal of making Kin the best personal AI at understanding time in conversations - something we’ve explained in more depth in previous articles.

More on this when the next stage of the update rolls out!

Journaling, refined by you yet again 📓

Similarly, Journaling also makes another appearance as we continue to re-work it according to your feedback. Guided daily and weekly Journals will help you track your progress, more visible streak counts will help keep you involved, and a new prompting system will help entries feel more insightful. You’ll hear more about exactly what’s changing once we’ve released some of it.

Start a conversation 💬

I know this reminder is in every newsletter - but that’s because it’s integral to Kin.

Kin is built for you, with your ideas. So, your feedback is essential to helping us know whether we’re making things the way you like them.

The KIN team is always around at hello@mykin.ai for anything, from feature feedback to a bit of AI discussion (though support queries will be better helped over at support@mykin.ai).

To get more stuck in, the official Kin Discord is still the best place to interact with the Kin development team (as well as other users) about anything AI.

We have dedicated channels for Kin’s tech, networking users, sharing support tips, and for hanging out.

We also regularly run three casual calls every week - you’re welcome to join:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

Kin is yours, not ours. Help us build something you love!

Finally, you can also share your feedback in-app. Just screenshot to trigger the feedback form.

Our current reads 📚

Article: OpenAI admits to forcibly switching subscribers away from GPT-4 and GPT-5 models in some situations
READ - techradar.com

Article: San Diego State University launches first AI responsibility degree in California
READ - San Diego State University

Article: Australia’s healthcare system adopting AI tools
READ - The Guardian

Article: California’s AI laws could balance innovation and regulation
READ - techcrunch.com

This edition’s super prompt 🤖

This week, your Kin will help you answer the question:

“How can I better prepare for change?”

If you have Kin installed and up to date, you can tap the link below (on mobile!) to explore how you think about change, and how you can better prepare for it.

As a reminder, you can do this on both iOS and Android.

Open prompt in Kin

We build Kin together 🤝

If you only ever take one thing away from these emails, it should be that you have as much say in Kin as we do (if not more).

So, please chat in our Discord, email us, or even just shake the app to get in contact with anything and everything you have to say about Kin.

With love,

The KIN Team

Sunday, 12. October 2025

Ockam

When You Run Out of Things to Say


The 6-month plan to go from zero to traction (with weekly tasks you can start today)

Continue reading on Medium »


Dock

Why The US Won’t Allow “Phone Home” Digital IDs


In our recent live podcast, Richard Esplin (Dock Labs) sat down with Andrew Hughes (VP of Global Standards, FaceTec) and Ryan Williams (Program Manager of Digital Credentialing, AAMVA) to unpack the new ISO standards for mobile driver’s licenses (mDLs).

One topic dominated the discussion: server retrieval.

Saturday, 11. October 2025

Ockam

The Gap Between Knowing and Doing


Why Knowledge Without Action Is Just Expensive Entertainment

Continue reading on Medium »

Thursday, 07. August 2025

Radiant Logic

Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity

Discover how Radiant Logic’s SCIMv2 support simplifies identity management, enabling seamless automation, governance, and Zero Trust alignment across hybrid environments. The post Radiant Logic’s SCIM Support Recognized in 2025 Gartner® Hype Cycle™ for Digital Identity appeared first on Radiant Logic.

California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic

California’s AB 869 Zero-Trust mandate demands unified, real-time identity data, and Radiant Logic’s platform provides the foundation to ensure smarter security and seamless compliance. The post California’s Countdown to Zero Trust—A Practical Path Through Radiant Logic appeared first on Radiant Logic.

AI for Access Administration: From Promise to Practice

Streamline access reviews and boost compliance with Radiant Logic’s AIDA AI—an assistant that transforms cumbersome reviews into quick, confident decisions for modern identity governance. The post AI for Access Administration: From Promise to Practice appeared first on Radiant Logic.

Gartner Recognizes Radiant Logic as Leader in Identity Visibility and Intelligence Platforms

Explore why Gartner sees Identity Visibility and Intelligence Platforms as critical for reducing risk and accelerating digital transformation with real-time observability and unified identity data. The post Gartner Recognizes Radiant Logic as Leader in Identity Visibility and Intelligence Platforms appeared first on Radiant Logic.

Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust

Discover why unified, accurate identity data is now at the heart of Zero Trust mandates and how organizations can overcome real-world barriers to implementation. The post Gartner® Recognizes Radiant Logic in the 2025 Hype Cycle™ for Zero Trust appeared first on Radiant Logic.

Identity: The Lifeline of Modern Healthcare

Discover how transforming identity management from a bottleneck into a secure, unified foundation can accelerate care delivery and protect healthcare organizations from mounting cyber threats. The post Identity: The Lifeline of Modern Healthcare appeared first on Radiant Logic.

Ockam

When Brands Break the Rules


The Art of Unconventional Advertising That Actually Works

Continue reading on Clubwritter »


HYPR

It’s a Partnership, Not a Handoff: Doug McLaughlin on Navigating Enterprise Change

The journey from a signed contract to a fully deployed security solution is one of the most challenging in enterprise technology. For a mission-critical function like identity, the stakes are even higher. It requires more than just great technology; it demands a true partnership to drive change across massive, complex organizations.


I sat down with HYPR’s SVP of Worldwide Sales, Doug McLaughlin, to discuss what it really takes to get from the initial sale to the finish line, and how HYPR works with customers to manage the complexities of procurement, organizational buy-in, and full-scale deployment for millions of users.

Let’s talk about the initial hurdles – procurement and legal. These processes can stall even the most enthusiastic projects. How do you get across that initial finish line?

Doug: By the time you get to procurement and legal, the business and security champions should be convinced of the solution's value. These teams aren't there to re-evaluate whether the solution is needed; they're there to vet who is providing it and under what terms. The biggest mistake you can make is treating them like a final sales gate.

Our approach is to be radically transparent and prepared. We have our security certifications, compliance documentation, and legal frameworks ready to go well in advance. We’ve already proven the business value and ROI to our champions, who then become our advocates in those internal procurement meetings. It’s about making their job as easy as possible. When you’ve built a strong, trust-based relationship across the organization, procurement becomes a process to manage efficiently, not an obstacle to overcome. The contract signature is less the "end" and more the "official beginning" of the real work.

You’ve navigated some of the largest passwordless deployments in history. Many people think the deal is done when the contract is signed. What’s the biggest misconception about that moment?

Doug: The biggest misconception is that the signature is the finish line. In reality, it’s the starting gun. For us, that contract isn’t an endpoint; it’s a formal commitment to a partnership. You've just earned the right to help the customer begin the real work of transformation.

In these large-scale projects, especially at global financial institutions or manufacturing giants, you’re not just installing software. You’re fundamentally changing a core business process that can touch every single employee, partner, and sometimes even their customers. If you view that as a simple handoff to a deployment team, you're setting yourself up for failure. The trust you built during the sales cycle is the foundation you need for the change management journey ahead.

When you’re dealing with a global corporation, you have IT, security, legal, procurement, and business units all with their own priorities. How do you start building the consensus needed for a successful rollout?

Doug: You have to build a coalition, and you do that by speaking the language of each stakeholder. I remember working with a major global bank. Their security team was our initial champion; they immediately saw how passkeys would eliminate phishing risk and secure their high-value transactions. But one of the key stakeholders was wary. Their primary concern was a potential surge in help desk calls during the transition, which would blow up their budget.

Instead of just talking about security with them, we shifted the conversation entirely and early. We presented the case study from another financial services deployment showing a 70-80% reduction in password-related help desk tickets within six months of rollout. We framed the project not as a security mandate, but as an operational efficiency initiative that would free up the team's time.

We connected the dots for them. Security got their risk reduction. IT saw a path to lower operational costs. The business leaders saw a faster, more productive login experience for their bankers. When each department saw its specific problem being solved, they became a unified force pushing the project forward. That's how you turn individual stakeholders into a powerful coalition.

That leads to the user. How do you get hundreds of thousands of employees at a global company to embrace a new way of signing in?

Doug: You can’t force change on people; you have to make them want it. A great example is a Fortune 500 manufacturing company we worked with. They had an incredibly diverse workforce, from corporate executives on laptops to factory floor workers using shared kiosks and tablets. Compounding this further, employees spanned the globe, from the US to China to LatAm and beyond. Let’s face it, a single, top-down email mandate was never going to work.

We partnered with them to create a phased rollout that respected these different user groups. For the factory floor, we focused on speed. The message was simple: "Clock in faster, start your shift faster." We trained the shift supervisors to be the local experts and put up simple, visual posters near the kiosks.

For the corporate employees, we focused on convenience and security, highlighting the ability to log in from anywhere without typing a password. We identified influential employees in different departments to be part of a pilot program. Within weeks, these "champions" were talking about how much easier their sign-in experience was. That word-of-mouth was more powerful than any corporate memo. The goal is to make the new way so demonstrably better that people are actively asking when it's their turn. That’s when adoption pulls itself forward.

Looking back at these massive, multi-year deployments, what defines a truly "successful" partnership for you?

Doug: Success isn’t the go-live announcement. It's six months later when the CISO tells you their help desk calls are down 70%. It's when an employee from a branch in Singapore sends unsolicited feedback about how much they love the new login experience. It’s when the customer’s security team stops seeing you as a vendor and starts calling you for advice on their entire identity strategy.

That's the real finish line. It's when the change has stuck, the value is being realized every day, and you’ve built a foundation of trust that you can continue to build on for years to come.

What's the biggest topic that keeps coming up in your customer conversations these days?

Doug: I'm having a lot of fun clarifying the difference between simply checking a document and actually verifying a person's identity. Many companies believe that if they scan a driver's license, they're secure. But I always ask, "Okay, that tells you the document is probably real, but how do you really know who's holding it?" That question changes everything. Between the rise of AI-generated fakes and the simple reality that people lose their wallets, relying on a single document is incredibly fragile. The last thing you want is your top employee stranded and locked out of their accounts because their license is missing.

I move the conversation to a multi-factor approach. We check the document, yes, but then we use biometrics to bind it to the live person in front of the camera, and then we cross-reference that against another trusted signal, like the phone they already use to sign in. It gives you true assurance that the right person is there. More importantly, it provides multiple paths so your employees are never left helpless. It’s about building a resilient system that’s both more secure and more practical for your people.

Bonus question! What’s one piece of advice you’d give to someone just starting to manage these complex sales and deployment cycles?

Doug: Get obsessed with your customer's business, not your product. Understand what keeps their executives up at night, what their biggest operational headaches are, and what their long-term goals are. If you can authentically map your solution to solving those core problems, you stop being a salesperson and start being a strategic partner. Everything else follows from that.

Thanks for the insights, Doug. It’s clear that partnership is the key ingredient to success!


This week in identity

E63 - Are Identity Platforms Legacy? The Rise of Identity Information Flows


Keywords

PAM, IGA, CyberArk, Palo Alto, identity security, AI, machine identity, cybersecurity, information flows, behavioral analysis


Summary


In this episode of the Analyst Brief Podcast, Simon Moffatt and David Mahdi discuss the significant changes in the cybersecurity landscape, particularly focusing on Privileged Access Management (PAM) and Identity Governance and Administration (IGA). They explore the recent acquisition of CyberArk by Palo Alto, the evolution of identity security, and the convergence of various identity management solutions.

The conversation highlights the importance of information flows, and the need for a mindset shift in the industry to effectively address identity security challenges.


Takeaways


The cybersecurity landscape is rapidly changing due to AI.
PAM and IGA are evolving but remain siloed.
The acquisition of CyberArk by Palo Alto signifies a shift in identity security.
Organizations struggle with integrating disparate identity technologies.
Behavioral analysis is crucial for identifying security threats.
AI will play a significant role in optimizing identity security.
Defensive acquisitions are common in the cybersecurity industry.
The future of identity security relies on understanding information flows.


Chapters


00:00 Welcome Back and Industry Changes

02:01 The Evolution of Privileged Access Management (PAM)

10:41 The Convergence of Cybersecurity and Identity

16:13 The Future of Identity Management Platforms

24:23 Understanding Information Flows in Cybersecurity

28:12 The Role of AI in Identity Management

33:42 Navigating Mergers and Acquisitions in Tech

39:50 The Future of Identity Security and AI Integration



Tokeny Solutions

Are markets ready for tokenised stocks’ global impact?

September 2025

Nasdaq has filed with the SEC to tokenise every listed stock by 2026. If approved, this would be the first time tokenised securities trade on a major U.S. exchange, a milestone that could transform global capital markets. Under the proposal, investors will be able to choose whether to settle their trades in traditional digital form or in tokenised blockchain form.

More and more firms are tokenising stocks, and the implications are potentially huge:

24/7 trading of tokenised equities
Instant settlement
Programmable ownership
Full shareholder rights, identical to traditional shares

This is a large overhaul of market infrastructure. Sounds great, but the reality is much more complex.

How to tokenise stocks?

Tokenised stocks today can be structured in several ways, including:

Indirect tokenisation: The issuer raises money via the issuance of a financial instrument different from the stocks, typically a debt instrument (e.g. a bond or note), and buys the underlying stocks with the raised funds. The tokens may either be the financial instrument itself or represent a claim on that financial instrument. The token does not grant investors direct ownership of the underlying stock, but it is simple to launch.

Direct tokenisation: Stocks are tokenised directly at the stock company level, preserving voting, dividend, and reporting rights. This method tends to be more difficult to implement due to legal and infrastructure requirements.

Both structures have their benefits and drawbacks. The real issue, however, is how the tokens are managed post-issuance.

Permissionless vs permissioned tokens

While choosing a structure for tokenised stocks is important, the true success of tokenisation depends on whether the tokens are controlled or free to move, because this determines compliance, investor protection, and ultimately whether the market can scale safely.

Permissionless: Tokens can move freely on-chain after issuance. Token holders gain economic exposure, but not shareholder rights. Secondary market trading is not controlled, creating compliance risks, and the legitimate owner of the security is not always clear.

Permissioned: Compliance and eligibility are enforced at every stage, embedding rules directly into the token. Crucially, permissioned tokens also guarantee investor safety by making ownership legally visible in the issuer’s register. For issuers, this model also fulfils their legal obligation to know who their investors are at all times. Transfers to non-eligible wallets are blocked, maintaining regulatory safety while preserving trust.

While permissionless tokens may be quicker to launch, they carry significant legal risks, weaken investor trust, and fragment growth. By contrast, permissioned tokens should be considered as the only sustainable approach to tokenising stocks, because they combine compliance, investor protection, and long-term scalability.

The right way forward – compliance at the token level

Nasdaq’s SEC filing shows the path to do this right. Tokenised stocks will only succeed if eligibility and compliance are enforced in both issuance and secondary trading.

That’s where open-source standards like ERC-3643 come in:

Automated compliance baked in: Rules are enforced automatically at the protocol level, not manually after the fact
Eligibility checks: Only approved investors can hold the asset, enabling efficient ownership tracking
Controlled transfers: Tokens cannot be sent to non-eligible investors, even in the secondary market
Auditability: Every transaction can be monitored in real time, ensuring trust with regulators

This is how tokenised stocks can operate safely at scale, with compliance embedded directly into the digital infrastructure, no matter if it’s through direct or indirect tokenisation. This provides safety at scale, unlocked liquidity, efficiency, and regulatory alignment.
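The compliance-gate idea above can be sketched in a few lines of Python. This is a simplified illustration of the concept behind permissioned-token standards like ERC-3643, not the standard's actual Solidity interfaces; all class and wallet names here are hypothetical.

```python
# Illustrative sketch of a permissioned-token transfer check: an
# identity registry tracks eligible investors, and the compliance
# rule travels with the token, so transfers to non-eligible wallets
# fail even on the secondary market. Names are hypothetical.

class IdentityRegistry:
    """Tracks which wallets belong to verified, eligible investors."""
    def __init__(self):
        self._eligible = set()

    def register(self, wallet: str) -> None:
        self._eligible.add(wallet)

    def is_eligible(self, wallet: str) -> bool:
        return wallet in self._eligible


class PermissionedToken:
    def __init__(self, registry: IdentityRegistry):
        self.registry = registry
        self.balances = {}

    def mint(self, wallet: str, amount: int) -> None:
        if not self.registry.is_eligible(wallet):
            raise PermissionError("issuance to unverified wallet blocked")
        self.balances[wallet] = self.balances.get(wallet, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        # Eligibility is checked on every transfer, not only at issuance.
        if not self.registry.is_eligible(receiver):
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True


registry = IdentityRegistry()
registry.register("alice")
registry.register("bob")

token = PermissionedToken(registry)
token.mint("alice", 100)

print(token.transfer("alice", "bob", 40))      # eligible receiver -> True
print(token.transfer("alice", "mallory", 10))  # unverified wallet -> False
```

The design choice this illustrates: because the eligibility check lives inside the transfer function itself, no secondary-market venue can route the asset to an unverified holder.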

Why this matters now

Investor demand for tokenised assets is surging. Global banks are exploring issuance, Coinbase has sought approval, and now Nasdaq is moving ahead under the SEC’s umbrella. Tokenisation will be at the core of financial markets.

But shortcuts built on permissionless, freely transferable tokens will only invite regulatory backlash, slowing innovation and preventing the market from scaling.

The future of tokenised shares will be built on:

Carrying full shareholder rights and guaranteeing ownership Automatic, enforced compliance on every trade Integrating directly into existing market infrastructure

That is what true tokenisation means, not synthetic exposure, but embedding the rules of finance into the share itself.

We believe this is the turning point. Nasdaq’s move validates what we’ve been building toward: a global financial system where tokenisation unlocks liquidity, efficiency, and access, not at the expense of compliance, but because of it.

The race is on. The winners won’t be those who move fastest, but those who build markets that are trusted, compliant, and scalable from day one.

Tokeny Spotlight

Annual team building

We head to Valencia for our annual offsite team building. A fantastic time filled with great memories.

Read More

Token2049

Our CEO and Head of Product for Apex Digital Assets, and CBO, head to Singapore for Token2049

Read More

New eBook

Global payments reimagined. Download to learn what’s driving the rapid rise of digital currencies.

Read More

RWA tokenisation report

We are proud to have contributed to the newly released RWA Report published by Venturebloxx.

Read More

SALT Wyoming

Our CCO and Global Head of Digital Assets at Apex Group, Daniel Coheur, discusses Blockchain Onramps at SALT.

Read More

We test SilentData’s privacy

Their technology explores how programmable privacy allows for secure and compliant RWA tokenisation.

Read More

Tokeny Events

Token2049 Singapore
October 1st-2nd, 2025 | 🇸🇬 Singapore

Register Now

Digital Assets Week London
October 8th-10th, 2025 | 🇬🇧 United Kingdom

Register Now

ALFI London Conference
October 15th, 2025 | 🇬🇧 United Kingdom

Register Now

RWA Singapore Summit
October 2nd, 2025 | 🇸🇬 Singapore

Register Now

Hedgeweek Funds of the Future US 2025
October 9th, 2025 | 🇺🇸 United States of America

Register Now

ERC3643 Association Recap

ERC-3643 is recognized in Animoca Brands Research’s latest report on tokenised real-world assets (RWAs).

The report highlights ERC-3643 as a positive step for permissioned token standards, built to solve the exact compliance and interoperability challenges holding the market back.

Read the story here

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.


The post Are markets ready for tokenised stocks’ global impact? appeared first on Tokeny.


auth0

Is Your Business Ready for AI Agents? The Ultimate AI Security Checklist for Customer Identity

Assess your business's AI Agent readiness. Use this checklist to master the unique AI security challenges autonomous agents pose to customer identity and data access.

Thursday, 09. October 2025

Spruce Systems

Why Digital Identity Frameworks Should Be Public Infrastructure

Digital identity is essential infrastructure, and it deserves the same level of public investment, oversight, and trust as other core systems like roads or utilities.

Most people think of digital identity as a mobile driver’s license or app on their phone. But identity isn’t just a credential, it’s infrastructure. Like roads, broadband, or electricity, digital identity frameworks must be built, governed, and funded as public goods.

Today, the lack of a unified identity system fuels fraud, inefficiency, and distrust.  In 2023, the U.S. recorded 3,205 data breaches affecting 353 million people, and the Federal Trade Commission reported $12.5 billion in fraud losses, much of it rooted in identity theft and benefit scams.

These aren’t isolated incidents but symptoms of fragmentation: every agency and organization maintaining its own version of identity, duplicating effort, increasing breach risk, and eroding public trust.

We argue that identity should serve as public infrastructure: a government-backed framework that lets residents prove who they are securely and privately, across contexts, without unnecessary data collection or centralization. Rather than a single product or app, this framework can represent a durable set of technical and statutory controls built to foster long-term trust, protect privacy, and ensure interoperability and individual control.

From Projects to Public Infrastructure

Governments often launch identity initiatives as short-term projects: a credential pilot, a custom-built app, or a single-agency deployment. While these efforts may deliver immediate results, they rarely provide the interoperability, security, or adoption needed for a sustainable identity ecosystem. Treating digital identity as infrastructure avoids these pitfalls by establishing common rails that multiple programs, agencies, and providers can build upon.

A better approach is to adopt a framework model, where digital identity isn’t defined by a single product or format but by adherence to a shared set of technical and policy requirements. These requirements, such as selective disclosure, minimal data retention, and individual control, can apply across many credential types, from driver’s licenses and professional certifications to benefit eligibility and guardianship documentation.

This enables credentials to be iterated and expanded on thoughtfully: credentials can be introduced one at a time, upgraded as standards evolve, and tailored to specific use cases while maintaining consistency in protections and interoperability.

Enforcing Privacy Through Law and Code

Foundational privacy principles such as consent, data minimization, and unlinkability must be enforced by technology, not just policy documents. Digital identity systems should make privacy the default posture, using features (depending on the type of credential) such as:

Selective disclosure (such as proving “over 21” without showing a birthdate)
Hardware-based device binding
Cryptographically verifiable digital credentials with offline presentation
Architectures that avoid exposing user metadata during verification

By embedding security, privacy, and interoperability directly into the architecture, identity systems move beyond compliance and toward real-world protection for residents. These are not optional features, they are statutory expectations brought to life through secure protocols.
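The selective-disclosure mechanism mentioned above can be sketched in Python. This is a toy illustration of the idea behind formats like SD-JWT, where the issuer signs salted hashes of individual claims so the holder can reveal one claim without exposing the others; it is not the actual SD-JWT wire format, and the HMAC stands in for the asymmetric signature a real issuer would use.

```python
# Toy sketch of selective disclosure: the issuer commits to each claim
# behind a salted hash and signs only the hashes. The holder can then
# disclose "over_21" alone, and the verifier confirms it against the
# signed digests without ever seeing the birthdate. Simplified; real
# systems (e.g. SD-JWT) use public-key signatures, not a shared HMAC key.
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"demo-issuer-signing-key"  # hypothetical shared key for illustration

def digest(salt: str, name: str, value) -> str:
    return hashlib.sha256(f"{salt}|{name}|{json.dumps(value)}".encode()).hexdigest()

# Issuer: hash each claim with a fresh salt, sign the list of hashes.
claims = {"name": "Alice Example", "birthdate": "1999-04-02", "over_21": True}
salts = {k: secrets.token_hex(8) for k in claims}
signed_digests = sorted(digest(salts[k], k, v) for k, v in claims.items())
signature = hmac.new(ISSUER_KEY, json.dumps(signed_digests).encode(), "sha256").hexdigest()

# Holder: disclose only the "over_21" claim (its salt, name, and value).
disclosure = {"salt": salts["over_21"], "name": "over_21", "value": True}

# Verifier: check the signature over the digests, then check that the
# disclosed claim's recomputed hash appears among them.
ok_sig = hmac.compare_digest(
    signature,
    hmac.new(ISSUER_KEY, json.dumps(signed_digests).encode(), "sha256").hexdigest(),
)
ok_claim = digest(disclosure["salt"], disclosure["name"], disclosure["value"]) in signed_digests
print(ok_sig and ok_claim)  # True: "over 21" proven, birthdate never revealed
```

The per-claim salt matters: without it, a verifier could brute-force a short value like a birthdate from its hash alone.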

Open Standards, Broad Interoperability

Public infrastructure should allow for vendor choice and competitive markets that foster innovation. That’s why modern identity systems should be built on open, freely implementable standards, such as ISO/IEC 18013-5/7, OpenID for Verifiable Presentations (OID4VP), W3C Verifiable Credentials, and IETF SD-JWTs.

These standards allow credentials to be portable across wallet providers and verifiable in both public and private sector contexts, from airports and financial institutions to universities and healthcare. Multi-format issuance ensures credentials are accepted in the widest range of transactions, without compromising on core privacy requirements.

A clear certification framework covering wallets, issuers, and verifiers can ensure compliance with these standards through independent testing, while maintaining flexibility for providers to innovate. Transparent certification also builds trust and ensures accountability at every layer of the ecosystem.

Governance Leads, Industry Builds

Treating digital identity as infrastructure doesn’t mean the public sector has to (or even should) build everything. It means the public sector must set the rules, defining minimum standards, overseeing compliance, and ensuring vendor neutrality.

Wallet providers, credential issuers, and verifiers can all operate within a certified framework if they meet established criteria for security, privacy, interoperability, and user control. Governments can maintain legal authority and oversight while encouraging healthy private-sector competition and innovation.

This governance-first approach creates a marketplace that respects rights, lowers risk, and is solvent. Agencies retain procurement flexibility, while residents benefit from tools that align with their expectations for usability and safety.

Why This Matters

Digital identity is the entry point to essential services: healthcare, education, housing, employment, and more. If it’s designed poorly, it can become fragmented, invasive, or exclusionary. But if it’s designed as infrastructure with strong governance and enforceable protections, it becomes a foundation for inclusion, trust, and public value.

Well-governed digital identity infrastructure enables systems that are:

Interoperable across jurisdictions and sectors
Private by design, not retrofitted later
Transparent, with open standards and auditability
Resilient, avoiding lock-in and enabling long-term evolution

Most importantly, it is trustworthy for residents, not just functional.

A Foundation for the Future

Public infrastructure requires alignment between law, technology, and market design. With identity, that means enforcing privacy in code, using open standards to drive adoption, and establishing certification programs that ensure accountability through independent validation without stifling innovation.

This is more than a modernization effort. It’s a transformation that ensures digital identity systems can grow, adapt, and serve the public for decades to come.

Ready to Build Trustworthy Digital ID Infrastructure?

SpruceID partners with governments to design and implement privacy-preserving digital identity systems that scale. Contact us to explore how we can help you build standards-aligned, future-ready identity infrastructure grounded in law, enforced by code, and trusted by residents.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ockam

The Psychology of Buying


How brand positioning impacts people’s behaviour

Continue reading on Medium »


Indicio

Your authentication dilemma: DIY or off-the-shelf decentralized identity?

The post Your authentication dilemma: DIY or off-the-shelf decentralized identity? appeared first on Indicio.
With the European Union mandating digital wallets by the end of 2026, and Verifiable Credentials offering new, powerful, and cost-effective ways to solve identity fraud and simplify operations, you may be thinking it’s time to embrace decentralized identity and build your own Verifiable Credential system. You’ve got a developer team, they understand security — so it couldn’t be that difficult, right?

By Helen Garneau

It’s tempting to do things yourself. But there’s a reason a professional painter will almost certainly do a better and quicker job at painting your house than you will. And, when you price how much time it would take you, there’s a good chance a professional will probably end up costing you less too.

The same logic applies to building decentralized identity systems with Verifiable Credentials.

If you have a talented team of engineers, it’s easy to think, “we’ve got this.” They understand security, they can code, and issuing and verifying a few credentials sounds simple enough.

But once you start digging into credential formats, protocols, interoperability, global standards, regulations, and governance, what seems like a quick project for a few developers quickly becomes a long, complex, and costly effort to build and maintain a secure, standards-compliant system.

How fast “We got this” turns into “Why did we do this?”

Decentralized identity makes data portable and cryptographically verifiable without the need for certificate authorities or centralized data management. Its vehicle is the Verifiable Credential, a way of sealing any kind of information in a digital container so that it cannot be altered and you can be certain of its origin.

If you trust the origin of the credential — say a passport office or a bank — you can trust that the information placed in the credential has not been altered. Verifiable Credentials are held in digital wallet apps and can be shared by the consent of the holder, whether a person or an organization, in privacy-preserving ways.

Verifiable Credentials are most commonly used to create instantly authenticatable versions of trusted documents, such as passports, driver’s licenses, but they can be created and held by devices for secure data sharing, or robots and AI agents, for authentication and permissioned data access.

The point of all this is that it transforms authentication, fraud prevention, privacy, security, and operational efficiency. You are able to remove usernames, passwords, centralized storage, and multi-factor authentication, and combine authentication and fraud prevention into a seamless, instant process.

A decentralized ecosystem consists of three parts: an issuer that creates and digitally signs the credential, a holder who keeps it in a digital wallet and presents it for authentication and access to resources, and a verifier or relying party that needs to authenticate the information presented for some purpose.
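The three roles described above can be sketched in miniature. This is an illustration of the issuer/holder/verifier flow, not Indicio's implementation; an HMAC stands in for the digital signature a real Verifiable Credential would carry, and all names and keys are hypothetical.

```python
# Minimal sketch of the three roles in a decentralized-identity
# ecosystem: an issuer seals claims into a credential, a holder keeps
# it in a wallet and presents it, and a verifier checks it has not
# been altered since issuance. HMAC substitutes for a real issuer's
# private-key signature; everything here is illustrative.
import hashlib
import hmac
import json

ISSUER_SECRET = b"passport-office-demo-key"  # hypothetical; real issuers sign with private keys

def issue_credential(claims: dict) -> dict:
    """Issuer: serialize the claims and attach a tamper-evident proof."""
    payload = json.dumps(claims, sort_keys=True)
    proof = hmac.new(ISSUER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Verifier: recompute the proof and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True)
    expected = hmac.new(ISSUER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

# Holder: store the issued credential in a wallet and present it later.
wallet = [issue_credential({"name": "Alice", "passport_no": "X123", "citizenship": "LU"})]

# The untampered credential verifies...
print(verify_credential(wallet[0]))  # True

# ...while one whose claims were altered after issuance is rejected.
forged = {"claims": {**wallet[0]["claims"], "name": "Mallory"}, "proof": wallet[0]["proof"]}
print(verify_credential(forged))  # False
```

Because trust rests on the issuer's signature rather than a lookup in a central database, the verifier never needs a live connection to the issuer to authenticate the data.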

When building an ecosystem for a use case — say systems account access — here’s what you need to consider: There are, presently, three major credential formats, each with differing communications protocols. They’ve got to interoperate with each other and across different digital wallets according to whatever standards you want to align with. Which are you going to pick?

Then, you need to get them into people’s wallets. Which wallet? An existing one or do you need an SDK?

If you want to verify credentials, you should be able to verify thousands — perhaps tens of thousands — simultaneously. How do you do this when mobile devices don’t have fixed IP addresses? How are you going to establish offline verification? And how are you going to establish governance so that participants know who is a trusted issuer of a credential?

This is just a basic implementation — a foundation to build the kind of solutions the market wants. Are you also prepared to then develop integrated payments, integrated biometrics, digital travel credentials, document validation, and identity and delegated authority for AI agents and robots? You’d better be, because that’s where the market is now.

There’s a reason Indicio was the first (and still the only) company to launch a complete, off-the-shelf solution for implementing Verifiable Credentials in both the Amazon and Google Cloud Marketplaces: We built a team composed of pioneers and leaders in decentralized identity, engineers and developers deeply engaged with the open source codebases and communities that have shaped this technology. They live and breathe this stuff every day. And even so, it still took years to build an interoperable, multi-credential, multi-protocol system that can scale to country-level deployments.

If your team isn’t already familiar with the open-source codebases and the evolving international specifications and standards, how are they going to deliver in a realistic time frame at an acceptable cost?

The probability that your team is going to do all that we did in six months is… low.

The likelihood that they will end up blowing through a lot of your budget attempting to do this is… high.

Interoperability — everyone expects it

No one is going to buy a proprietary, siloed system. Decentralized identity is an architecture for data sharing and integrating markets into functioning ecosystems; if your solution can’t do this, can’t interoperate or scale, it’s missing out on key features that drive business growth. Sure, you may want to start by securing your SSO with a Verifiable Credential, but why limit the power of verification?

For example, one of the key failures of the mobile driver’s license (mDL) in the U.S. is that so many implementations failed to make verification open to other parties. Think of all the ways an mDL could be used to prove age or identity. A digital identity that’s locked into a narrow use case and proprietary verification is a wasted opportunity, not least because verification can be monetized (Indicio’s mDL is easily verifiable anywhere).

To make a system work with the rest of the world, it has to speak the relevant languages. That means following multiple standards and protocols that define how credentials are created, stored, and exchanged and, depending on what your needs are, for whatever specific credential workflow you want to deploy, keeping up with some or all of the following:

W3C Verifiable Credential Data Model (VCDM) — defines how credentials are structured and signed.

ISO/IEC 18013-5 and ICAO DTC — govern mobile driver’s licenses (mDL) and Digital Travel Credentials, ensuring global interoperability across borders and transport systems.

DIDComm and DID methods — specify how secure, peer-to-peer communication and decentralized identifiers work.

OpenID for Verifiable Credentials (OID4VC and OID4VP) — bridges decentralized identity with mainstream authentication systems like OAuth and OpenID Connect.

Each of these comes with its own working groups, test suites, and compliance updates. Building your own system means keeping pace with all of them and making sure your implementation doesn’t break every time a standard changes.
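As a concrete reference point for the first of those standards, a minimal credential shaped per the W3C VC Data Model looks roughly like this; the issuer DID, dates, subject fields, and proof value are illustrative placeholders, and the exact proof shape depends on the signature suite used.

```python
import json

# A minimal credential shaped per the W3C VC Data Model.
# All concrete values below are illustrative placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",
    "issuanceDate": "2025-10-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",
        "degree": "Bachelor of Science",
    },
    # The proof block carries the issuer's signature; its exact fields
    # vary with the chosen signature suite.
    "proof": {"type": "Ed25519Signature2020", "proofValue": "..."},
}

print(json.dumps(credential, indent=2))
```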

With off-the-shelf, you implement in days

Indicio Proven® eliminates the DIY risk. You have a way to start implementing a POC in days, pilot in weeks, launch in months. We’ve spent years doing the heavy lifting so you don’t have to. It’s the mature, field-tested Verifiable Credential infrastructure that governments, airports, and financial institutions already use.

Instead of building from scratch, you have everything you need to start building a solution, a product, or a service so your team is free to focus on things that make you money.

Indicio Proven can already handle country-level deployments and multi-credential workflows. It has been DPIA’d for GDPR. It comes with document validation and biometric authentication, a white-label digital wallet if you need one, and a mobile SDK to add Verifiable Credentials to your apps. We’ve already mastered:

Multiple credential formats (AnonCreds, SD-JWT VC, JSON-LD, mdoc/mDL)
DIDComm and OID4VC/OID4VP communications protocols
Digital Travel Credentials aligned with ICAO DTC-1 and DTC-2 specifications
Decentralized ecosystem governance
Hosting on premise, in the cloud, or as a SaaS product
A global, enterprise-grade blockchain-based distributed ledger for anchoring credentials
Certified training in every aspect of decentralized identity
Support packages
Continuous updates

In one package, you get everything you need to build, deploy, and stay current with evolving standards, so your team doesn’t have to chase every update.

Deploy with confidence

There’s no shame in DIY, but for Verifiable Credentials, the smarter move is to build on top of something that already works. Indicio does the heavy lifting so you can focus on what matters: using trusted digital identity to deliver value to your users. A Verifiable Credential system should give you trust, not technical debt.

In short: don’t reinvent the tech. Build with what’s already proven.

Want to do it right the first time? Let’s talk.

The post Your authentication dilemma: DIY or off-the-shelf decentralized identity? appeared first on Indicio.


Dock

What We Learned Showing Digital IDs for Local Government


In a recent client call, we were asked whether our platform could help a local government issue digital IDs. 

To answer that, Richard Esplin (Head of Product) put together a live demo.

Instead of complex architectures or long timelines, he showed how a city could issue a digital residency credential and use it instantly across departments, from getting a library card to scheduling trash pickup.

The front end for the proof-of-concept was spun up in an afternoon with an AI code generator. 

Behind the scenes, we handled verifiable credential issuance, verification, selective disclosure, revocation, and ecosystem governance, proving that governments can move from paper processes to reusable, privacy-preserving digital IDs in days, not months.


From ID uploads to VPN downloads: The UK’s digital rebellion


The UK's Online Safety Act triggered a staggering 1,800% surge in VPN signups within days of implementation.

The UK’s Online Safety Act was introduced to make the internet “safer,” especially for children. It forces websites and platforms to implement strict age verification measures for adult and “harmful” content, often requiring users to upload government IDs, credit cards, or even biometric scans.

While the goal is protection, the method feels intrusive. 

Suddenly, every UK citizen is being asked to share sensitive identity data with third-party verification companies just to access certain sites.

The public response was immediate. 

Within days of implementation, the UK saw a staggering 1,800% surge in VPN signups. 

ProtonVPN jumped to the #1 app in the UK App Store. NordVPN reported a 1,000% surge. In fact, four of the top five free iOS apps in the UK were VPNs. 

Millions of people literally paid to preserve their privacy rather than comply.

This backlash reveals a fundamental flaw in how age verification was implemented.

People are rejecting what they perceive to be privacy-invasive ID uploads. They don’t want to hand over passports, driver’s licenses, or facial scans just to browse.

Can we blame them?

The problem isn’t age verification itself. The problem is the method, which pushes people to circumvent the rules with VPNs or even fake data.

But here’s the thing: we already have better options.

Government-issued digital IDs already exist. Zero-knowledge proofs let you prove you’re 18+ without revealing who you are. Verifiable credentials combine reliability (government-backed trust) with privacy by design.

With this model, the website never sees your personal data. 

The check is still secure, government-backed, and reliable, without creating surveillance or new honeypots of sensitive data.
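A rough sketch of the idea, assuming a salted-hash commitment scheme: the holder reveals only the age predicate, nothing else. Real deployments use SD-JWT selective disclosure or zero-knowledge proof systems such as BBS+; this only illustrates the "reveal one claim, hide the rest" principle, and the issuer's signature over the commitments is omitted.

```python
import hashlib
import secrets

def commit(name: str, value: str, salt: str) -> str:
    """Salted hash commitment to a single claim."""
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# Issuer: commits to every claim (and would sign the commitments).
claims = {"name": "Alice", "dob": "2001-04-02", "over_18": "true"}
salts = {k: secrets.token_hex(16) for k in claims}
commitments = {k: commit(k, v, salts[k]) for k, v in claims.items()}

# Holder: discloses only the age predicate, never name or date of birth.
disclosure = ("over_18", "true", salts["over_18"])

# Verifier: checks the disclosed claim against the issuer's commitment.
name, value, salt = disclosure
assert commit(name, value, salt) == commitments["over_18"]
assert value == "true"  # age requirement met; no other data was shared
```

The website ends up with a yes/no answer backed by the issuer, not a copy of the passport.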

The VPN surge is proof that people value their digital privacy so much that they’ll pay for it.

If governments want compliance and safety, they need to meet people where they are: with solutions that respect privacy as much as protection.

The UK’s privacy backlash demonstrates exactly why verifiable ID credentials are the way forward. 

They can resolve public resistance while maintaining both effective age checks and digital rights.


Why Derived Credentials Are the Future of Digital ID


In our recent live podcast, Richard Esplin (Dock Labs) spoke with Andrew Hughes (VP of Global Standards, FaceTec) and Ryan Williams (Program Manager of Digital Credentialing, AAMVA) about the rollout of mobile driver’s licenses (mDLs) and what comes next.

One idea stood out: derived credentials.

mDLs are powerful because they bring government-issued identity into a digital format. 

But in practice, most verifiers don’t need everything on your driver’s license. 

A student bookstore doesn’t need your address; it only needs to know that you’re enrolled.

That’s where derived credentials come in. 

They allow you to take verified data from a root credential like an mDL and create purpose-specific credentials:

A student ID for campus services. An employee badge for workplace access. A travel pass or loyalty credential.

Andrew put it simply: if you don’t need to use the original credential with everything loaded into it, don’t. 

Ryan added that the real benefit is eliminating unnecessary personal data entirely, only passing on what’s relevant for the transaction.

Derived credentials also make it possible to combine data from multiple credentials into one, enabling new use cases. 

For example, a travel credential could draw on both a government-issued ID and a loyalty program credential, giving the verifier exactly what they need in a single, streamlined interaction.
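One way to picture that combination, as a minimal sketch: a derived credential carries only the fields the verifier needs, drawn from two root credentials. The field names are illustrative, and the re-signing step that real systems require is omitted.

```python
# Two hypothetical root credentials held in the wallet.
government_id = {"name": "Alice", "dob": "1990-01-01",
                 "address": "1 Main St", "document_no": "P1234567"}
loyalty_card = {"program": "SkyMiles", "member_no": "LM-889", "tier": "Gold"}

def derive(sources, fields):
    """Build a derived credential holding only the requested fields."""
    merged = {k: v for src in sources for k, v in src.items()}
    return {k: merged[k] for k in fields}

# A travel credential needs the verified name and loyalty tier, not the
# holder's address, date of birth, or document number.
travel_credential = derive([government_id, loyalty_card], ["name", "tier"])
assert travel_credential == {"name": "Alice", "tier": "Gold"}
```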

This approach flips the model of identity sharing. 

Instead of over-exposing sensitive details, derived credentials enable “less is more” identity verification: stronger assurance for the verifier, greater privacy for the user.

Looking ahead, Andrew revealed that the ISO 18013 Edition 2 will introduce support for revocation and zero-knowledge proofs, enhancements that will make derived credentials even more practical and privacy-preserving.

Bottom line: mDLs are an important foundation, but the everyday future of digital ID lies in derived credentials.


auth0

Auth0 Token Vault: Secure Token Exchange for AI Agents

Learn how Auth0 Token Vault uses OAuth 2.0 Token Exchange to provide secure, delegated access, letting AI agents act on a user's behalf without handling refresh tokens.
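For context, the OAuth 2.0 Token Exchange grant (RFC 8693) that this pattern builds on uses a request shaped roughly like the sketch below. The endpoint, token values, and audience are placeholders, and Auth0’s concrete parameters may differ; consult the Token Vault documentation.

```python
from urllib.parse import urlencode

# Form parameters for an RFC 8693 token-exchange request. The agent
# trades the user's access token for a short-lived token scoped to a
# downstream API, so it never handles a refresh token itself.
params = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<user-access-token>",          # placeholder
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://downstream-api.example.com",  # placeholder
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
}

# This body is POSTed to the authorization server's token endpoint.
body = urlencode(params)
print(body)
```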

Thales Group

Thales and StandardAero’s StableLight™ Autopilot chosen by leading helicopter operator Heli Austria

09 Oct 2025 | Civil Aviation | Austria

Heli Austria selects the StableLight 4-Axis Autopilot for its single-engine H125 helicopter
Next-gen safety & performance: cutting pilot workload and boosting mission capability
Proven FAA-certified system now entering EASA validation for European operators

Thales and StandardAero are pleased to announce the StableLight 4-Axis Autopilot system has been selected by Heli Austria, a leading European helicopter operator. The system is currently being installed on one of Heli Austria’s H125 helicopters at their facility in Sankt Johann im Pongau, Salzburg, Austria.

Based on Thales’s Compact Autopilot System, the StableLight 4-Axis Autopilot system combines several robust features into a lightweight system ideally suited for light category rotorcraft. The system transforms the flight control experience of the helicopter with its stability augmentation. Adding stabilized climb, flight attitude recovery, auto hover, and a wide range of other sophisticated features significantly decreases pilot workload. This enhances mission capability and can help to reduce risks in critical flight phases and adverse conditions such as inadvertent entry into Instrument Meteorological Conditions (IIMC). StableLight has a Supplemental Type Certificate (STC) from the US Federal Aviation Administration (FAA).

“Operational and pilot safety are very important to Heli Austria. We have been eagerly awaiting the opportunity to be the European launch customer of this proven product. The added safety features and reliability is a welcomed advantage to our pilots.” Roy Knaus, CEO, Heli Austria.
“At Thales, integrating cutting-edge technologies to deliver safety and trust is fundamental to who we are. By uniting Thales’s advanced expertise with StandardAero’s deep industry knowledge, we harness a powerful combination to provide Heli Austria’s pilots with the autopilot solution they have eagerly awaited.” Florent Chauvancy, Vice President Flight Avionics Activities, Thales.
“We are thrilled to be working with Heli Austria, a renowned operator in the European market. The adoption of our StableLight autopilot system demonstrates their commitment to safety and innovation. Once certified by EASA, European H125 operators will be able to reach a new level of safety and efficiency of helicopter operations with the StableLight system.” Andrew Park, General Manager, StandardAero.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies for the Defence, Aerospace, and Cyber & Digital sectors. Its portfolio of innovative products and services addresses several major challenges: sovereignty, security, sustainability and inclusion. The Group invests more than €4 billion per year in Research & Development in key areas, particularly for critical environments, such as Artificial Intelligence, cybersecurity, quantum and cloud technologies. Thales has more than 83,000 employees in 68 countries. In 2024, the Group generated sales of €20.6 billion.

About StandardAero

StandardAero is a leading independent pure-play provider of aerospace engine aftermarket services for fixed- and rotary-wing aircraft, serving the commercial, military and business aviation end markets. StandardAero provides a comprehensive suite of critical, value-added aftermarket solutions, including engine maintenance, repair and overhaul, engine component repair, on-wing and field service support, asset management and engineering solutions. StandardAero is an NYSE listed company, under the symbol SARO. For more information about StandardAero, go to www.standardaero.com.


Ocean Protocol

Ocean Protocol Foundation withdraws from the Artificial Superintelligence Alliance

$OCEAN can be de-pegged and re-listed on exchanges

Singapore, 9 October 2025

Effective immediately, Ocean Protocol Foundation has withdrawn its designated directors and resigned as a member from the Superintelligence Alliance (Singapore) Ltd, aka the “ASI Alliance”. The ASI Alliance was founded on voluntary association and collaboration to promote decentralized AI through a token merger.

Ocean has worked closely with the other members of the Alliance to seek technology integration, joint podcasts and run community events such as the Superintelligence Summit and ETHGlobal NYC hackathon in the past year.

Moving forward, funding for future Ocean development efforts is fully secured. A portion of profits from spin-outs of Ocean-derived technologies will be used to buy back and burn $OCEAN, permanently and continually reducing the $OCEAN supply.

Since July 2024, 81% of the $OCEAN token supply has been converted into $FET, yet 37,334 $OCEAN token holders, representing 270 million $OCEAN, have not yet converted to $FET on the existing $OCEAN token contract (0x967da … b9F48).

As independent economic actors, former $OCEAN holders can fully decide to continue to hold $FET or not.

At the time of this announcement, the token bridge, fully managed and controlled by Fetch.ai, remains open for $OCEAN holders to convert to $FET at the rate of 0.433226 $FET/$OCEAN.

Any exchange that has de-listed $OCEAN may assess whether they would like to re-list the $OCEAN token. Acquirers can currently exchange for $OCEAN on Coinbase, Kraken, UpBit, Binance US, Uniswap and SushiSwap.

Community questions can be sent to https://t.me/OceanProtocol_Community.

Press questions can be sent to inquiries@oceanprotocol.com.

Ocean Protocol Foundation withdraws from the Artificial Superintelligence Alliance was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


DF158 Completes and DF159 Launches

Predictoor DF158 rewards available. DF159 runs October 9th — October 16th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 158 (DF158) has completed.

DF159 is live today, October 9th. It concludes on October 16th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF159 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF159

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF158 Completes and DF159 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

How to Tame Varnish Memory Usage Safely

How Fastly turned a shelved Varnish idea into 25% fewer memory writes and real system-wide gains.

Wednesday, 08. October 2025

Ockam

The Art of Building in Public


Turn Your Journey Into Your Unfair Advantage

Continue reading on Medium »


liminal (was OWI)

Building Trust in Agentic Commerce

Would you let an AI agent spend your company’s quarterly budget, no questions asked? Most leaders I talk to aren’t there yet. Our research shows that only 8% of organizations are using AI agents in the long term, and the gap isn’t due to a lack of awareness. It’s trust.

If agentic AI is going to matter in e-commerce, we need guardrails that make it safe, compliant, and worth the operational risk. That is where authentication, authorization, and verification come in. Think identity, boundaries, and proof. Until teams can check those boxes with confidence, adoption will stall.

What is an AI agent, and why does it matter in e-commerce

At its simplest, an AI agent is software that can act on instructions without waiting for every step of human input. Instead of a static chatbot or recommendation engine, an agent can take context, make a decision, and carry out an action.

In e-commerce, that could mean:

Verifying a buyer’s identity before an agent executes a purchase on their behalf
Allowing an agent to issue refunds up to a set limit, but requiring human approval beyond that threshold
Confirming that an AI-driven order or promotion matches both customer intent and compliance rules before it goes live

The upside is clear: faster processes, lower manual overhead, and customer experiences that feel effortless. But the risk is just as clear. If an agent acts under the wrong identity, oversteps its boundaries, or produces outcomes that don’t match user intent, the impact is immediately evident in increased fraud losses, compliance failures, or customer churn.

That’s why the industry is focusing on three pillars: authentication, authorization, and verification. Without them, agentic commerce cannot scale.

The adoption gap

Analysts project autonomous agents will grow to $70B+ by 2030. Buyers want speed, automation, and scale, but customers are not fully on board. In fact, only 24% of consumers say they are comfortable letting AI complete a purchase on their own.

That consumer hesitation is the critical signal. Ship agentic commerce without shipping trust, and you don’t just risk adoption, you risk chargebacks, brand erosion, and an internal rollback before your pilot even scales.

What’s broken today

Three realities keep coming up in my conversations with product, fraud, and risk leaders:

Attack surface expansion. Synthetic identity and deepfakes raise the baseline risk. 71% of organizations say they lack the AI/ML depth to defend against these tactics.
Confidence is slipping. Trust in fully autonomous agents dropped from 43% to 27% in one year, even among tech-forward orgs.
Hype hurts. A meaningful share of agent projects will get scrapped by 2027 because teams cannot tie them to real value or reliable controls.

The regulatory lens makes this sharper. Under the new EU AI Act, autonomous systems are often treated as high-risk, requiring transparency, human oversight, and auditability. In the U.S., proposals like the Algorithmic Accountability Act and state laws such as the Colorado AI Act point in the same direction—demanding explainability, bias testing, and risk assessments. For buyers, that means security measures are not only best practice but a growing compliance requirement.

When I see this pattern, I look for the missing scaffolding. It is almost always the same three blanks: who is the agent, what can it do, and did it do the right thing.

The guardrails that matter

If you are evaluating solutions, anchor on these three categories. This is the difference between a flashy demo and something you can put in production.

Authentication

Prove the agent’s identity before you let it act. That means credentials for agents, not just users. It means attestation, issuance, rotation, and revocation. It means non-repudiation, so you can tie a transaction to a specific agent and key.

What to look for:

strong, verifiable agent identities and credentials
support for attestation, key management, rotation, and kill switches
logs that let you prove who initiated what, and when
Authorization

Set boundaries that are understood by both machines and auditors. Map policies to budgets, scopes, merchants, SKUs, and risk thresholds. Keep it explainable so a human can reason about the blast radius.

What to look for:

policy engines that accommodate granular scopes and spend limits
runtime constraints, approvals, and step-up controls
simulation and sandboxes to test policies before they go live
Verification

Trust but verify. Confirm that outcomes align to user intent, compliance, and business rules. You need evidence that holds up in a post-incident review.

Verification isn’t just operational hygiene. Under privacy rules like GDPR Article 22, individuals have a right to safeguards when automated systems make decisions about them. That means the ability to explain, evidence, and roll back agent actions is not optional.

What to look for:

transparent audit trails and readable explanations
outcome verification against explicit user directives
real-time anomaly detection and rollback paths

If a vendor cannot demonstrate these three pillars working together, you are buying a future incident.
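Treating each agent action like a high-risk API call, with one gate per pillar, can be sketched as follows. The keys, policy fields, and limits are hypothetical; a production system would use asymmetric agent credentials, a real policy engine, and durable audit storage.

```python
import hashlib
import hmac
import json
import time

AGENT_KEYS = {"agent-7": b"agent-7-secret"}  # hypothetical credential registry
POLICIES = {"agent-7": {"max_spend": 500.0, "merchants": {"acme"}}}
AUDIT_LOG = []  # verification trail for post-incident review

def execute(agent_id, action, signature):
    # 1. Authenticate: tie the request to a specific agent and key.
    payload = json.dumps(action, sort_keys=True).encode()
    expected = hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. Authorize: enforce spend limits and merchant scope.
    policy = POLICIES[agent_id]
    if action["amount"] > policy["max_spend"] or action["merchant"] not in policy["merchants"]:
        return False
    # 3. Verify: record evidence of the approved action.
    AUDIT_LOG.append({"agent": agent_id, "action": action, "ts": time.time()})
    return True

def sign(agent_id, action):
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()

ok_action = {"merchant": "acme", "amount": 120.0}
assert execute("agent-7", ok_action, sign("agent-7", ok_action))  # within policy

big_action = {"merchant": "acme", "amount": 9000.0}
assert not execute("agent-7", big_action, sign("agent-7", big_action))  # over limit
```

Every action is authenticated, every scope is authorized, and every outcome leaves evidence, which is exactly the checklist the three pillars describe.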

Real-world examples today

Real deployments are still early, but they show what’s possible when trust is built in.

ChatGPT Instant Checkout marks one of the first large-scale examples of agentic commerce in production. Powered by the open-source Agentic Commerce Protocol, co-developed with Stripe, it enables users in the U.S. to buy directly from Etsy sellers in chat, with Shopify merchants like Glossier, SKIMS, and Vuori coming next. Each purchase is authenticated, authorized, and verified through secure payment tokens and explicit user confirmation—demonstrating how agentic AI can act safely within clear trust boundaries.
Konvo AI automates ~65% of customer queries for European retailers and converts ~8% of those into purchases, using agents that can both interact with customers and resolve logistics issues.
Visa Intelligent Commerce for Agents is building APIs that let AI agents make purchases using tokenized credentials and strong authentication — showing how payment-grade security can extend to autonomous actions.
Amazon Bedrock AgentCore Identity provides identity, access control, and credential vaulting for AI agents, giving enterprises the tools to authenticate and authorize agent actions at scale.
Agent Commerce Kit (ACK-ID) demonstrates how one agent can verify the identity and ownership of another before sensitive interactions, laying the groundwork for peer-to-peer trust in agentic commerce.

These aren’t fully autonomous across all commerce workflows, but they demonstrate that agentic AI can deliver value when authentication, authorization, and verification are in place.

What good looks like in practice

Buyers ask for a checklist. I prefer evaluation cues you can test in a live environment:

Accuracy and drift. Does the system maintain performance as the catalog, promotions, and fraud patterns shift?
Latency and UX. Do the controls keep decisions fast enough for checkout and service flows?
Integration reality. Can this plug into your identity, payments, and risk stack without six months of glue code?
Explainability. When an agent takes an action, can a product manager and a compliance lead both understand why?
Recourse. If something goes wrong, what can you unwind, how quickly can you roll it back, and what evidence exists to explain the decision to auditors, customers, or regulators?

The strongest teams will treat agent actions like high-risk API calls. Every action is authenticated, every scope is authorized, and every outcome is verified. The tooling makes that visible.

Why this matters right now

It is tempting to wait. The reality is that agentic workflows are already creeping into back-office operations, customer onboarding, support, and payments. Early movers who get trust right will bank the upside: lower manual effort, faster cycle time, and a margin story that survives scrutiny.

The inverse is also true. Ship without safeguards, and you’ll spend the next quarter explaining rollback plans and chargeback spikes. Customers won’t give you the benefit of the doubt. Neither will your CFO.

A buyer’s short list

If you are mapping pilots for Q4 and Q1 2026, here’s a simple way to keep the process grounded:

define the jobs to be done
write the rules first
simulate and stage
measure what matters
keep humans in the loop
regulatory readiness: confirm vendors can meet requirements for explainability, audit logs, and human oversight under privacy rules

The road ahead

Agentic commerce is not a future bet. It is a present decision about trust. The winners will separate signal from noise, invest in authentication, authorization, and verification, and scale only when those pillars are real.

At Liminal, we track the vendors and patterns shaping this shift. If you want a deeper dive into how teams are solving these challenges today, we’re bringing together nine providers for a live look at the authentication, authorization, and verification layers behind agentic AI. No pitches, just real solutions built to scale safely.

📅 Join us at Liminal Demo Day: Agentic AI in E-Commerce on October 22 at 9:30 AM ET.

My take: The winners won’t be the first to launch AI agents. They’ll be the first to prove their agents can be trusted at scale.

The post Building Trust in Agentic Commerce appeared first on Liminal.co.


FastID

The CDN Showdown: Fastly Outpaces Akamai in Real-World Performance

As user expectations rise and milliseconds define outcomes, choosing a modern, high-speed CDN is no longer optional but a strategic imperative. Independent Google data shows Fastly consistently outperforms Akamai in real-world web performance.

In AI We Trust? Increasing AI Adoption in AppSec Despite Limited Oversight

AI adoption in AppSec is soaring, yet oversight lags. Explore the paradox of trust vs. risk, false positives, and the future of AI in application security.

Tuesday, 07. October 2025

Anonym

6 Ways Insurers Can Differentiate Identity Theft Insurance  


Identity theft is one of the fastest-growing financial crimes worldwide, and consumers are more aware of the risks than ever before. But in an increasingly competitive market, offering “basic” identity theft insurance is no longer enough. To stand out, insurers need to think beyond the minimum by focusing on product innovation, customer experience, and trust. 

Below, we explore six powerful ways insurers can differentiate their identity theft insurance offerings.  

1. Innovate with product features & coverage  

Most identity theft insurance policies cover financial losses and restoration costs, but few go beyond reactive measures to prevent identity theft from occurring. To gain a competitive edge, insurers can expand coverage to offer proactive identity protection solutions, such as:  

Alternative phone numbers and emails to keep customer communications private and reduce phishing risks.
A password manager to help policyholders secure accounts and prevent credential-based account takeovers.
VPN for private browsing to protect sensitive activity on public Wi-Fi and stop data interception.
Virtual cards that protect payment details and shield credit card numbers from fraudsters.
Real-time breach alerts so customers can take immediate action when their data is compromised.
Personal data removal tools to wipe sensitive information from people-search sites and reduce exposure.
A privacy-first browser with ad and tracker blocking to prevent data harvesting and malicious tracking.

By proactively covering these risks and offering early detection, insurers not only reduce claims costs but also create meaningful value for customers. 

2. Provide strong restoration & case management 

Customers are often overwhelmed and unsure what to do next when their identity is stolen. Insurers can become their most trusted ally by offering: 

A dedicated case manager who works with them from incident to resolution.
A restoration kit with step-by-step instructions, pre-filled forms, and key contacts.
24/7 access to a helpline for guidance and reassurance.

A study from the University of Edinburgh shows that case management can reduce the cost burden of an incident by up to 90%. It also boosts customer satisfaction and loyalty, which is a critical differentiator in a market where switching providers is easy. 

3. Build proactive prevention & education programs  

Most consumers only think about identity protection after an incident occurs. Insurers can flip this dynamic by helping customers stay ahead of threats. 

Ideas include:  

Regular scam alerts and phishing education campaigns.
Tools for identity monitoring, breach notifications, and credit report access.
Dashboards that visualize a customer’s digital exposure, allowing them to see their risk level.
Ongoing educational content such as webinars, how-to guides, and FAQs.

Short, targeted online fraud education lowers the risk of falling for scams by roughly 42–44% immediately after training. This finding is based on a study that used a 3-minute video or short text intervention with 2,000 U.S. adults. 

4. Offer flexible pricing & bundling options

Flexibility is key to reaching a broader customer base. Instead of a one-size-fits-all product, insurers can:  

Offer tiered plans (basic, mid, premium) with incremental features.
Bundle identity theft insurance with homeowners or renters policies.
Provide family plans that protect multiple household members.

This strategy serves both budget-conscious and premium segments. 

5. Double down on customer experience 

Trust is one of the most important factors consumers consider when buying identity theft insurance. Insurers can build confidence by:   

Using clear, jargon-free language in policy documents.
Responding quickly and resolving cases smoothly.
Displaying trust signals, such as third-party audits, security certifications, and privacy commitments.
Publishing reviews, testimonials, and case studies that show real results.

A better experience leads to higher Net Promoter Scores (NPS), lower churn rates, and a long-term competitive advantage.   

6. Leverage partnerships

Working with technology partners can enhance insurers’ offerings without straining internal resources. Here are some examples of what partners can do:   

Custom-branded dashboards and mobile apps that seamlessly integrate into your existing customer experience, keeping your brand front and center.
Privacy status at a glance, indicating to customers whether their information has been found in data breaches.
Management of alternative phone numbers and emails, allowing customers to create, update, or retire these directly in the portal.

By offering these features through a white-labeled experience, insurers provide customers with daily, visible value while partners like Anonyome Labs handle the privacy technology behind the scenes. 

Outside of white-label opportunities, strategic partnerships and endorsements also strengthen offerings. Collaborations with credit bureaus, cybersecurity firms, and privacy organizations expand capabilities and build credibility. 

Powering the next generation of identity theft insurance  

The future of identity theft insurance is proactive, not reactive. Insurers who move beyond basic reimbursement to offer daily-use privacy and security tools will lead the industry in trust, engagement, and profitability. Anonyome Labs makes this shift seamless with a fully white-labeled Digital Identity Protection suite that includes alternative phone numbers and emails, password managers, VPNs, virtual cards, breach alerts, and tools for removing personal data. 

By offering these proactive protections, you provide customers with peace of mind, prevent costly fraud incidents before they occur, and unlock new revenue opportunities through subscription-based services. 

By partnering with Anonyome Labs, you can transform identity theft insurance into a daily value driver, positioning your company as a market leader in proactive protection. 

Learn more by getting a demo of our Digital Identity Protection suite today! 

The post 6 Ways Insurers Can Differentiate Identity Theft Insurance   appeared first on Anonyome Labs.


Spruce Systems

Foundations of Decentralized Identity

This article is the first installment of our series: The Future of Digital Identity in America.
What is Decentralized Identity?

Most of us never think about identity online. We type in a username, reuse a password, or click “Log in with Google” without a second thought. Identity, in the digital world, has been designed for convenience. But behind that convenience lies a hidden cost: surveillance, lock-in, and a system where we don’t really own the data that defines us.

Digital identity today is built for convenience, not for people.

Decentralized identity is a way of proving who you are without relying on a single company or government database to hold all the power. Instead of logging in with Google or handing over a photocopy of your driver’s license, you receive digital verifiable credentials, digital versions of IDs, diplomas, or licenses, directly from trusted issuers like DMVs, universities, or employers. You store these credentials securely in your own digital wallet and decide when, where, and how to share them. Each credential is cryptographically signed, so a verifier can instantly confirm its authenticity without needing to contact the issuer. The result is an identity model that’s portable, privacy-preserving, and designed to give control back to the individual rather than intermediaries.

Decentralized identity means you own and control your credentials, like IDs or diplomas, stored in your wallet, not in someone else’s database.
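The issue-hold-verify flow described above can be sketched minimally. This is a toy under stated assumptions: real deployments use an asymmetric signature (e.g. Ed25519), so the verifier holds only the issuer's public key; HMAC stands in here to keep the sketch dependency-free, and `ISSUER_KEY`, `issue_credential`, and `verify_credential` are hypothetical names.

```python
import hmac
import hashlib
import json

# Simplified stand-in for a real signing scheme; ISSUER_KEY is hypothetical.
ISSUER_KEY = b"dmv-issuing-key"

def issue_credential(claims: dict) -> dict:
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}  # the holder stores this in a wallet

def verify_credential(cred: dict) -> bool:
    # The verifier checks the signature locally, without contacting the issuer.
    body = json.dumps(cred["claims"], sort_keys=True)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected)

cred = issue_credential({"type": "driver_license", "holder": "Alice", "state": "CA"})
assert verify_credential(cred)          # authentic credential verifies
cred["claims"]["state"] = "NY"          # any tampering with a claim...
assert not verify_credential(cred)      # ...breaks the signature
```

The key property is the last step: verification needs no round trip to the issuer, which is what makes the credential portable.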

In this series, we’ll explore why decentralized identity matters, how policymakers are responding, and the technology making it possible. But before diving into policy debates or technical standards, it’s worth starting with the foundations: why identity matters at all, and what it means to build a freer digital world around it.

From Borrowed Logins to Borrowed Autonomy

The internet we know today was built on borrowed identity. Early online gaming systems issued usernames, turning every move into a logged action inside a closed sandbox. Social media platforms went further, normalizing surveillance as the price of connection and building entire economies on behavioral data. Even in industries like healthcare or financial services, “identity” was usually just whatever proprietary account a platform would let you open, and then hold hostage.

Each step offered convenience, but at the cost of autonomy. Accounts could be suspended. Data could be resold. Trust was intermediated by companies whose incentives rarely aligned with their users. The result was an internet where identity was an asset to be monetized, not a right to be owned.

On today’s internet, identity is something you rent, not something you own.

Decentralized identity represents a chance to reverse that arc. Instead of treating identity as something you rent, it becomes something you carry. Instead of asking permission from platforms, platforms must ask permission from you.

Why Identity Is a Pillar of Free Societies

This isn’t just a technical argument - it’s a philosophical and economic one. Identity is at the center of how societies function.

Economists have long warned of the dangers of concentrated power. Adam Smith argued that monopolies distort markets. Milton Friedman cautioned against regulatory capture. Friedrich Hayek showed that dispersed knowledge, not central planning, leads to better decisions. Ronald Coase explained how lowering transaction costs opens new forms of cooperation.

Philosophers, too, placed identity at the heart of freedom. John Locke’s principle of self-ownership and John Stuart Mill’s defense of liberty both emphasize that individuals must control what they disclose, limited only by the harm it might cause others.

Decentralized identity operationalizes these ideas for the digital era. By distributing trust, it reduces dependency on monopolistic platforms. By lowering the cost of verification, it unlocks new forms of commerce. By centering autonomy, it ensures liberty is preserved even as interactions move online.

The Costs of Getting It Wrong

American consumers and institutions are losing more money than ever to fraud and cybercrime. In 2024 alone, the FBI’s Internet Crime Complaint Center (IC3) reported that scammers stole a record $16.6 billion, a stark 33% increase from the previous year. Meanwhile, the FTC reports that consumers lost over $12.5 billion to fraud in 2024, a 25% rise compared to 2023.

On the organizational side, data breach costs are soaring. IBM’s 2025 Cost of a Data Breach Report shows that the average cost of a breach in the U.S. has reached a record $10.22 million, driven by higher remediation expenses, regulatory penalties, and deepening complexity of attacks.

Identity theft has become one of the fastest-growing crimes worldwide. Fake accounts drain social programs. Fraudulent applications weigh down financial institutions. Businesses lose customers, governments lose trust, and people lose confidence that digital systems are designed with their interests in mind.

The Role of AI: Threat and Catalyst

As artificial intelligence advances, it’s giving fraudsters tools that make identity scams faster, more automated, and more believable. According to a Federal Reserve–affiliated analysis, synthetic identity fraud, where criminals stitch together real and fake information to fabricate identities, reached a staggering $35 billion in losses in 2023. These figures highlight the increasing risk posed by deepfakes and AI-generated personas in undermining financial systems and consumer trust.

And at the frontline of consumer protection, the Financial Crimes Enforcement Network (FinCEN) has warned that criminals are increasingly using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks, evade fraud detection systems, and exploit financial institutions at scale.

AI doesn’t just make fraud easier—it makes strong identity more urgent.

As a result, AI looms over every digital identity conversation. On one side, it makes fraud easier: synthetic faces, forged documents, and bots capable of impersonating humans at scale. On the other, it makes strong identity more urgent and more possible.

Digital Credentials: The Building Blocks of Trust

That’s why the solution isn’t more passwords, scans, or one-off fixes - it’s a new foundation built on verifiable digital credentials. These are cryptographically signed attestations of fact - your age, your license status, your professional certification - that can be presented and verified digitally.

Unlike static PDFs or scans, digital credentials are tamper-proof. They can’t be forged or altered without detection. They’re also user-controlled: you decide when, where, and how to share them. They also support selective disclosure: you can prove you’re over 21 without sharing your exact birthdate, or prove your address is in a certain state without exposing the full line of your home address.

Verifiable digital credentials are tamper-proof, portable, and under the user’s control—an identity model built for trust.
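Selective disclosure of this kind is often built from salted claim hashes, the idea behind SD-JWT-style disclosures. A minimal sketch under stated assumptions: `commit_claims`, `present`, and `check` are hypothetical names, and the issuer's signature over the digest set is omitted for brevity.

```python
import hashlib
import secrets

def commit_claims(claims: dict):
    """Issuer side: commit to each claim with a salted hash. The signed
    credential carries only the digests; the holder keeps (salt, value)."""
    disclosures, digests = {}, {}
    for name, value in claims.items():
        salt = secrets.token_hex(8)
        disclosures[name] = (salt, value)
        digests[name] = hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()
    return digests, disclosures

def present(disclosures: dict, reveal: list) -> dict:
    # Holder side: disclose only the chosen claims.
    return {name: disclosures[name] for name in reveal}

def check(digests: dict, presented: dict) -> bool:
    # Verifier side: recompute each revealed digest against the commitments.
    return all(
        hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest() == digests[name]
        for name, (salt, value) in presented.items()
    )

digests, disclosures = commit_claims(
    {"over_21": True, "birthdate": "1990-04-02", "state": "CA"}
)
presented = present(disclosures, ["over_21"])  # prove age, withhold birthdate
assert check(digests, presented)
assert "birthdate" not in presented
```

The salts prevent a verifier from brute-forcing undisclosed claims from their digests, which is what makes withholding the birthdate meaningful.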

Decentralized identity acts like an “immune system” for AI. By binding credentials to real people and organizations, it distinguishes between synthetic actors and verified entities. It also makes possible a future where AI agents can act on your behalf - booking travel, filling out forms, negotiating contracts - while remaining revocable and accountable to you.

Built on open standards, digital credentials are globally interoperable. Whether issued by a state DMV, a university, or an employer, they can be combined in a wallet and presented across contexts. For the first time, people can carry their identity across borders and sectors without relying on a single gatekeeper.

From Pilots to Infrastructure

Decentralized identity isn’t just theory - it’s already being deployed.

In California, the DMV Wallet has issued more than two million mobile driver’s licenses in under 18 months, alongside blockchain-backed vehicle titles for over 30 million cars. Utah has created a statewide framework for verifiable credentials, with privacy-first principles written directly into law. SB 260 prohibits forced phone handovers, bans tracking and profiling, and mandates that physical IDs remain an option. At the federal level, the U.S. Department of Homeland Security is piloting verifiable digital credentials for immigration, while NIST’s NCCoE has convened banks, state agencies, and technology providers, including SpruceID, to define standards. Over 250 TSA checkpoints already accept mobile IDs from seventeen states, and adoption is expected to double by 2026.

These examples show that decentralized identity is moving from pilot projects to infrastructure, just as HTTPS went from niche to invisible plumbing for the web.

Why It Matters Now

We are at a crossroads. On one side, centralized systems continue to create single points of failure - massive databases waiting to be breached, platforms incentivized to surveil, and users with no say in the process. On the other, decentralized identity offers resilience, interoperability, and empowerment.

For governments, it reduces fraud and strengthens democratic resilience. For businesses, it lowers compliance costs and builds trust. For individuals, it restores autonomy and privacy.

This isn’t just a new login model. It’s the foundation for digital trust in the 21st century - the bedrock upon which free societies and vibrant economies can thrive.

This article is part of SpruceID’s series on the future of digital identity in America.

Subscribe to be notified when we publish the next installment.



Ockam

The Content Creation System That Multiplies Your Output by 7x

How to Use Human + AI to Do the Work of 7 People Continue reading on Medium »



LISNR

The New Transit Security Mandate

How Hardware-Agnostic Authentication Solves Fraud and Revenue Leakage

The public transit sector is undergoing a significant digital transformation, consolidating operations under the vision of Mobility-as-a-Service (MaaS). This shift promises passenger convenience through integrated mobile ticketing and Account-Based Ticketing (ABT) systems, but it simultaneously introduces a critical vulnerability: the rising threat of mobile fraud and revenue leakage.

For transit operators, the stakes are substantial. Revenue losses from fare evasion and ticket forgery, ranging from simple misuse of paper tickets to sophisticated man-in-the-middle attacks, can significantly impact the sustainability of MaaS and the ability to reinvest in services.

Traditional authentication methods are proving insufficient for the complexity of modern, multimodal transit:

NFC: Requires significant, capital-intensive infrastructure replacement, which creates a high barrier to entry and slows deployment.
QR Codes: Prone to fraud, easily duplicated, and high-friction, slowing down passenger throughput at peak hours.
BLE: Relies on robust cellular connectivity, which is often unavailable in critical transit environments, such as underground tunnels or moving vehicles.

The strategic imperative for any transit authority or MaaS provider is to adopt a hardware-agnostic, software-defined proximity verification solution that is secure, fast, and works reliably regardless of network availability.

The Strategic Imperative: Securing the Transaction at the Point of Presence

The sophistication of mobile fraud is escalating, posing a threat to the integrity of digital payment systems. Fraudsters exploit vulnerabilities, such as deferred payment authorization, to use compromised credentials repeatedly.

The solution requires a layer of security that instantly validates both the physical proximity and digital identity of the passenger. LISNR, as a worldwide leader in proximity verification, delivers this capability by transforming everyday audio components into secure transactional endpoints.

Technical Solution: Proximity Authentication with Radius® and ToneLock

LISNR’s technology provides a secure, reliable, and cost-effective foundation for next-generation transit ticketing and ticket validation. This is achieved through the Radius® SDK, which facilitates the ultrasonic data-over-sound communication and the proprietary ToneLock security protocol.

Proximity Validation with Radius

The Radius SDK is integrated directly into the transit agency’s mobile application and installed as a lightweight software component onto existing transit hardware equipped with a speaker or microphone (e.g., fare gates, information screens, on-bus systems).

Offline Capability: The MaaS application uses ultrasonic audio with user ticket data embedded within for fast data exchange. Crucially, the tone generation and verification process can occur entirely offline, ensuring that ticketing and payment validation remain functional and sub-second fast, even in areas with zero network coverage.
Hardware Agnostic Deployment: Since Radius only requires a standard speaker and microphone, it eliminates the high cost and complexity of deploying proprietary NFC hardware, allowing for rapid and scalable deployment across an entire fleet or network.

Security for Fraud Prevention

To combat the growing threat of mobile fraud, LISNR enables ecosystem leaders to deploy multiple advanced measures directly into the ultrasonic transaction:

ToneLock Security: Every Radius transaction can be protected by ToneLock, a proprietary tone security protocol. Only the intended receiver, with the correct, pre-shared key, can demodulate and authenticate the tone.
AES256 Encryption: LISNR also offers the ability for developers to add the security protocol trusted by governments worldwide, AES256 Encryption, to all emitted tones. By folding this feature into mobility ecosystems, transit providers can ensure a secure and scalable solution for their ticketing infrastructure.
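As a rough illustration of offline, pre-shared-key validation (not LISNR's actual ToneLock or Radius protocol, which are proprietary): `GATE_KEY`, `encode_ticket`, `validate_ticket`, and the frame layout are hypothetical, and the ultrasonic modulation step is omitted.

```python
import hmac
import hashlib
import json
import time

# Hypothetical pre-shared key, provisioned to fare gates ahead of time.
GATE_KEY = b"pre-shared-gate-key"

def encode_ticket(ticket: dict) -> bytes:
    """Phone side: payload plus MAC, ready to be modulated into a tone."""
    body = json.dumps(ticket, sort_keys=True).encode()
    mac = hmac.new(GATE_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + mac

def validate_ticket(frame: bytes, now: float) -> bool:
    """Gate side: verify MAC and freshness entirely offline."""
    body, _, mac = frame.rpartition(b".")
    expected = hmac.new(GATE_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        return False                        # forged or altered frame
    ticket = json.loads(body)
    return now - ticket["issued_at"] < 90   # reject stale or replayed frames

ticket = {"rider": "r-123", "fare": "zone-1", "issued_at": time.time()}
frame = encode_ticket(ticket)
assert validate_ticket(frame, time.time())              # fresh, untampered
assert not validate_ticket(frame + b"x", time.time())   # tampering detected
```

Nothing in the validation path touches the network, which is the property that keeps fare gates working in tunnels and on moving vehicles.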

 

The Top Business Values of Ultrasonic Proximity in Transit

For forward-thinking transit agencies and MaaS providers, adopting LISNR’s technology offers tangible operational and financial advantages:

Reduced Capital and Operational Expenditure
Business Value: Eliminates the need for expensive, proprietary NFC reader hardware replacement and maintenance.
Impact on ROI: Lowered infrastructure cost and faster time-to-market for new ticketing solutions.

Enhanced Security and Revenue Protection
Business Value: ToneLock and Encryption provide an advanced and off-network security layer for ticket and payment authentication.
Impact on ROI: Significant reduction in fare evasion, fraud, and revenue leakage, directly increasing financial stability.

Superior Passenger Throughput and Experience
Business Value: Sub-second authentication regardless of connectivity or weather conditions.
Impact on ROI: Increased rider throughput and satisfaction, encouraging greater adoption of digital ticketing and MaaS.

Future-Proof and Scalable Platform
Business Value: Provides a flexible, software-defined foundation that easily integrates with new Account-Based Ticketing (ABT) and payment models.
Impact on ROI: Ensures longevity of infrastructure and adaptability to future urban mobility standards.

By integrating the Radius SDK into their existing platform, transit operators secure their revenue, eliminate infrastructure debt, and deliver the seamless, high-security experience modern passengers demand. 

Are you interested in how Radius can provide an additional revenue stream while onboard (i.e., proximity marketing)? Are you using a loyalty system to capture and reward your most loyal riders? Want to learn more about how Radius works in your ecosystem? Fill out the contact form below to get in contact with an ultrasonic expert.

The post The New Transit Security Mandate appeared first on LISNR.


Thales Group

Thales Alenia Space inaugurates state-of-the-art Space Smart Factory

07 Oct 2025

One of Europe’s most intelligent, digital and reconfigurable manufacturing facilities, located in Rome, Italy

• This achievement was made possible by significant funding from the Italian Space Agency through PNRR (Italy’s recovery and resilience plan) funds, as well as substantial investments from Thales and Leonardo.

• Paradigm shift in the construction of space systems thanks to highly modular and configurable cleanrooms.

• Increased production capacity for satellites across various classes and applications, including large constellations.

• Intensive use of digital technologies, and Industry 4.0, including robotics, offering digital continuity between systems, from engineering activities up to production.

• First satellites to be tested and integrated in the new-generation cleanrooms: the second-generation Galileo constellation satellites, new Copernicus missions, including ROSE-L and CIMR, as well as the Sicral 3 satellite.

• Joint Lab is the facility’s strategic hub: an innovative collaborative space bringing together SMEs in the supply chain and fostering dialogue with universities and research centers.

 

Rome, October 7, 2025 – Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%), has today inaugurated its Space Smart Factory in Rome with a ceremony attended by Italian President Sergio Mattarella. The factory — one of Europe’s largest intelligent, digital, reconfigurable manufacturing facilities — is located at the Tecnopolo Tiburtino high-tech innovation hub in Rome.

President of the Italian Republic, Sergio Mattarella, and Minister for Enterprises and Made in Italy, Adolfo Urso,  were welcomed by Ambassador Stefano Pontecorvo, Chairman of Leonardo; Roberto Cingolani, CEO and General Manager of Leonardo, and Teodoro Valente, President of the Italian Space Agency (ASI). The delegation also included Philippe Keryer, SEVP Strategy, Research and Technology for Thales, Massimo Claudio Comparini, Managing Director of Leonardo’s Space Division and Chairman of the Thales Alenia Space Supervisory Board, Hervé Derrey, President and CEO of Thales Alenia Space, and Giampiero Di Paolo, Deputy CEO of Thales Alenia Space and CEO of Thales Alenia Space Italia.

The Space Smart Factory is the concrete result of an investment of over €100 million, partly financed through PNRR funds managed by the Italian Space Agency and by substantial investments from Thales and Leonardo.

The new production hub, scheduled to begin operations by year’s end with work on the Sicral 3 satellite for the Italian Defense Ministry, is based at Rome’s Tecnopolo Tiburtino — a center of technological excellence bringing together 150 companies, mostly SMEs, closely integrated with the city and its industrial landscape.

© Thales Alenia Space/ M.Iacobucci

From left to right: Hervé Derrey, President and CEO of Thales Alenia Space, Ambassador Stefano Pontecorvo, Chairman of Leonardo, Sergio Mattarella, President of the Italian Republic, Adolfo Urso, Minister for Enterprises and Made in Italy, Roberto Cingolani, CEO and General Manager of Leonardo, Giampiero Di Paolo, Deputy CEO of Thales Alenia Space and CEO of Thales Alenia Space Italia, Teodoro Valente, President of the Italian Space Agency (ASI), Massimo Claudio Comparini, Managing Director of Leonardo’s Space Division and Chairman of the Thales Alenia Space Supervisory Board, and Philippe Keryer, SEVP Strategy, Research and Technology for Thales.

 

“Today, Italy soars even higher. With the inauguration of this new Space Smart Factory, we are taking another strategic step to strengthen the national space supply chain and consolidate Italy’s leadership by enhancing our capacity to design and integrate next-generation satellites,” said Adolfo Urso, Minister for Enterprises and Made in Italy. “This project also stands as a concrete example of effective collaboration between the public and private sectors and of the virtuous use of PNRR funds. Italy knows how to invest with strategic vision in key sectors, generating growth and qualified employment. We are at the forefront of strengthening our technological sovereignty and projecting our industrial system into the future.”

“The inauguration of this state-of-the-art facility crowns years of intense efforts by the Italian Space Agency and completes the network of facilities operating throughout the country for the assembly, integration and testing of satellites,” said Teodoro Valente, President of the Italian Space Agency. “The Space Factory program also represents a virtuous example of public-private collaboration for the benefit of the entire national ecosystem, having effectively used the resources of the PNRR to permanently endow the country with a strategic asset. Thanks to the functionality and production capacity of this plant, Italy stands as a reference point for the realization of large satellite infrastructures in the field of Earth Observation, Telecommunications and Navigation.”

“I’m especially proud to inaugurate this new state-of-the-art facility, designed to rank among the world’s most advanced for space system production,” said Hervé Derrey, President and CEO of Thales Alenia Space. “Leveraging the latest technologies, the Space Smart Factory will enhance Thales Alenia Space’s production capacity and its global competitiveness as a leading player in Europe’s space industry. In that sense, our company will even more support European and national sovereign programs as well as the continent’s major space ambitions, including in large constellations.”

“The new space factory, an investment that looks to the future and is the result of the vision of the Italian Space Agency, institutions and the company, is a benchmark for production paradigms of the European space industry,” declared Massimo Claudio Comparini, Managing Director of Leonardo’s Space Division and Chairman of the Thales Alenia Space Supervisory Board. “It is a smart factory that can be reconfigured to produce all types of satellites and constellations using the principle of serialization of activities. The site is capable of producing over 100 satellites a year in the class up to 300 kilograms in an environment integrated with the most advanced digital, robotic and interconnection technologies with the ecosystem of suppliers and partners, a fundamental asset for the growth of the space economy. This is a further stimulus for the growth of space activities in Italy and Europe.”

“Today, with deep pride and in the presence of Italy’s highest institutional authority, we inaugurated our Space Smart Factory — a modern and fully digital facility and a true technological jewel,” said Giampiero Di Paolo, Deputy CEO and CEO of Thales Alenia Space Italia. “At our Satellite Integration Center in Rome — operating at full capacity — our teams have been building some of the world’s most prestigious Earth observation, telecommunications and navigation satellites, establishing the facility as a global benchmark in satellite manufacturing. Building on this legacy, the new Space Smart Factory will serve as an additional production hub able to meet the growing demand for future constellations, while reducing time-to-market and marking a real paradigm shift in space asset manufacturing. This new infrastructure will also be open to the entire supply chain, including small and medium-sized enterprises, which will be able to access it as a service — a winning formula that will strengthen our country’s role in the space economy.”

 

About the Space Smart Factory

The Space Smart Factory will employ flexible automation and digital systems to deliver high production capacity for next-generation space systems, with a strong focus on micro and small satellites, future constellations and Thales Alenia Space’s full portfolio of modular platforms for commercial and institutional programs. It will also support the rapid refurbishment of innovative, modular, high-performance platforms for future constellations, including the European Space Agency’s ERS constellation, the Italian Space Agency’s telecommunications constellation and Leonardo’s constellation for new Earth observation services.

The Factory will use advanced digital and robotic/cobotic technologies to build satellites across multiple classes and applications. Designed to optimize capacity and reduce costs, it can manufacture more than 100 satellites per year — around two per week — with the capacity to further scale production in line with market demand. Furthermore, being part of Italy’s network of interconnected space factories, it will amplify synergies and capacities. Through its open approach to the entire supply chain and close work with academia, it will drive the development of new products and professional skills.

With modular cleanrooms and advanced digital technologies, the Space Smart Factory can be reconfigured to meet production needs, supporting the integration and testing of a wide range of satellites — from Earth observation, navigation and space telecommunications to automated and reusable vehicles and in-orbit servicing demonstrators. As a true digital hub, the center will apply cutting-edge tools and methods across every stage of satellite design, assembly, integration and testing. These include numerical modeling and digital twin technologies, virtual and augmented reality, integrated simulators connected to the supply chain and advanced automation solutions such as robots and cobots. Another advantage of the facility lies in its ability to address large constellations of up to several hundred satellites.

All assembly and integration areas are now complete. This new facility will boost the Rome site’s production capacity, with plans to recruit additional highly qualified employees. Once fully operational, the Space Smart Factory will begin testing and integrating its first satellites in the new-generation cleanrooms: the Sicral 3 defense satellite, second-generation Galileo constellation satellites and new Copernicus program satellites, including ROSE-L and CIMR.

A strategic cornerstone of the facility is the Space Joint Lab — an innovative, fully flexible collaborative space strongly backed by ASI through PNRR funds. It is designed to train new professionals in space disciplines and foster the development of innovative ideas and products in partnership with SMEs, startups, suppliers, industry partners and research centers.

This new entity also brings together top expertise in aerospace and industrial disciplines from academic institutions such as Politecnico di Milano and the University of Rome “La Sapienza,” along with the global know-how of Accenture, a leader in digital and process innovation for the aerospace sector.

The entire project is guided by sustainable architecture principles, with a strong focus on energy efficiency and extensive use of renewable energy enabled by digital technologies. The building is LEED certified and equipped with rainwater recovery systems and solar panels supplying around 10% of its energy needs. It also has an installed power capacity of 4.5 MW, supported by a redundant system to guarantee 24/7 operational continuity.

The facility was designed by eos s.r.l., which also supervised the project, and built by CBRE | Hitrac, a global leader in critical infrastructure technologies and lifecycle services for advanced technology systems. Leonardo Global Solutions oversaw the entire real estate operation — from land acquisition and procurement management to the launch of construction.

© Thales Alenia Space

 

Notes

Eos, headquartered in Milan and Rome, is an integrated engineering services company built on teamwork. Its matrix organization supports a multidisciplinary approach, combining expertise in architecture, structural engineering, safety and civil plant systems. www.eosweb.it

CBRE | Hitrac is a global leader in technologies for critical infrastructure and services covering the entire lifecycle of advanced technological systems. www.hitrac-engineering.com

Leonardo Global Solutions (LGS), a service provider for Leonardo, operates with the primary objective of creating value for the entire Leonardo Group. It supports business activities in Italy and abroad with economic efficiency and process standardization, aiming at technological innovation and promoting the wellbeing of people, aligned with common sustainability goals. https://leonardoglobalsolutions.com/it/home

 

About Thales Alenia Space

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental monitoring, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the Space Alliance, which offers a complete range of solutions including services. Thales Alenia Space posted consolidated revenues of €2.23 billion in 2024 and has more than 8,100 employees in 7 countries with 14 sites in Europe. www.thalesaleniaspace.com


Spherical Cow Consulting

The End of the Global Internet

Many people reading this post grew up believing in, and expecting, a single, borderless Internet: a vast network of networks that let us talk, share, and build without arbitrary walls. I like that model, probably because I am a globalist, but I don't think that's where the world is heading. The post The End of the Global Internet appeared first on Spherical Cow Consulting.

“The Internet is too big to fail, but it may be becoming too big to hold together as one.”

Many of the people reading this post grew up believing in, and expecting, a single, borderless Internet: a vast network of networks that let us talk, share, learn, and build without arbitrary walls. I like that model, probably because I am a globalist, but I don’t think that’s where the world is heading. In recent years, laws, norms, infrastructure, and power have been pulling in different directions, driving us toward an increasingly fragmented Internet. This is a reality that is shaping how we connect, what tools we use, and who controls what.

In this post, I talk about what fragmentation is, how it is happening, why it matters, and what cracks in the system may also open up room for new kinds of opportunity. It’s a longer post than usual; there’s a lot to think about here.

A Digital Identity Digest: The End of the Global Internet (podcast episode, 16:34)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

What is “fragmentation”?

Fragmentation isn’t a single event with a single definition; it’s a multi-dimensional process. Research has identified at least three overlapping types:

- Technical fragmentation: differences in protocols, infrastructure, censorship, and filtering; sometimes entire national “gateways” or shutdowns.
- Regulatory / governmental fragmentation: national laws around data flows, privacy, platform regulation, online safety, and content moderation diverge sharply.
- Commercial fragmentation: companies facing divergent rules in different markets (privacy, liability, content) adapt differently; global products become “local versions.”

A primer from the United Nations Institute for Disarmament Research (UNIDIR) published in 2023 lays this out in detail. The authors of that paper argue that Internet fragmentation is increasingly something that influences cybersecurity, trade, national security, and civil liberties. Another study published not that long ago in SciencesPo suggests that fragmentation is shifting from inward-looking national control toward being used as a tool of power projection; i.e. countries not only fence their own access, but use fragmented rules or control of infrastructure to impose influence beyond their borders.

Evidence: How fragmentation is happening

Sounds all conspiracy theory, doesn’t it? Here are some concrete examples and trends.

Divergent regulatory frameworks

- The European Union, China, and the U.S. are increasingly adopting very different regulatory models for digital platforms, data privacy, and online content. The “prudent regulation” approach in the EU (which tends toward pre-emptive checks and heavy regulation) contrasts with the more laissez-faire (or “permissionless”) philosophy in parts of the U.S. and other jurisdictions. I really like how that’s covered in the Fondation Robert Schuman’s paper, “Digital legislation: convergence or divergence of models? A comparative look at the European Union, China and the United States.“
- Countries around the world have passed or are passing online safety laws, content moderation mandates, or rules that give governments broad powers over what gets seen, what stays hidden, and what content is restricted. Check out the paper published in Tech Policy Press, “Amid Flurry of Online Safety Laws, the Global Online Safety Regulators Network is Growing,” for a lot more on that topic.
- Regulatory divergence extends beyond content to infrastructure: for example, laws about mandatory data localization, national gateways, and network sovereignty. These increase the cost and complexity of cross-border services. Few organizations know more about that than the Internet Society, which has an explainer entirely dedicated to Internet fragmentation.

While this divergence creates friction for global platforms, it also produces positive spillovers. The ‘Brussels Effect’ has pushed companies to adopt GDPR-level privacy protections worldwide rather than maintain separate compliance regimes, raising the baseline of consumer trust in digital services. At the same time, the OECD’s latest Economic Outlook stresses that avoiding excessive fragmentation will require countries to cooperate in making trade policy more transparent and predictable, while also diversifying supply chains and aligning regulatory standards on key production inputs.

Taken together, these trends suggest that even in a fragmented environment, stronger rules in one region can ripple outward, whether by shaping global business practices or by encouraging cooperation to build resilience. Of course, this can work both positively and negatively, but let’s focus on the positive for the moment. “Model the change you want to see in the world” is a really good philosophy.

Technical / infrastructural separation

- National shutdowns or partial shutdowns are still used by governments during conflict, elections, or periods of dissent. The Internet Society’s explainer catalogues many examples, but even better is their Pulse table showing where there have been Internet shutdowns in various countries since 2018.
- Some countries are building or mandating their own national DNS, national gateways, or other chokepoints, whether to control content, enforce digital sovereignty, or “protect” their citizens. These create friction with global addressing, with trust, and with how routing and redundancy work. More information on that is, again, in the Internet Society fragmentation explainer.

That said, fragmentation at the infrastructure level can also accelerate experimentation with alternatives. In regions that experience shutdowns or censorship, communities have adopted mesh networks and peer-to-peer tools as resilient stopgaps. Research from the Internet Society’s Open Standards Everywhere project, no longer a standalone project but still offering interesting observations, shows that these architectures, once fringe, are being refined for broader deployment, pushing the Internet to become more fault-tolerant.

Commercial & trade-driven fragmentation

- Platforms serving global audiences must adapt to local laws (e.g., privacy laws, content moderation laws), so they build variants. The result is that features, policies, and even user experience diverge by country. I’m not even going to try to link to a single source for that; it’s kind of obvious.
- Restrictive trade policies (export controls, sanctions) affect what hardware and software can move across borders. Fragmentation in which devices and cloud services can be used often comes from supply-chain and trade policy rather than purely from regulation. The UNIDIR primer notes how fragmentation applied to cybersecurity or export controls ripples through global supply chains.

Yet duplication of supply chains can also help build redundancy. The CSIS reports on semiconductor supply chains notes (see this one as an example) that efforts to diversify chip fabrication beyond Taiwan and Korea, while expensive, reduce systemic risks. Similarly, McKinsey’s “Redefining Success: A New Playbook for African Fintech Leaders” highlights how African fintechs are thriving by tailoring products to fragmented regulatory and infrastructural environments, turning local constraints into opportunities for growth in areas like cross-border payments, SME lending, and embedded finance. There’s a lot to study there in terms of what opportunity might look like.

I’d also like to point to the opportunities described in the AMRO article “Stronger Together: The Rising Relevance of Regional Economic Cooperation” which describes how ASEAN+3 member states are using frameworks like the Regional Comprehensive Economic Partnership (RCEP), Economic Partnership Agreements, and institutions such as the Chiang Mai Initiative to deepen trade, investment, financial ties, and regulatory cooperation. These are not just formal treaties but mechanisms for cross-border resilience, helping supply chains, capital flows, and finance networks absorb external shocks. This blog post is already crazy long, so I won’t continue, but there is definitely more to explore with how to meet this type of fragmentation with a more positive mindset.

Why does it matter?

Why should we care that the Internet is fragmenting? If there are all sorts of opportunities, do we even have to worry at all? Well, yes. As much as I’m looking for the opportunities to balance the breakages, we still have to keep in mind a variety of consequences, some immediate, some longer-term.

Loss of universality & increased friction

The Internet’s power comes from reach and interoperability: you could send an email or view a website in Boston and someone in Nairobi could see it without special treatment. But as more rules, filters, and walls are inserted, that becomes harder. Services may be blocked, slowed, or restricted. Different regulatory compliance regimes will force more localization of infrastructure and data. Users may need to use different tools depending on where they are. Work that used to scale globally becomes more expensive.

However, constraints often fuel creativity. The World Bank has documented how Africa’s fintech ecosystem thrived under patchy infrastructure, leapfrogging legacy systems with mobile-first solutions. India’s Aadhaar program is another case where local requirements drove innovation that now informs digital identity debates globally. Fragmentation can, paradoxically, widen the palette of local solutions while reducing the palette of global solutions.

Security, surveillance, and trust challenges

Fragmentation creates new attack surfaces and risk vectors. For example:

- If traffic must go through national gateways, those are chokepoints for surveillance, censorship, or abuse.
- If companies cannot use global infrastructure (CDNs, DNS, encryption tools) freely, fragmentation may force weaker substitutes or non-uniform security practices.
- Divergent laws about encryption or liability may reduce trust in cross-border services or require large overheads.

The UNIDIR primer emphasizes these concerns.

Economic costs and innovation drag

Fragmentation means duplicate infrastructure: separate data centres, duplicated content moderation teams, local legal teams. That’s inefficient. Products and platforms may need multiple variants, reducing scale economies. Cross-border collaboration, which has been a source of innovation (in open source, research, and startups), becomes more legally, technically, and culturally constrained.

Unequal access and power imbalances

Countries or regions with weaker regulatory capacity, limited infrastructure, or less technical expertise may be less able to negotiate or enforce their interests. They could be “locked out” of parts of the Internet, or forced to use inferior services. Big tech companies based in powerful jurisdictions may be able to shape global norms (via export, legal reach, or market power) in ways that reflect their values, often without much input from places with less power. This may further amplify inequalities.

What counters or moderating factors exist?

Fragmentation is not unilateral nor total. There are forces, capacities, and policies that push in the opposite direction, or at least slow things down.

- Standardization bodies / global protocols. The Internet Engineering Task Force (IETF), the W3C, ICANN, etc., continue to undergird a lot of the technical plumbing (DNS, HTTP, TCP/IP, SSL/TLS, and so on). These are not trivial to replace, though it seems like some regional standards organizations are trying.
- Commercial incentives for compatibility. Many platforms serving global markets prefer to maintain a common codebase, or to comply with the most restrictive regulation so it applies everywhere (bringing us back to the Brussels Effect). If a regulation (e.g., a privacy law) in one place is strong, firms may just adopt it globally rather than maintain separate versions.
- User demand and expectation. Users expect services to “just work” across borders: social media, video conferencing, cloud tools. If fragmentation hurts usability, there is political and popular pushback.
- Cross-border political/institutional cooperation. Trade agreements, multi-stakeholder governance efforts, and international bodies sometimes negotiate common frameworks or minimum standards (e.g., data flow provisions, privacy protections, cybersecurity norms).

These moderating factors mean that fragmentation is not an all-or-nothing state; it will be uneven, partial, and contested.

What we (you, we, society) can do to navigate & shape the outcome

Fragmentation is already happening; how we respond matters. Here are some ways to think about shaping the future so that it is not simply divided, but more resilient and fair.

- Advocate for interoperable baselines. Even as parts diverge, there can be minimum standards (on encryption, addressing, data portability, etc.) that maintain some baseline interoperability. This ensures users don’t fall off the map just because their country has different laws.
- Design for variation. Product and service designers need to think early about how their tools will work under different regulatory, infrastructural, and socio-political regimes. That means thinking about offline/online tradeoffs, degraded connectivity, local content, privacy expectations, and more.
- Invest in local capability. Regions with weaker infrastructure, less regulatory capacity, or a smaller technical workforce should invest (or receive investment from partners) in building up their tech ecosystems, including data centers, networking, local content, and developer education. This mitigates the risk of being passive recipients rather than active shapers.
- Cross-bloc cooperation & treaties. Trade agreements or regional alliances for digital policies could harmonize rules where possible (e.g., privacy, data flows, cybersecurity), reduce compliance burden, and keep doors open across regions.
- New infrastructural experiments. Thinking creatively: mesh networks, decentralized Internet architecture, peer-to-peer content distribution, alternative routing, redundancy in undersea cables, etc. In the context of fragmentation, some of these may move from research curiosities to vital infrastructure.
- Policy awareness & public engagement. People often take the openness of the Internet for granted. Public debates and awareness of policy changes (online safety, surveillance, digital sovereignty) matter. A more informed citizenry can push for policies that preserve openness and resist overly restrictive fragmentation.
- Anchor in human rights and global goals. Fragmentation debates can’t just be about pipes and protocols. They must also reflect the fundamentals of an ethical society: protecting human rights, ensuring equitable access, and aligning with global commitments like the United Nations Sustainable Development Goals (SDGs) and the Global Digital Compact. These frameworks remind us that digital infrastructure isn’t an end in itself; it’s a means to advance dignity, inclusion, and sustainable development. Even as the Internet fragments, grounding decisions in these principles can help keep diverse systems interoperable not just technically, but socially.

Recalibration

The “global Internet” is fragmenting, if it ever really existed at all. That’s a statement I’m not comfortable with but which I’m also not going to approach as the ultimate technical tragedy. Fragmentation brings friction, risks, and challenges, sure. It threatens universality, raises security concerns, and could amplify inequalities. But it also forces us to imagine new architectures, new modes of cooperation, new ways to build more resilient and locally grounded technologies. It means innovation might look different: less about global scale, more about boundary-crossing craftsmanship, local resilience, hybrid systems.

In the end, fragmentation isn’t simply an ending. It may be a recalibration. The question is: will we let it just fragment into chaos, or guide it into a future where multiple, overlapping digital worlds still connect, where people everywhere are participants and not just objects of regulation?

Question for you, the reader: If the Internet becomes more of a patchwork than a tapestry, what kind of bridges do you think are essential? What minimum interoperability, trust, and rights should be preserved across borders?

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Hi everyone, and welcome back to the Digital Identity Digest. Today’s episode is called The End of the Global Internet.

This episode is longer than usual because there’s a lot to unpack. The global Internet, as we once imagined it, is changing rapidly. While it isn’t collapsing overnight, it is fragmenting. That fragmentation brings real risks — but also some surprising opportunities.

Throughout this month, I’ll be publishing slightly longer episodes, alongside detailed blog posts with links to research and source material. I encourage you to check those out as well.

What Fragmentation Really Means

[00:01:15] Many of us grew up hoping for a single, borderless Internet: a vast network of networks without arbitrary firewalls. I’ve always loved that model, perhaps because I’m a globalist at heart. But that’s not where we’re heading.

In recent years, laws, cultures, infrastructure, and politics have pulled the Internet in different directions. The result? An increasingly fragmented landscape.

Researchers describe three key dimensions of fragmentation:

- Technical fragmentation – national firewalls, alternative DNS systems, and content filtering that alter the “plumbing” of the Internet.
- Regulatory fragmentation – divergent laws on privacy, content, and data, such as the GDPR compared with lighter-touch U.S. approaches.
- Commercial fragmentation – companies restricting services by geography, whether for compliance, cost, or strategy.

Together, these layers create friction in what once felt like a seamless system.

Evidence of Fragmentation in Practice

[00:04:18] Let’s look at how fragmentation is showing up.

- Regulatory divergence – The EU, China, and the U.S. are moving in very different directions. The EU emphasizes heavy regulation and precaution. The U.S. takes a lighter (but shifting) approach. China uses regulation to centralize control. Interestingly, strict laws often set global baselines. The Brussels Effect demonstrates how GDPR effectively raised global privacy standards, since it’s easier for companies to comply everywhere.
- Technical fragmentation – Governments are experimenting with independent DNS systems, national gateways, and even Internet shutdowns during protests or elections. On the flip side, this has fueled mesh networks and decentralized DNS, once fringe ideas that now serve as resilience tools.
- Commercial fragmentation – Supply chains and trade policy drive uneven access to hardware and cloud services. For example: semiconductor fabs are being built outside Taiwan and Korea; new data centers are emerging in Africa and Latin America; African fintech thrives precisely because local firms adapt to fragmented conditions.

McKinsey projects African fintech revenues will grow nearly 10% per year through 2028, showing how local innovation can thrive in fragmented markets.

Why Fragmentation Matters

[00:06:45] Fragmentation has profound consequences.

- Universality weakens – The original power of the Internet was its global reach. Fragmentation erodes that universality.
- Security and trust challenges – Choke points and divergent encryption weaken cross-border trust.
- Economic costs – Companies must duplicate infrastructure and compliance, slowing innovation.
- Inequality deepens – Weaker regions risk being left behind, forced to adopt systems imposed by stronger players.

Moderating Factors

[00:08:30] Fragmentation isn’t absolute. Several forces hold the Internet together:

- Standards bodies like the IETF and W3C keep core protocols aligned.
- Companies often adopt the strictest regimes globally, simplifying compliance.
- Users expect services to work everywhere, and complain when they don’t.
- Regional cooperation (e.g., EU, ASEAN, African Union) helps maintain partial cohesion.

These factors form the connective tissue that prevents a total collapse.

Possible Future Scenarios

[00:09:45] Looking ahead, I see four plausible scenarios:

1. Soft fragmentation – The Internet stays global, but friction rises. Platforms launch regional versions; compliance costs increase. Opportunity: stronger local ecosystems and regional innovation.
2. Regulatory blocks – Countries form digital provinces with internal harmony but divergence elsewhere. Opportunity: specialization (EU in privacy tech, Africa in mobile-first innovation, Asia in super apps).
3. Technical fragmentation – Shutdowns, divergent standards, and outages become common. Opportunity: mainstream adoption of decentralized and peer-to-peer networks.
4. Pure isolationism – Countries build proprietary platforms, national ID systems, and local chip fabs. Opportunity: preservation of local values and region-specific innovation.

What Can We Do?

[00:12:28] In the face of fragmentation, individuals, companies, and policymakers can take action:

- Advocate for interoperable baselines (encryption, addressing, data portability).
- Design for variation so systems degrade gracefully under different regimes.
- Invest in local capacity: infrastructure, skills, developer ecosystems.
- Encourage regional cooperation through treaties and data agreements.
- Experiment with alternative architectures like mesh networks and decentralized identity.
- Anchor change in human rights: align with UN SDGs, protect freedoms, and center people, not just states or corporations.

Closing Thoughts

[00:15:50] The global Internet as we knew it may be ending — but that isn’t necessarily a tragedy.

Yes, fragmentation creates friction, risks, and inequality. But it also sparks resilience, innovation, and adaptation. In Africa, fintech thrives under fragmented conditions. In Europe, strong privacy laws raise global standards. In Asia, regional trade frameworks offer cooperation despite divergence.

The real question isn’t whether fragmentation is coming — it’s already here. The question is:

What kind of fragmented Internet do we want to build? Which bridges are worth preserving? Which minimum standards — technical, ethical, social — should always cross borders?

These questions shape not only the Internet’s future, but our own.

[00:18:45] Thank you for listening to the Digital Identity Digest. If you found this episode useful, please subscribe to the blog or podcast, share it with others, and connect with me on LinkedIn @hlflanagan.

Stay curious, stay engaged, and let’s keep these conversations going.

The post The End of the Global Internet appeared first on Spherical Cow Consulting.


Ontology

How Smart Accounts and Account Abstraction Fit Together

Since the dawn of Ethereum, interacting with blockchains has meant using Externally Owned Accounts (EOAs) - simple wallets controlled by a private key. While functional, EOAs expose serious limitations: lose your key, and you lose your funds. Want features like spending limits, session keys, or social recovery? You’re left with clunky, layered workarounds. Enter Account Abstraction (AA) and Smart Accounts.

Since the dawn of Ethereum, interacting with blockchains has meant using Externally Owned Accounts (EOAs) - simple wallets controlled by a private key. While functional, EOAs expose serious limitations: lose your key, and you lose your funds. Want features like spending limits, session keys, or social recovery? You’re left with clunky, layered workarounds.

Enter Account Abstraction (AA) and Smart Accounts. Together, these innovations are transforming how users engage with Web3 by merging the flexibility of smart contracts with the usability of traditional wallets. Instead of thinking about wallets as rigid containers of keys, we can now imagine them as programmable, customizable gateways into the blockchain world.

This article explores how Smart Accounts and Account Abstraction fit together, referencing key Ethereum proposals (EIP-4337, EIP-3074, and EIP-7702) and explaining why this combination is essential for building the next wave of user-friendly, secure, and innovative blockchain applications.

What is Account Abstraction?

Account Abstraction is the idea of treating all blockchain accounts as programmable entities. Instead of separating EOAs (controlled by private keys) and contract accounts (controlled by code), AA allows accounts themselves to act like smart contracts.

Key benefits of AA include:

- Gas abstraction: Pay transaction fees in tokens other than ETH.
- Programmable security: Add multisig, time locks, or social recovery.
- Batched transactions: Execute multiple actions in one click.
- Session keys: Grant temporary permissions for games or dApps.
- Upgradability: Evolve wallet logic without replacing accounts.

With AA, wallets evolve from being passive key holders into active smart entities capable of executing logic on behalf of their users.
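To make gas abstraction concrete, here is a minimal TypeScript sketch of an EIP-4337-style UserOperation. The field list is abridged (the full spec also defines fields such as initCode and preVerificationGas, and uses bigint/hex values), and every value below is an illustrative placeholder, not real calldata:

```typescript
// Simplified EIP-4337-style UserOperation (abridged field list;
// all values are illustrative placeholders).
interface UserOperation {
  sender: string;           // address of the smart account contract
  nonce: number;            // replay protection, managed by the account
  callData: string;         // the action(s) the account should execute
  callGasLimit: number;
  maxFeePerGas: number;
  paymasterAndData: string; // non-empty => a paymaster sponsors the gas
  signature: string;        // checked by the account's OWN validation logic
}

// Gas abstraction in one predicate: if a paymaster is attached,
// the user needs no ETH of their own to pay fees.
function isSponsored(op: UserOperation): boolean {
  return op.paymasterAndData !== "0x";
}

const op: UserOperation = {
  sender: "0xAccount",
  nonce: 0,
  callData: "0xdeadbeef",
  callGasLimit: 100_000,
  maxFeePerGas: 30,
  paymasterAndData: "0xPaymaster",
  signature: "0xsig",
};

console.log(isSponsored(op)); // true: fees are paid by the paymaster
```

The design point worth noticing: whether a transaction is sponsored, batched, or signed with a session key is determined by data the account itself interprets, not by a fixed protocol-level rule.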

What are Smart Accounts?

If Account Abstraction is the theory, Smart Accounts are the practice. A Smart Account is simply a blockchain account that operates under the AA model.

Instead of relying on a single private key, a Smart Account:

Runs customizable logic like a smart contract.
Supports flexible authentication methods (biometrics, passkeys, hardware modules).
Allows advanced features such as automatic payments, subscription models, or delegated access.
Provides recoverability through trusted guardians or social recovery mechanisms.

In short, Smart Accounts are the user-facing manifestation of Account Abstraction. They bring abstract design principles into tangible experiences, making Web3 more accessible for everyday users.
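The recoverability feature above can be pictured as a simple threshold check: recovery succeeds once enough distinct guardians approve. The names and threshold logic below are assumptions for illustration, not any specific wallet's implementation.

```python
# Illustrative sketch of guardian-based social recovery, one feature a
# Smart Account can implement. Names and threshold are invented examples.
def can_recover(approvals: set, guardians: set, threshold: int) -> bool:
    """Recovery succeeds when enough distinct, registered guardians approve."""
    return len(approvals & guardians) >= threshold

guardians = {"alice", "bob", "carol"}
print(can_recover({"alice", "bob"}, guardians, threshold=2))      # True
print(can_recover({"alice", "mallory"}, guardians, threshold=2))  # False
```

In a real smart account this check runs on-chain, and a successful recovery rotates the account's signing key rather than moving funds.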

How They Fit Together

Think of Account Abstraction as the architectural blueprint and Smart Accounts as the actual buildings.

1. AA defines the rules: It sets the framework for programmable accounts. Proposals like EIP-4337 specify how transactions are validated and bundled without relying solely on EOAs.

2. Smart Accounts implement the rules: They apply those AA rules to create practical wallets. Through smart contracts, they support features like gasless transactions, account recovery, and key rotation.

Together, AA and Smart Accounts replace the outdated key-wallet model with a flexible, modular system where user experience comes first.

The Role of Key EIPs

Ethereum’s progress toward AA and Smart Accounts has been guided by several proposals:

EIP-4337 (2021): Introduced the concept of a “UserOperation” and “bundlers.” This allows smart accounts to function without requiring changes at the consensus layer. It is the backbone of today’s AA-compatible wallets.

EIP-3074: Enables EOAs to delegate control to contracts temporarily, bridging the gap between old wallets and smart accounts.

EIP-7702 (2024): Builds on EIP-3074 but provides a safer and more streamlined way for EOAs to transition into smart accounts. This is critical for onboarding existing users without forcing them to abandon their current wallets.

Together, these proposals ensure that Smart Accounts are not just theoretical: they’re backward-compatible, forward-looking, and ready for mainstream adoption.
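The UserOperation at the heart of EIP-4337 can be pictured as a structured record. The field names below follow the EIP-4337 specification; the plain-dict form and placeholder values are for illustration only (real tooling uses typed structs submitted to a bundler RPC).

```python
# A UserOperation as defined by EIP-4337, shown as a plain dict for
# illustration. Field names follow the spec; values are placeholders.
user_op = {
    "sender": "0x0000000000000000000000000000000000000000",  # smart account address
    "nonce": 0,
    "initCode": "0x",               # non-empty only on first deployment
    "callData": "0x",               # the action(s) the account will execute
    "callGasLimit": 100_000,
    "verificationGasLimit": 150_000,
    "preVerificationGas": 21_000,
    "maxFeePerGas": 0,
    "maxPriorityFeePerGas": 0,
    "paymasterAndData": "0x",       # set when a paymaster sponsors the gas
    "signature": "0x",              # validated by the account's own logic
}
# Bundlers collect many such operations and submit them together in one
# transaction to the shared EntryPoint contract.
print(len(user_op))  # 11 fields
```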

Why This Matters for Users

For users, the combination of AA and Smart Accounts translates into real-world improvements:

Safety: Lose your key? No problem: recover your wallet using guardians or multi-sig setups.
Simplicity: Pay fees with stablecoins, batch multiple dApp actions into one transaction, or play a blockchain game without constant wallet prompts.
Flexibility: Switch security models as your needs change (e.g., from a simple wallet as a beginner to a multi-sig or hardware-protected wallet as your assets grow).
Innovation: Developers can build richer applications such as subscription-based dApps, automated DeFi strategies, or Web3-native identity systems.

This shifts the user experience from fear of making mistakes to freedom to explore.

A Fresh Perspective: Smart Accounts as Digital Personas

One way to think creatively about Smart Accounts is to view them not just as wallets, but as digital personas.

Just as you might have different identities in real life (personal, professional, or gaming), Smart Accounts allow you to manage multiple digital personas:

A DeFi persona with automated trading strategies.
A gaming persona with session keys and gasless interactions.
A professional persona tied to your DAO contributions.

Each persona can run its own logic while remaining linked to your overall identity. This flexibility makes Web3 personalized and intuitive, much like the evolution from simple feature phones to today’s smartphones.

Practical Takeaways for the Community

Developers: Start experimenting with Smart Account SDKs built on EIP-4337. Building dApps with native AA support will set you apart in the next wave of adoption.
Users: Explore AA wallets like Safe, ZeroDev, or Soul Wallet. Get familiar with recovery options and gas abstraction to see the difference firsthand.
Communities: Advocate for dApps that integrate Smart Accounts, since these models reduce onboarding friction for newcomers.

By engaging now, the community can shape how AA and Smart Accounts evolve, ensuring they remain inclusive, secure, and user-first.

Conclusion

Smart Accounts and Account Abstraction are not isolated innovations; they are two halves of the same revolution. Account Abstraction lays the foundation, while Smart Accounts bring it to life. Together, they unlock a Web3 experience that is safer, simpler, and infinitely more flexible than today’s wallet paradigm.

Just as the smartphone redefined what we expect from communication devices, Smart Accounts will redefine what we expect from blockchain wallets. They are not just tools to hold assets; they are programmable, adaptable, and deeply human-centric gateways into the decentralized world.

The future of Web3 isn’t just about protocols or assets; it’s about empowering people with smarter, safer, and more intuitive digital identities. And that future begins with Smart Accounts, powered by Account Abstraction.

How Smart Accounts and Account Abstraction Fit Together was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Design for Chaos: Fastly’s Principles of Fault Isolation and Graceful Degradation

Learn how Fastly builds a resilient CDN through fault isolation & graceful degradation. Discover our principles for minimizing disruption & ensuring continuous service.

Monday, 06. October 2025

Ockam

Turn Your Users Into Your Distribution Engine


Engineer Word-of-Mouth Growth Without Paid Ads or Big Budgets

Continue reading on Medium »

Sunday, 05. October 2025

Ockam

Network-Driven Distribution: How to Get Others to Spread Your Work


How to get others to distribute your work (even with $0 budget)

Continue reading on Medium »

Saturday, 04. October 2025

Ockam

Distribution: The Missing Link Between Building and Growing


Yesterday we talked about how Product, Marketing, and Sales must work as one system. Today we focus on the piece that determines whether…

Continue reading on Medium »

Friday, 03. October 2025

Ockam

The Three Circles of SaaS Growth


Why Product, Marketing, and Sales Must Work as One

Continue reading on Medium »


1Kosmos BlockID

Customer Identity Verification: Overview & How to Do It Right

Key Lessons

Customer identity verification is critical for fraud prevention, compliance, and building trust in digital business.

Businesses can use layered methods (document verification, biometrics, MFA, and risk scoring) to ensure security without sacrificing user experience.

The biggest challenges include synthetic identity fraud, cross-border verification, and balancing compliance with customer convenience.

Adopting best practices like multi-layered verification, advanced AI, and risk-based frameworks ensures security while streamlining onboarding.

What Is Customer Identity Verification?

Customer identity verification confirms that customers are who they claim to be, using digital tools and data checks. It involves validating personal details and credentials against official records, documents, or biometric identifiers.

The purpose is simple: stop fraudsters at the gate while giving legitimate customers a seamless, trusted onboarding experience. Verification is no longer optional in a world where synthetic identities can be spun up with a stolen Social Security number and a fake address.

Modern verification systems use artificial intelligence, machine learning, and biometrics to increase accuracy and speed dramatically. Instead of forcing customers to wait days while documents are manually reviewed, businesses can now verify identities in minutes—or even seconds—with confidence levels above 99%.

What Are The Different Types Of Customer Identity Verification?

The main types are document-based, biometric, knowledge-based, database verification, and multi-factor authentication (MFA).

Document-based verification checks the authenticity of passports, driver’s licenses, and other government IDs. Modern systems analyze holograms, fonts, and machine-readable zones (MRZs) to detect forgery attempts.
Biometric verification leverages fingerprints, facial recognition, or iris scans. When paired with liveness detection, biometrics are far harder to spoof than traditional credentials.
Knowledge-based authentication (KBA) relies on security questions, but with social media oversharing and widespread data breaches, attackers can easily guess or steal these answers. This method is rapidly losing relevance.
Database verification cross-checks a customer’s details against government, financial, and sanctions databases to validate legitimacy.
MFA strengthens defenses by requiring two or more identity factors: something you know (password), something you have (token), and something you are (biometric).

Each method has strengths and weaknesses, but the most secure strategies don’t pick one; they combine them into a layered, adaptive verification framework.

How Does Customer Identity Verification Work?

Verification breaks down into four stages: data collection, document assessment, identity validation, and risk assessment.

Everything starts with data collection, where customers provide personal details, government-issued IDs, biometrics, and contact information.
Once collected, the data moves to document assessment, where AI tools check submitted IDs for authenticity and signs of tampering. This step catches expired, altered, or synthetic documents before they go any further.
Next is identity validation, where the information gets cross-referenced against trusted government and financial databases. Biometrics are compared to ID photos, while watchlist screenings flag individuals who could pose regulatory or fraud risks.
Last comes risk assessment, which generates a trust score based on behavioral anomalies, device intelligence, geolocation data, and known fraud indicators.

What once stretched across days now happens in seconds, allowing organizations to seamlessly onboard good customers while quietly blocking bad actors.
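The final risk-assessment stage can be sketched as a weighted combination of signals. The signal names, weights, and approval threshold below are invented for illustration; production systems use far richer models.

```python
# A minimal, hypothetical risk-scoring step of the kind described above.
# Signal names, weights, and the 0.8 threshold are invented examples.
def trust_score(signals: dict) -> float:
    weights = {
        "document_authentic": 0.4,
        "biometric_match": 0.4,
        "device_known": 0.1,
        "geo_consistent": 0.1,
    }
    # Sum the weight of every signal that checked out.
    return sum(weights[k] for k, ok in signals.items() if ok and k in weights)

score = trust_score({
    "document_authentic": True,
    "biometric_match": True,
    "device_known": False,   # new device lowers the score
    "geo_consistent": True,
})
print(score >= 0.8)  # True: passes the hypothetical approval threshold
```

A risk-based system would route scores below the threshold to step-up verification or manual review rather than rejecting outright.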

What Are The Challenges To Customer Identity Verification?

Challenges include synthetic fraud, cross-border complexity, balancing user experience with security, advanced attack vectors, and compliance.

Synthetic identity fraud is the fastest-growing financial crime, estimated to reach $23 billion annually by 2030. Attackers stitch together real and fake data to create new “people” that slip past legacy checks.
Cross-border verification struggles with inconsistent ID standards, languages, and regulatory frameworks. A passport in Germany won’t have the same features as a driver’s license in Mexico.
User experience vs. security is a constant balancing act. Too much friction leads to legitimate users abandoning onboarding, while too little lets attackers walk right in.
Advanced attacks like deepfakes, AI-generated voice phishing, and synthetic biometrics make fraud detection harder than ever.
Compliance obligations vary dramatically across sectors. With the General Data Protection Regulation (GDPR) in Europe, Anti-Money Laundering (AML) rules for banks, and the Health Insurance Portability and Accountability Act (HIPAA) for healthcare, standards and regulations run the gamut. Businesses must navigate a minefield of global requirements.

The reality is that fraudsters innovate faster than regulators. That means businesses need adaptive, technology-driven defenses that evolve continuously.

What Are The Best Practices To Customer Identity Verification?

The best practices boil down to multi-layered checks, AI-driven analysis, risk-based frameworks, data security, and compliance alignment.

Multi-layered verification: Mix documents, biometrics, and databases for solid defense in depth.
Advanced AI: Use machine learning models to catch spoofing, deepfakes, and behavioral red flags in real time.
Risk-based approaches: Match verification intensity to transaction risk, with tougher checks for wire transfers and a lighter touch for low-value transactions.
Data protection: Encrypt sensitive data, store it securely, and run regular audits to stay compliant. Or, with blockchain solutions like 1Kosmos, skip centralized data storage entirely and eliminate that major attack vector.
Regulatory alignment: Keep up with changing KYC/AML requirements and privacy laws around the world.

Get these right, and you’ll block fraud while making onboarding so quick and smooth that customers actually choose businesses with stronger verification over the competition.

Why Is Customer Identity Verification Important To Businesses?

It prevents fraud, ensures compliance, builds trust, and drives operational efficiency. By verifying users before granting access, businesses can stop account takeovers, impersonation scams, and synthetic identities. But the benefits go beyond just security. Regulatory compliance, from KYC and AML requirements in financial services to HIPAA rules in healthcare, makes verification a must-have for operations.

In an environment where breaches dominate headlines, demonstrating rigorous verification builds confidence with partners and customers alike.

How Should My Business Verify Customer Identities Step By Step?

Businesses should follow a structured six-step implementation framework.

1. Assess requirements: Figure out your fraud risks, compliance mandates, and customer demographics.
2. Choose methods: Based on your specific risk profile, select verification tools such as document verification or biometrics.
3. Implement technology: Set up APIs, document scanning, and biometric integrations that scale without disrupting your existing systems.
4. Design journeys: Create user-friendly flows that reduce friction without compromising security.
5. Train staff: Make sure employees can escalate suspicious cases, conduct manual reviews, and help customers when needed.
6. Monitor and optimize: Continuously tweak based on fraud detection outcomes, customer drop-off rates, and regulatory changes.

Following this framework ensures verification is both secure and customer-centric.

What Are The Common Customer Identity Verification Methods?

Standard methods include document scanning, facial recognition, fingerprint scans, SMS OTPs, database checks, and MFA.

Some legacy methods are fading. KBA and SMS one-time passcodes, for example, are easily compromised. Attackers can scrape answers from social media or intercept text messages.

By contrast, modern approaches like AI-powered biometrics and blockchain-backed credentials are gaining traction. They’re faster, harder to spoof, and more transparent for users. Forward-looking businesses are already adopting reusable digital identity wallets, allowing customers to authenticate seamlessly across multiple services without re-verifying.

Trust 1Kosmos Verify for Identity Verification

Passwords and outdated MFA create friction for customers, leaving the door open to fraud, account takeovers, and synthetic identities. These obsolete methods slow onboarding, frustrate legitimate users, and fail to deliver the trust today’s digital economy demands.

1Kosmos Customer solves this by replacing weak credentials with a mighty, privacy-first digital identity wallet backed by deterministic identity proofing and public-private key cryptography. In just one quick, customizable registration, legitimate customers are verified with 99%+ accuracy and given secure, frictionless access to services, while fraudsters are stopped at the first attempt. From instant KYC compliance to zero-knowledge proofs that protect sensitive data, the result is a seamless authentication experience that customers love and businesses can rely on.

Ready to eliminate fraud, streamline onboarding, and delight your customers? Discover how 1Kosmos Customer can transform your digital identity strategy today.

The post Customer Identity Verification: Overview & How to Do It Right appeared first on 1Kosmos.


Ocean Protocol

DF157 Completes and DF158 Launches

Predictoor DF157 rewards available. DF158 runs October 2nd — October 9th, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 157 (DF157) has completed.

DF158 is live, October 2nd. It concludes on October 9th. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF158 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF158

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF157 Completes and DF158 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Recognito Vision

AI Face Recognition Explained with Benefits and Challenges


Artificial Intelligence is no longer science fiction. From unlocking your phone to passing through airport security, AI face recognition has become part of daily life. It is powerful, practical, and sometimes a little controversial. But how does it actually work, and where is it headed? Let’s break it down in simple terms.


What is AI Face Recognition

At its core, AI face recognition is a technology that identifies or verifies a person using their facial features. Think of it as a digital detective. It looks at your face the same way you look at a fingerprint, comparing unique details like the distance between your eyes or the curve of your jaw.

This isn’t just about matching a selfie to your phone. The technology is also applied in banking apps, airports, healthcare, and even retail stores. It is driven by facial AI models trained on massive datasets, allowing systems to quickly learn the differences and similarities between millions of faces.


How AI Face Recognition Works

The process might sound complex, but let’s simplify it. The system works in three big steps:

Face Detection AI
The camera identifies that a human face is present. It locates key landmarks such as eyes, nose, and mouth.

Face Encoding
The software converts the face into a unique numerical code. This code is like a fingerprint for your face.

Face Match AI
The system compares this code with stored data to verify identity or find a match.

Step | Action | Real-Life Example
Detection | Identifies a face | Phone camera sees your face
Encoding | Converts to unique code | Creates a “faceprint”
Matching | Compares with database | Unlocks your device

These steps are powered by artificial intelligence face recognition algorithms that become more accurate over time.
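The encode-and-match step boils down to comparing two numeric vectors. A minimal sketch, assuming toy 4-dimensional "faceprints" (real embeddings typically have 128+ dimensions and come from a trained neural network):

```python
# Sketch of face matching: each face is reduced to a numeric vector
# ("faceprint"), and two faces match when their vectors are close.
# The 4-dimensional vectors are toy stand-ins for real embeddings.
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means identical direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled = [0.12, 0.85, 0.33, 0.40]   # stored faceprint
probe    = [0.11, 0.83, 0.35, 0.41]   # new capture of the same person
print(cosine_similarity(enrolled, probe) > 0.95)  # True: treat as a match
```

Real systems pick the match threshold by balancing false accepts against false rejects, which is exactly what benchmarks like NIST FRVT measure.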


Accuracy and Global Benchmarks

Not all systems are created equal. Some are lightning fast with near-perfect accuracy, while others struggle in low light or with diverse facial features. The NIST Face Recognition Vendor Test (FRVT) has become the gold standard for measuring how well different systems perform.

Visit NIST FRVT for performance data.

Explore detailed evaluation results on FRVT 1:1 tests.

These benchmarks give businesses and governments confidence before deploying large-scale projects.


Everyday Uses of Facial AI

You may not notice it, but facial AI is everywhere. Here are some real-world applications:

Smartphones: Unlocking devices without passwords.

Airports: Quicker boarding with automated gates.

Healthcare: Patient verification for secure records.

E-commerce: AI face search for trying products virtually.

Banking: Identity checks for fraud prevention.

Fun fact: Some retailers even use AI facial systems to analyze customer demographics and improve shopping experiences.

Privacy Concerns and Regulations

With great power comes great responsibility. While the technology is convenient, it also raises concerns about surveillance and misuse. Governments are stepping in with data protection laws like the GDPR to ensure individuals have control over their biometric data.

Companies using AI face recognition must follow strict compliance rules such as:

Informing users how their data will be used.

Allowing opt-outs where possible.

Storing encrypted biometric data securely.

Failure to follow these rules can lead to massive fines and public backlash.


Challenges Facing Face Detection AI

Even with rapid progress, the technology isn’t flawless. Common challenges include:

Bias in datasets: Some systems perform better on certain skin tones.

Spoofing attempts: Photos or videos tricking the system.

Environmental issues: Poor lighting or extreme angles can reduce accuracy.

To tackle spoofing, researchers are exploring liveness detection techniques, making sure the system knows the difference between a real human face and a photo.

The Future of AI and Face Recognition

Looking ahead, experts believe AI face recognition will only get smarter. Here are a few trends shaping the future:

Edge computing: Processing done on local devices for speed and privacy.

Cross-industry adoption: From gaming to education, new uses are emerging.

Open-source innovation: Platforms like Recognito GitHub encourage collaboration and transparency.

As systems improve, the balance between convenience and privacy will continue to dominate the conversation.


Final Thoughts

AI face recognition is changing the way the world verifies identity. It simplifies daily tasks, strengthens security, and opens doors to new possibilities. Yet, it also comes with challenges like privacy risks and the need for unbiased data. With organizations such as NIST setting global benchmarks and strict regulations like GDPR shaping policy, the future looks promising but carefully monitored.

And as innovation keeps moving forward, one name that continues to contribute in this space is Recognito.


Frequently Asked Questions


1. What is AI face recognition used for

AI face recognition is used for unlocking smartphones, airport security checks, banking identity verification, and even retail experiences like virtual try-ons.

2. How accurate is face detection AI

Accuracy depends on the system. Some advanced tools tested by NIST FRVT report accuracy rates above 99 percent, especially in controlled environments.

3. Can AI face search find someone online

AI face search can match faces within specific databases or platforms, but it cannot scan the entire internet. Accuracy depends on the size and quality of the database.

4. Is AI facial recognition safe to use

Yes, when regulated properly. Systems that follow privacy rules like GDPR and use encryption keep user data protected.

5. What is the difference between face match AI and face detection AI

Face detection AI only spots if a face is present. Face match AI goes further by verifying if the detected face matches an existing one in the database.


uquodo

How AI is Enhancing Sanctions Screening and Adverse Media Monitoring

The post How AI is Enhancing Sanctions Screening and Adverse Media Monitoring appeared first on uqudo.

Thursday, 02. October 2025

Holochain

Finding Our Edge: A Strategic Update

Blog

I want to share the Holochain Foundation’s evolving strategic approach to our subsidiary organizations, Holo and Unyt.

Strategic work always involves paying attention to the match between your efforts and where the world is ready to receive those efforts. Since our inception there has always been a small group of supporters who have understood the potential and need for the kind of deep p2p infrastructure that we are building, which allows for all kinds of un-intermediated direct interactions and transactions. But at this moment in time we are seeing a new convergence.

As Holochain is maturing significantly, the mainstream world is also maturing into an understanding of the need for p2p networks and processes. As my colleague Madelynn Martiniere says: “we are meeting the moment and the moment is meeting us.”

And there’s a key domain in which this is happening: the domain of value transfer.  

The Unyt Opportunity

As you know, the foundation created a subsidiary, Unyt, to focus on building HoloFuel, the accounting system for Holo to use for its Hosting platform. But it turns out that the tooling Unyt built has a far broader application than we had initially realized. This is part of the convergence, and also a huge opportunity.

Unyt’s tooling turns out to be what people are calling “payment rails”: generalized tooling for value tracking, and because it’s built on Holochain, it’s already fully p2p. There is a huge opportunity for this technology to bring the deep qualitative value that p2p provides: increased transparency, agency, reduced cost, and privacy. And also in huge volumes: when talking about digital payments and transactions, you count in the trillions.

The implications are huge, and they need and deserve the focus of the Foundation and our resources so we can fully develop the opportunity ahead of us.

Interactions with Integrity: Powered by Holochain

Our original mission was to provide the underlying cryptographic fabric to make decentralized computing both easy and real - and ultimately, at a scale that could have a global impact.

That mission remains intact. The evolution we’re sharing today is not only directly connected to our original strategy and a logical extension of it, but one that we believe will - over time - substantially increase the scale of, and opportunities for, everyone within the Holochain ecosystem.

When we introduced the idea of Holochain and Holo to the world in December of 2017, our goal was to provide a technology infrastructure that allowed people to own their own data, control their identities, choose how they connect themselves and their applications to the online world, and intrinsic to all of the above, interact and transact directly with each other as producers and consumers of value.

The foundation of the Holochain ecosystem has thus always required establishing a payment system where every transaction is an interaction with integrity: value is supported by productive capacity, validated on local chains (vs. global ledgers) by a unit of accounting - in our case, HoloFuel - and value and supply are grounded by a real-world service with practical value.

The Holochain Foundation entity charged with developing and delivering the technology infrastructure for this payment system is Unyt Accounting. 

For almost a year now, the team at Unyt has been quietly working hard to develop the payment rails software that will permit users to build and deploy unique currencies (including HoloFuel), allow those currencies to circulate and interact, and ensure the integrity of every transaction. As it turns out, we got more than we bargained for, in the best possible way.

Meaning: in Unyt, we have software that not only enables HoloFuel, but also offers a brilliant way to link into both the blockchain and crypto world and the non-crypto world. As Holochain matures, with the application of Unyt technology, we see a major opportunity in the peer-to-peer payments space, and a chance to lead the non-financial transaction space.

These are, objectively, huge markets, as Unyt products and tools are not only aimed squarely at solving real-world crypto accounting and payment challenges, but will combine to create the infrastructure needed to launch HoloFuel, and additionally address multiple real-world use cases for anyone interested in high-integrity, decentralized, digital interactions.

Given Unyt’s progress, we arrived at a point where it became clear to everyone on our leadership team that it was time to make an important strategic decision about where to best devote our focus, time, and resources. 

Strategic Decisions and Our Path Forward

Here’s where we landed:

When we reorganized Holo Ltd. last year, it was because we wanted to spur growth, and felt having a focus on a commercial application could expand the number of end users. But, it also put us into competition with some of the largest and best-capitalized tech companies on the planet. 

We haven’t gotten enough traction yet for this to be our sole strategy. As part of our ongoing evaluation over the last months, the Holo dev team pursued an exploration of a very different approach - both technical and structural - to deploying Holochain always-on nodes.

Holo is calling it Edge Node, an open-source, OCI-compliant container image that can be run on any device, physical or virtual, to serve as an always-on node for any hApp.

Today, Edge Node is available on GitHub for technically savvy folks to use. You can run the Docker container image or opt to install via the ISO onto HoloPorts or any other hardware.

What’s different about this experiment is that it appeals to a much wider audience - those familiar with running Docker containers, rather than the smaller audience who know Nix. And we’re releasing it now, as open source, actively seeking immediate feedback from the community on how this might evolve and contribute to Holo’s goals.

Second, it is equally clear we need to accelerate the timeline for Unyt. Unyt’s software underpins the accounting infrastructure necessary to create and launch HoloFuel, and subsequently allow owners of HOT to swap into it. More broadly, the multiple types of connectivity Unyt can foster have enormous potential to influence the size, growth, and overall value of Holochain - it is the substrate of peer-to-peer local currencies, and the foundation for future DePIN alliances. 

This acceleration is already under way - in fact, Unyt has released its first community currency app, Circulo, which is meant for real-world use but also acts as proof-of-concept for the broader Unyt ecosystem.

Third, and finally, the Holochain Foundation will continue to focus on the stability and resilience of the Holochain codebase, prioritize key technical features required for the currency swap execution, and remain at the center of all our entities to ensure cohesion and coordination.

Leadership Transition

As part of the next stage of Holo’s evolution, I want to share an important leadership update.

Mary Camacho, who has served as Executive Director of Holo since 2018, will be stepping down from that role, and I will be stepping in. Mary will continue to support Holo during this transition, particularly in guiding financial and strategic planning. We are deeply grateful for her years of leadership, steady guidance, and dedication to Holo’s vision.

At the same time, we also thank Alastair Ong, who has served as a Director of Holo, for his contributions on the board. We wish him the very best in his next endeavors.

These transitions mark a natural shift in leadership that allows Holo to move forward with renewed focus, alongside ongoing collaboration with Unyt and the wider Holochain ecosystem.

Looking Ahead

From the outset, we knew we were undertaking an extraordinary challenge. In conceiving of and developing Holochain, we set out to compete with some of the largest, best-resourced, and most powerful companies in the world. No part of what we have done, or intend to do, has been easy. 

In many ways Holochain has always been a future-looking technology that users had difficulty fully appreciating and adopting at scale. Now, the world seems to have caught up to us, and is interested in implementing peer-to-peer networks and processes away from centralized structures. 

When we formed Unyt to build the software infrastructure to permit the creation of, and accounting for, HoloFuel, we also caught up to the world: a major opportunity has emerged (the volume of digital payments and transactions last year alone is measurable in the trillions).

We’ve spent a long time working to deliver on our commitments to our community, and there is much still to do.

As challenging as it is not to have crossed the finish line yet, it’s exciting to see it appearing on the horizon. We continue to experiment with how to best expand the potential for Holo hosting. And with Unyt, what we’re proposing to do here - if we are successful - is significantly grow the scale, potential, optionality, and value of every aspect of the Holochain ecosystem. 

For those interested, please take the time to watch our most recent livestream, where we talk about this evolution and the opportunities it represents for all of us. 

We have a lot to look forward to, and we look forward to continuing to work closely with our most valuable, and reliable, resource: you, the members of the Holochain community.

Wednesday, 01. October 2025

liminal (was OWI)

Third-Party Fraud: The Hidden Threat to Business Continuity


Last week marked our sixth Demo Day, this one focused on Fighting Third-Party Fraud. Ten vendors stepped up to show how their solutions tackle account takeover (ATO), business email compromise (BEC), and synthetic identity fraud. Each had 15 minutes to prove their case, followed by a live Q&A with an audience of fraud, risk, and security leaders.

Across the sessions, a consistent theme emerged: the biggest shift in the fraud prevention market isn’t in the tactics fraudsters use, but how enterprises are buying solutions. Detection is expected; what matters now is whether a tool can keep the business running without stalling growth or turning away good customers. Buyers want assurance that fraud prevention supports stability by keeping customers moving, revenue intact, and trust unbroken when fraud inevitably spikes.

What is third-party fraud?

For readers outside the space, third-party fraud happens when criminals exploit someone else’s identity to gain access. Unlike first-party fraud, where the individual misrepresents themselves, third-party fraud relies on stolen or fabricated credentials to impersonate a trusted user.

Classic examples include:

- Account takeover (ATO): hijacking legitimate accounts, often through phishing or stolen credentials
- Business email compromise (BEC): impersonating executives or vendors to redirect payments
- Synthetic identity fraud: blending real and fake data to create convincing personas

In 2024, consumers reported losing $12.5 billion to fraud, a 25% jump year-over-year and the highest on record. Account takeover attacks alone rose nearly 67% in the past two years as fraudsters leaned on phishing, social engineering, and increasingly AI-driven methods.

As Miguel Navarro, Head of Applied Emerging Technologies at KeyBank, put it: “Think about deepfakes like carbon monoxide — you may think you can detect it, but honestly, it’s untraceable without tools.” That risk is no longer theoretical; it’s already showing up in contact centers and HR pipelines.

Walking the friction tightrope

Every fraud solution has to walk a tightrope: protect the business without slowing customers down. In this Demo Day, that balance dominated the Q&A, with audience questions returning to the same trade-offs: What happens when onboarding drags? How are false positives handled? Where do manual reviews fit?

Miguel also added: “…a tool might be a thousand times more effective, but if it’s too complex for teams to adopt, it’s effectively useless.”

Providers responded with different approaches. Several leaned on behavioral and device-based analytics to make authentication seamless, layering signals like keystroke patterns and device intelligence so genuine users pass in the background. Others showed risk-based orchestration, combining machine learning models and workflows so only high-risk activity triggers extra checks.
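To make that orchestration pattern concrete, here is a minimal sketch of how it might work. All names, signals, and weights below are illustrative assumptions, not any vendor's actual API: weighted risk signals are combined into a session score, and a step-up check is triggered only above a threshold, so genuine users pass silently in the background.

```python
# Illustrative risk-based orchestration sketch (not any vendor's actual API):
# combine weighted risk signals into a score, and require a step-up check
# only when the score crosses a threshold, so genuine users pass silently.

SIGNAL_WEIGHTS = {
    "new_device": 0.4,
    "impossible_travel": 0.5,
    "keystroke_anomaly": 0.3,
    "known_device": -0.3,  # familiar signals can lower risk
}
STEP_UP_THRESHOLD = 0.5

def risk_score(signals):
    """signals: iterable of signal names observed for the session."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def next_action(signals):
    """Return 'step_up' only for high-risk sessions; everyone else flows through."""
    return "step_up" if risk_score(signals) >= STEP_UP_THRESHOLD else "allow"

# A returning user on a known device is never interrupted:
assert next_action(["known_device"]) == "allow"
# Multiple risky signals trigger extra verification:
assert next_action(["new_device", "impossible_travel"]) == "step_up"
```

In a real deployment, the weights would come from trained models and feedback loops rather than a static table, but the control flow — score first, add friction only when warranted — is the same.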

Protecting customers from themselves

One theme that stood out was how solutions are evolving to address social engineering. As Mzu Rusi, VP of Product Development at Entersekt, explained: “It’s not enough to protect customers from outsiders — sometimes we have to protect them from themselves when they’re being socially engineered to approve fraud.”

That means fraud platforms are no longer judged only on blocking malicious logins. They’re also expected to intervene in context, analyzing signals like whether the user is on a call while approving a transfer, or whether a new recipient account shows signs of mule activity.

Human touch as a deterrent

Technology was the backbone of every demo, but Proof emphasized how human interaction remains a powerful fraud defense. Lauren Furey, Principal Product Manager, shared how stepping up to a live identity verification can shut down takeover attempts while preserving trust: “The deterrence of getting a fraudster in front of a human with these tools is enormous. Strong proofing doesn’t have to feel heavy, and customers leave reassured rather than abandoned.”

This balance — minimal friction for real customers, targeted intervention for fraudsters — ran through the day.

From fraud loss to balance sheet risk

Fraud was reframed as a balance sheet problem, not just a technology one. As Sunil Madhu, CEO & Founder of Instnt, put it: “Fraud is inevitable. Fraud loss is not. For the first time, businesses can transfer that risk off their balance sheet through fraud loss insurance.”

That comment landed because it spoke to CFO and board-level concerns. Fraud is no longer just an operational hit; it’s a financial exposure that can be shifted, managed, and priced. But shifting fraud into financial terms doesn’t reduce the pressure on prevention teams — it only raises the bar for the technology that keeps fraud within acceptable limits.

How detection is evolving

On stage, several demos highlighted identity and device scoring as the new baseline, layering biometrics, transaction history, and tokenization to judge risk in milliseconds. Others pushed detection even earlier in the journey, using pre-submit screening to catch bad actors before they hit submit.

Machine learning also played a central role in the demos. Several providers showed how adaptive models can cut down false positives while continuously improving through feedback loops. Phil Gordon, Head of Solution Consulting at Callsign, described it as creating a kind of “digital DNA”: “Every customer develops a digital DNA — how they type, swipe, or move through a session. That lets us tell genuine users apart from bots, malware, or account takeover attempts in milliseconds.”

That theme carried into the fight against synthetic identities. Alex Tonello, SVP Global Partnerships at Trustfull, explained how fraudsters engineer personas to slip through traditional checks: “Synthetic fraudsters build identities with new emails, new phone numbers, no history. By checking hundreds of open-source signals at scale, we see right through that façade.”

Others extended the conversation to fraud at the network level. Artem Popov, Solutions Engineer at Sumsub, noted: “Fraudsters reuse documents, devices, and identities across hundreds of attempts. By linking those together, you expose entire networks — not just single bad actors.”
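The network-level linking Popov describes can be sketched in a few lines. This is an illustrative toy, not Sumsub's product: attempts that share a device, document, or phone number are merged into the same group with a union-find structure, exposing a network rather than isolated bad actors.

```python
from collections import defaultdict

# Illustrative sketch (not Sumsub's product): link fraud attempts that share
# a device, document, or phone number, then group them into networks.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def fraud_networks(attempts):
    """attempts: list of dicts with an 'id' plus optional shared attributes."""
    uf = UnionFind()
    seen = defaultdict(list)  # (attribute, value) -> attempt ids
    for a in attempts:
        for key in ("device", "document", "phone"):
            if a.get(key):
                seen[(key, a[key])].append(a["id"])
    for ids in seen.values():
        for other in ids[1:]:
            uf.union(ids[0], other)
    groups = defaultdict(set)
    for a in attempts:
        groups[uf.find(a["id"])].add(a["id"])
    return [g for g in groups.values() if len(g) > 1]

attempts = [
    {"id": "a1", "device": "D1", "document": "P9"},
    {"id": "a2", "device": "D1", "phone": "555"},
    {"id": "a3", "document": "P9"},
    {"id": "a4", "device": "D7"},  # no shared attributes: not part of a network
]
assert fraud_networks(attempts) == [{"a1", "a2", "a3"}]
```

Three attempts that never reused the same pair of attributes still collapse into one network through transitive links (a1 shares a device with a2 and a document with a3), which is the point of graph-based detection.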

The boardroom shift

Fraud used to be a line item in operations, managed quietly by fraud prevention teams and written off as the cost of doing business. That’s no longer the case. The scale of losses, reputational damage, and operational disruption means fraud has moved up the agenda and onto boardrooms.

Executives now face a harder challenge: choosing tools that don’t just stop fraud, but that protect business continuity. They want proof that investments in prevention will keep revenue flowing when attacks spike, not just reduce fraud losses on a spreadsheet. Boards are asking whether controls are strong enough to protect customer trust, whether onboarding processes can scale without breaking, and whether the business can keep moving if a wave of account takeovers hits overnight.

They are right to pay attention. Fraud and continuity now rank among the top five enterprise risks. Technology shifts like Apple and Google restricting access to device data are making established defenses less reliable, reframing fraud not only as a security issue but as a continuity problem.

Watch the Recording

Did you miss our Third-Party Fraud Demo Day? You can still catch the full replay of vendor demos and expert insights:
Watch the Third-Party Fraud Demo Day recording here

Key Takeaways

- Liminal’s sixth Demo Day spotlighted 10 vendors tackling third-party fraud.
- Global fraud losses are nearing $1 trillion annually, with ATO alone costing banks $6,000–$13,000 per incident.
- Audience Q&A revealed that the hardest problems are manual reviews, onboarding delays, and false positives.
- Leading vendors balance speed, scale, and user experience, reducing both fraud losses and abandonment.
- Fraud prevention has shifted from a back-office function to a board-level resilience strategy.

The post Third-Party Fraud: The Hidden Threat to Business Continuity appeared first on Liminal.co.


HYPR

Announcing the HYPR Help Desk Application: Turn Your Biggest Risk into Your Strongest Defense


The call comes in at 4:55 PM on a Friday. It’s the CFO, and she’s frantic. She’s locked out of her account, needs to approve payroll, and her flight is boarding in ten minutes. She can’t remember the name of her first pet, and the code sent to her phone isn’t working. The pressure is immense. What does your help desk agent do? Do they bypass security to help the executive, or do they hold the line, potentially disrupting a critical business function?

This isn’t a hypothetical scenario; it's a daily, high-stakes gamble for support teams everywhere. And it’s a gamble that attackers are counting on. They know your help desk is staffed by humans who are measured on their ability to resolve problems quickly. They exploit this pressure, turning your most helpful employees into unwitting accomplices in major security breaches. It's time to stop gambling.

Why Is Your Help Desk a Prime Target for Social Engineering?

The modern IT help desk is the enterprise's nerve center. It’s also its most vulnerable entry point. According to industry research, over 40% of all help desk tickets are for password resets and account lockouts (Gartner), each costing up to $70 to resolve (Forrester). This makes the help desk an incredibly attractive and cost-effective target for attackers.

Why? Because social engineers don't hack systems; they hack people. They thrive in environments where security relies on outdated, easily compromised data points:

- Knowledge-Based Questions (KBA): The name of your first pet or the street you grew up on isn't a secret. It's public information, easily found on social media or purchased for pennies on the dark web.
- SMS & Email OTPs: Once considered secure, one-time passcodes are now routinely intercepted via SIM swapping attacks and sophisticated phishing campaigns.
- Employee ID Numbers & Manager Names: This information is often exposed in data breaches and is useless for proving real-time identity.

Relying on this phishable data forces your agents to become human lie detectors, a role they were never trained for and a battle they are destined to lose. The result is a massive, unmitigated risk of help desk-driven account takeover.

Shifting from Guesswork to Certainty with HYPR's Help Desk App

Today, we're fundamentally changing this dynamic. To secure the help desk, you must move beyond verifying what someone knows and instead verify who someone is. That's why we're proud to introduce the HYPR Affirm Help Desk Application.

This purpose-built application empowers agents by integrating phishing-resistant, multi-factor identity verification directly into their workflow. Instead of asking agents to make high-pressure judgment calls, we give them the tools to verify identity fast, with NIST IAL 2 assurance. This transforms your help desk from a primary target into a powerful line of defense against fraud.

How Can You Unify Identity Verification for Every Help Desk Scenario?

The core of the solution is the HYPR Affirm Help Desk App, a command center for agents that integrates seamlessly with your existing support portals (like ServiceNow or Zendesk) and ticketing systems. This provides multiple, flexible paths to resolution, ensuring security and speed no matter how an interaction begins.

Initiate Verification from Anywhere:

- Self-Service: Empower users to resolve their own issues by launching a secure verification flow directly from your company's support portal.
- Agent-Assisted: For live calls or chats, an agent can use the HYPR Help Desk App to instantly send a secure, one-time verification link via email or SMS.
- User-Initiated (with PIN): A user can start the verification process on their own and receive a unique PIN. They provide this PIN to a support agent, who uses it to look up the verified session, ensuring a fast and secure handoff without sharing any PII.

Verify with Biometric Certainty:
The user is guided to scan their government-issued photo ID with their device's camera, followed by a quick, certified liveness-detecting selfie. This isn't just a photo match; the liveness check actively prevents spoofing and deepfake attacks, proving with certainty that the legitimate user is physically present and in control of their ID.

Resolve with an Immutable Audit Trail:
Once verification is complete, the result is instantly reflected in the agent's Help Desk App. The agent can now confidently proceed with the sensitive task – whether it's a password reset, MFA device recovery, or access escalation. Every step is logged, creating a tamper-proof, auditable record that satisfies the strictest compliance and governance requirements.

HYPR vs. Legacy Methods: A New Reality for Help Desk Security

The gap between traditional methods and modern identity assurance is staggering. One relies on luck, the other on proof.

End the Gamble: Stop Account Takeover at the Help Desk

Your organization can't afford to keep rolling the dice. Every interaction at your help desk is a potential entry point for a catastrophic breach. The pressure on your agents is immense, the methods they've been given are broken, and the attackers are relentless.

But there is a different path. A path where certainty replaces guesswork. Where your support team is empowered, not exposed. Where your help desk transforms from a cost center and a risk vector into a secure, efficient enabler of the business. By removing the impossible burden of being human lie detectors, you free your agents to do what they do best: help people. Securely. 

Ready to secure your biggest point of contact? Schedule your personalized HYPR Affirm demo today.

Frequently Asked Questions about HYPR Affirm’s Help Desk App (FAQ)

Q. What is NIST IAL 2 and why is it important for help desk verification?
A: NIST Identity Assurance Level 2 (IAL 2) is a standard from the U.S. National Institute of Standards and Technology. It requires high-confidence identity proofing, including the verification of a government-issued photo ID. For help desk scenarios, meeting this standard ensures you are protected against sophisticated attacks, including deepfakes and social engineering, and is crucial for preventing fraud.

Q. How long does the verification process actually take for the user?
A: The entire user-facing process, from receiving the link to scanning an ID and taking a selfie, is designed for speed and simplicity. A typical full verification is completed in under 2 minutes, and the process is completely configurable.

Q. What happens if a user doesn't have their physical ID available?
A: HYPR Affirm's policy engine is fully configurable. While ID-based verification is the most secure method, organizations can define alternative escalation paths and workflows to securely handle exceptions based on their specific risk tolerance and needs.

Q. Is this solution just for large enterprises?
A: HYPR Affirm for Help Desk is for any organization that needs to eliminate the significant risk of account takeover fraud originating from support interactions. It scales from mid-sized companies to the world's largest enterprises, securing sensitive tasks like password resets, MFA recovery, and access escalations.


Dark Matter Labs

Many-to-Many: The Messy, Meta-Process of Prototyping on Ourselves


Welcome back to our ongoing reflections on the Many-to-Many project. In our last three posts, we’ve taken you through the journey of building our digital platform — from initial concepts and wrestling with complexity to creating our first tangible outputs like the Field Guide and Website. We’ve shared how the project’s tools have emerged from a living, iterative process.

Today, we’re taking a step back to look at the foundational methodology behind this entire initiative. How do you go about creating new models for collaboration when no blueprint exists? Our approach has been a “proof of possibility” — a live experiment where we, along with our ecosystem of partners, served as the primary test subjects.

In this post, the initiative’s co-stewards, Michelle and Annette, discuss the profound challenges and unique learnings that come from trying to build the plane while flying it.

How the Proof of Possibility fits within a wider context of predecessor work, and flows into other initiatives and partial testing in live contexts

Michelle: We wanted to reflect on the “proof of possibility” we ran, where we essentially decided to live prototype on ourselves with a small group of partners in a Learning Network. While it sounds simple, we learned it’s incredibly complex. You’re making decisions and sense-making within a specific prototype, but you’re also constantly trying to translate those learnings into something more generalised and applicable for others. In many ways, it’s a cool, experimental way of working, but it was also a bit of a nightmare.

The prototype, test, learn loop that we started to develop in the Proof of Possibility

Annette: It was very meta. In this proof of possibility, one of the things we were testing was a learning infrastructure for the ecosystem itself. So you’re testing learning within the experiment, while also prototyping the experiment, and then you have to step back and ask: what did we learn from this specific context versus what is context-agnostic and applicable elsewhere? Then there’s another layer: what did we learn about the wider external landscape and its readiness for this work? And finally, what did we learn about the process of learning about all of that? There’s this feeling of learning about learning about learning.

It’s representative of the fractal nature of this work. For instance, we were a core team working on our own governance while simultaneously orchestrating and supporting the ecosystem’s governance. The ecosystem itself was then focused on building capabilities of the system for many-to-many governance. It was navigating so many layers. On one hand, this has immense value because you’re looking at one question from multiple angles at once. On the other hand, it has been incredibly cognitively challenging.

Michelle: It’s that old adage of trying to build the plane whilst flying it — except there are no blueprints for the plane. I think the complexity we bumped into is probably present for anyone trying to do this kind of work, because everyone has to work at fractals all the time. So I was thinking, what are some things we bumped into, and how did we overcome them? The first breakthrough that comes to my mind was when we started to explicitly ask, “Are we talking about this specific prototype right now, or are we talking about the generalised model?” Just having that clear distinction, a shared vocabulary that the whole learning network could use, was a huge moment of alignment for us. It gave people a way to see we were working on at least two layers at the same time.

The draft “Layers of the Project” which was created during the project as a visual representation and description of the different spaces we were trying to hold and build all at once. We note that the thinking has evolved and this image has been superseded, but share it here as a point in time image.

Annette: Yes and we found that the difference in thinking required for each of those layers was huge. Thinking through the specifics of what we did in one context versus pulling out principles applicable across all/any contexts was such a massive gear shift. Turning a specific example — “here’s something we tried” — into a generalised tool — “here’s something useful for others” — was probably a five-fold increase in workload, if not more. The amount of planning and thinking required was significantly different.

Michelle: What else comes up for you from this experience of prototyping on ourselves?

If nothing comes to mind, I can jump in. For me it was the dynamic of being the initiators. We were the ones who convened the group and set the mission. In these complex collaborations, the initiator tends to hold a lot of relational capital, power, and responsibility. This was exacerbated because we were managing all these different layers of learning. It centralised the knowledge and the relational dynamics back to us. If one of us was missing from a budget conversation, for example, it was difficult for others to proceed. For me, the bigger point is that to do good demonstration work, it has to be experimental and emergent. But that doesn’t come for free; it has downsides. This re-centralisation was one of them, and it was a lot for us to hold.

Annette: That makes me wonder if a certain degree of that centralisation is inevitable in organising for these kind of ‘proof of possibilities’. When something is this complex and emergent, you can only distribute so much, so early. To meet the real-time needs of the collaboration, you need an agile core team. This is where it gets interesting — we were operating in the thin space between a sandbox environment and a live context. It had to be a genuine live context for people to want to participate, but it was also a sandbox for testing the general model. You have to meet the timelines of the live context; you can’t just pause for six months to work out team dynamics, or the collaboration collapses. So you almost need a team providing strong leadership to hold both realities at once.

Michelle: So, would you do it the same way again?

Annette: I think if we did it again, the things we’ve learned would make it smoother. We’d be more explicit from the start about which layer we’re discussing. We’d have a better sense of how to capture live learning and translate it into a model as we go. When we started, most of our attention was on hosting the live context, and a lot of the synthesis happened afterwards. Having done it once, I’d be more conscious of doing that synthesis in real-time — though the cognitive lift to switch between those modes is still immense.

Michelle: I agree, I would do it again with those additions. The other thing is that when we started, we didn’t even really have the process that we wanted to go through. Now we do. We’ve learned more about what works. Starting fresh, we would have a decent sketch of a process to begin with. Not perfect, and you still have to wing it, but it’s a good start. I’d be interested to do it again and see what happens.

This meta-reflective process — learning about learning while doing — has been a central part of the Many-to-Many initiative creating a ‘Proof of Possibility’ as a way to learn about what’s possible at a system level. While navigating these fractal layers is cognitively demanding, it’s what allows for true emergence, distinguishing this deep, systemic work from simple chaos. It is a messy, challenging, and ultimately fruitful way to discover what’s possible.

In the Many-to-Many website [coming soon] you will find some resources based on what we did in the Proof of Possibility (Experimenter’s Logs and example methods and artefacts like the Contract) and some based on what might be applicable across contexts (a Field Guide, some tools and an overview of System Blockers we’ve encountered) along with case studies and top tips from other contexts in the learning network.

Thanks for following our journey. You can find our previous posts [here], [here] and [here] and stay updated by joining the Beyond the Rules newsletter [here].

Visual concept by Arianna Smaron & Anahat Kaur.

Many-to-Many: The Messy, Meta-Process of Prototyping on Ourselves was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

Decentralized identity: The superpower every 2026 budget needs

The post Decentralized identity: The superpower every 2026 budget needs appeared first on Indicio.
Verifiable Credentials are the foundation for faster, safer, and more cost-effective digital strategy. In this new report from Indicio, we look at examples of successful deployments, explain the benefits to business, and spell out the risks of waiting too long to adopt this technology. We also explain how to eliminate the cost and uncertainty of developing from scratch, laying out a blueprint for making adoption simple.

By Helen Garneau

Every 2026 budget decision will come down to a simple question: does this investment deliver measurable value? Leaders are expected to cut costs, reduce risk, and still deliver growth. In that environment, the way you handle digital identity can no longer be an afterthought—it has to be a priority.

This is especially true as identity fraud accelerates across all fronts, driven by generative AI brute force attacks, deepfakes, and social media scams. Legacy technology isn’t just failing to keep up, it’s the root cause of these problems.

That is why we wrote Decentralized Identity: The Superpower Every 2026 Budget Needs. It explains why Verifiable Credentials are a transformational new technology that combines authentication and fraud prevention in one simple, cost-effective solution that you can easily inject into your systems and operations.

Can you inoculate your IAM processes against deepfakes?

Yes you can — by incorporating authenticated biometrics into Verifiable Credentials. We explain how organizations are already doing just that to cut fraud and costs, and how you can too, by showing a practical path for adoption.

Now is the time to act. As 2026 budgets are finalized, the organizations that plan for Verifiable Credentials today will be the ones positioned to lead their markets. Get in-depth knowledge and actionable insights that you can turn into immediate savings.

Download the report and see how Indicio Proven can help you reduce costs, protect against fraud, and accelerate growth in 2026.

The post Decentralized identity: The superpower every 2026 budget needs appeared first on Indicio.


BlueSky

Bluesky's Patent Non-Aggression Pledge

Bluesky develops open protocols. We're taking a short and simple patent non-aggression pledge to ensure that everybody feels confident building on them.

Bluesky develops open protocols, and we want everybody to feel confident building on them. We have released our software SDKs and reference implementations under Open Source licenses, but those licenses don’t cover everything. To provide additional assurance around patent rights, we are making a non-aggression pledge.

This commitment builds on our recent announcement that we’re taking parts of AT to the IETF in an effort to establish long-term governance for the protocol.

Specifically, we are adopting the short and simple Protocol Labs Patent Non-Aggression Pledge:

Bluesky Social will not enforce any of the patents on any software invention Bluesky Social owns now or in the future, except against a party who files, threatens, or voluntarily participates in a claim for patent infringement against (i) Bluesky Social or (ii) any third party based on that party's use or distribution of technologies created by Bluesky Social.

This pledge is intended to be a legally binding statement. However, we may still enter into license agreements under individually negotiated terms for those who wish to use Bluesky Social technology but cannot or do not wish to rely on this pledge alone.

We are grateful to Protocol Labs for the research and legal review they undertook when developing this pledge text, as part of their permissive intellectual property strategy.


FastID

Fastly's Seven Years of Recognition as a Gartner® Peer Insights™ Customers’ Choice

Fastly was named a 2025 Gartner® Peer Insights™ Customers’ Choice for Cloud WAAP, marking seven consecutive years of recognition driven by customer trust and reviews.

Tuesday, 30. September 2025

Mythics

Mythics' Strategic Acquisitions Amplify Cloud-Powered, AI-Driven Transformation at Oracle AI World

The post Mythics' Strategic Acquisitions Amplify Cloud-Powered, AI-Driven Transformation at Oracle AI World appeared first on Mythics.

Spherical Cow Consulting

Delegation and Consent: Who Actually Benefits?

When not distracted by AI (which, you have to admit, is very distracting) I’ve been thinking a lot about delegation in digital identity. We have the tools that allow administrators or individuals to grant specific permissions to applications and services. In theory, it’s a clean model. The post Delegation and Consent: Who Actually Benefits? appeared first on Spherical Cow Consulting.

“When not distracted by AI (which, you have to admit, is very distracting), I’ve been thinking a lot about delegation in digital identity. We have the tools that allow administrators or individuals to grant specific permissions to applications and services.” 

In theory, it’s a clean model: you delegate only what’s necessary to the right party, for the right time. Consent screens, checkboxes, and admin approvals are supposed to embody that intent.

That said, the incentive structures around delegation don’t actually encourage restraint. They encourage permission grabs and reward broader access, not narrower. And when that happens, what was supposed to be a trust-building mechanism—delegation with informed consent—turns into a trust-eroding practice.

A Digital Identity Digest: Delegation and Consent: Who Actually Benefits? (podcast episode, 11:54)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Delegation’s design intent versus product incentives

Delegation protocols like OAuth were designed to solve a simple problem: how can an application act on your behalf without you handing over your password? Instead of giving a third-party app your full login, OAuth lets you grant that app a limited token, scoped to specific actions, like “read my calendar” or “post to my timeline.” In enterprise settings, administrators can approve apps at scale, effectively saying, “this tool can access certain company data on behalf of all our employees.”

The intent is least privilege: give just enough access to accomplish the task, nothing more. Tokens should be narrowly scoped, time-bound, and transparent.
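As a rough sketch of that least-privilege intent, an OAuth 2.0 authorization request can ask for only the one scope a task needs. The endpoint, client values, and scope name below are hypothetical, for illustration only:

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint, for illustration only.
AUTHORIZE_URL = "https://auth.example.com/oauth2/authorize"

def build_auth_request(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build an OAuth 2.0 authorization-code request asking only for the
    scopes the task actually needs (least privilege)."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        # Space-delimited scope list, per RFC 6749 section 3.3.
        "scope": " ".join(scopes),
        "state": "opaque-anti-csrf-value",
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

# Ask only to read the calendar, not to manage the whole account.
url = build_auth_request("my-app", "https://app.example.com/cb", ["calendar.read"])
```

Nothing stops the same call from requesting every scope the provider offers, which is exactly the pressure described next.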

But the product incentives push in the opposite direction. If you’re a developer or growth team, every extra permission opens new doors: richer analytics, better personalization, and potentially more revenue. Why ask for the bare minimum when you can ask for a lot more, especially if you can get away with it?

And so the pattern of permission creep emerges. An interesting study of Android apps, for example, shows that popular apps tend to add more permissions over time, not fewer. The reason isn’t technical necessity; it’s incentive alignment. More access means more opportunities, even if it slowly undermines the trust that delegation was supposed to build.

This is scope inflation: when “read metadata from one folder” somehow balloons into “read and write all files in your entire cloud drive.” From a delegation perspective, it looks absurd. From an incentive perspective, it looks entirely rational.

Consent as a manufactured outcome

Let’s talk about “consent.” It’s the shiny wrapper that’s supposed to make delegation safe. The idea is simple: a user sees what’s being requested, makes an informed choice, and either agrees or doesn’t. That’s the theory. In practice, consent is manufactured.

Consent screens are optimized like landing pages. The language is written to minimize friction. The buttons are designed to maximize acceptance. Companies treat “consent rates” the same way they treat sign-up conversions or click-through rates: a metric to push upward.

And the tactics aren’t subtle:

Dark patterns in consent UIs. Regulators in the EU have formally called out manipulative design in cookie banners and social media interfaces; tricks like highlighting the “accept” button in bright colors while burying “reject” in a subtle link. That’s not neutral presentation. That’s steering.
Consent-or-pay models. The latest battleground is whether “pay or accept tracking” constitutes valid consent. European regulators have said that if refusal carries a cost, then consent may not be “freely given.” Yet many sites lean into exactly this model: you can either hand over your data or hand over your credit card.
Consent fatigue. When users see banners, pop-ups, and consent prompts multiple times a day, they stop reading. They click whatever gets them through fastest. At that point, it’s no longer informed consent, it’s consent theater.

Delegation without trust is already fragile. Delegation wrapped in manufactured consent is worse: it’s a contract of adhesion where one party has all the power and the other clicks “accept” because they have no real choice.

If you’d like to dive into the consent debate further, I HIGHLY recommend you follow Eve Maler’s The Venn Factory. She has a great blog series on consent (example here) and an even greater whitepaper (for a fee but totally worth it).

Enterprise delegation and the admin consent problem

It’s tempting to think this is just a consumer problem involving cookie banners and mobile apps. But enterprise delegation has its own set of perverse incentives.

Take Microsoft 365 and Entra ID as an example (though let’s be clear that this is absolutely a common scenario). Enterprises can allow third-party apps to request access to user or organizational data through OAuth. To reduce noise, Microsoft lets administrators “consent on behalf of the organization.” Sounds efficient, right? Fewer pop-ups, fewer interruptions for the workforce, saving time (and time = money, right?).

But that efficiency comes at a cost. Attackers exploit this very model through “consent phishing”: tricking a user or admin into approving a malicious app that requests broad API scopes. Once granted, those permissions are durable and hard to detect. Microsoft now publishes guidance on identifying and remediating risky OAuth apps precisely because the model’s incentives tilt toward convenience over caution.

For administrators, the path of least resistance is to click “Approve for the organization” once and move on. That makes life incrementally easier for everybody: administrators, their users, and the attackers.
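As an illustration of the kind of review that guidance encourages, here is a minimal sketch that flags org-wide grants whose scopes look broader than any single task needs. The scope names and grant records are invented for illustration, not any vendor's actual API:

```python
# Illustrative triage of granted OAuth apps by scope breadth.
# Scope markers and grant records are hypothetical, not a real vendor API.
BROAD_MARKERS = ("*", ".readwrite.all", "offline_access", "mail.read")

def risky_grants(grants: list[dict]) -> list[dict]:
    """Return org-wide grants whose scopes look broader than one task needs."""
    flagged = []
    for g in grants:
        scopes = [s.lower() for s in g["scopes"]]
        broad = [s for s in scopes if any(m in s for m in BROAD_MARKERS)]
        # Admin-consented grants apply to everyone, so broad ones matter most.
        if g.get("admin_consent") and broad:
            flagged.append({"app": g["app"], "broad_scopes": broad})
    return flagged

grants = [
    {"app": "calendar-helper", "scopes": ["Calendars.Read"], "admin_consent": True},
    {"app": "totally-legit-sync", "scopes": ["Mail.Read", "Files.ReadWrite.All"],
     "admin_consent": True},
]
print(risky_grants(grants))  # flags only "totally-legit-sync"
```

A periodic sweep like this won't stop a determined phisher, but it does surface the durable, broad grants that consent phishing depends on.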

Enforcement as a belated correction

If the incentives reward broad access, who actually keeps things in check? Increasingly, it’s regulators and courts.

In the U.S., the Federal Trade Commission has penalized companies like Disney and Mobilewalla for collecting data under misleading labels or without meaningful consent. The penalties aren’t just financial; they force changes in how products are designed and how defaults are set.
In Europe, the IAB’s Transparency and Consent Framework—the standard that underpins much of adtech—has faced repeated rulings (see examples here and here) that its consent strings are personal data, that aspects of the framework violate GDPR, and that “consent at scale” is not a free pass. Legal battles continue, but I think the message being sent is pretty obvious: broad, opaque consent mechanisms don’t hold up under scrutiny.
Regulators have also zeroed in on “consent-or-pay” and dark pattern interfaces, explicitly saying that these undermine the principle of freely given consent.

What’s happening is essentially a regulatory realignment of incentives. If the market rewards permission grabs, fines and rulings change the cost-benefit equation. In some markets, though not all, the cheapest path is shifting to grabbing less data, not more.

Why this erodes trust

From the individual’s point of view, none of this is subtle. They notice when an app requests more permissions than it should. They notice when every website they visit demands cookie consent in confusing ways (it is SO ANNOYING). They notice when their IT department approves a sketchy app and they’re the ones who end up phished.

The result is trust erosion. Individuals stop believing that “consent” means choice and assume that every request for access is a data grab in disguise. They are probably not wrong.

And once trust is gone, it’s not easily rebuilt. Every new protocol, every new delegation model, has to fight against that backdrop of suspicion.

What good looks like

If delegation and consent are to survive as trust-building mechanisms, they have to look different from how they look today. Here are a few ways to realign the incentives:

Purpose-bound scopes. Tokens should be tied to specific actions, not broad categories. “Read file metadata for this folder” is a very different ask than “Read all your files.”
Time-boxed tokens. Access should expire quickly unless explicitly renewed. Long-lived tokens are an incentive to attackers and a liability for providers.
Refusal symmetry. The “reject” button should be as prominent and easy to click as the “accept” button. Anything less is manipulation.
Transparent change logs. Apps should publish what scopes they request and why, with a clear history of when those scopes changed. If permissions creep is inevitable, at least make it visible.
Admin consent boards. In enterprises, app approval should involve more than a single overworked admin. Formal review processes—similar to change advisory boards—can slow down risky delegation without grinding everything to a halt.
Trust reports. Companies could publish regular “trust reports” that show how delegation and consent are actually being managed. Which apps request what? How often are tokens revoked? How many requests are denied? Turning these into KPIs re-aligns incentives toward trust, not just conversion.

Who actually benefits?

So, back to the original question: who actually benefits from delegation and consent as practiced today?

Companies benefit from broader access because it feeds product features, analytics, and monetization.
Attackers benefit when that broad access is abused, because consent tokens and admin approvals often outlive user awareness.
Regulators benefit politically when they enforce, because they’re seen as protecting consumer rights.
Users? Users benefit in theory, but in practice, they’re the least likely to see real advantage. Their consent is optimized against, their delegation scopes are inflated, and their trust is constantly eroded.

Delegation and consent were supposed to empower users. Right now, they mostly empower everyone else.

The path forward

Delegation is too valuable to discard; it is definitely having its moment given the complexities of doing it correctly. Consent is too foundational to abandon; the alternative of not asking at all is at least as bad as asking too much. But both need to be reclaimed from the incentive structures that have warped them.

That means treating trust as the KPI, not just consent click-through rates. It means designing delegation flows that prioritize least privilege, not maximum access. It means regulators continuing to push back against manipulative practices, and companies recognizing that the long game is trust, not just data.
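A minimal sketch of what such a least-privilege flow could look like, with purpose-bound scopes and short-lived tokens. The token layout and scope strings here are illustrative, not a real protocol:

```python
import secrets
import time
from dataclasses import dataclass, field

# Illustrative purpose-bound, time-boxed delegation token.
# The layout and scope strings are invented for this sketch.

@dataclass
class DelegationToken:
    scope: str          # one specific action, e.g. "folder:123:read-metadata"
    expires_at: float   # short-lived by default
    value: str = field(default_factory=lambda: secrets.token_urlsafe(16))

def mint(scope: str, ttl_seconds: int = 900) -> DelegationToken:
    """Issue a token bound to one purpose and expiring in minutes, not months."""
    return DelegationToken(scope=scope, expires_at=time.time() + ttl_seconds)

def authorize(token: DelegationToken, requested: str) -> bool:
    """Allow the action only if the token is unexpired and the scope matches exactly."""
    return time.time() < token.expires_at and token.scope == requested

t = mint("folder:123:read-metadata")
authorize(t, "folder:123:read-metadata")  # allowed
authorize(t, "drive:*:read-write")        # denied: scope doesn't match
```

The design choice worth noting is the default: expiry and exact-match scoping are the baseline, and anything broader has to be asked for explicitly.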

If the only people who benefit from delegation and consent are companies and attackers, then the rest of us have been sold a story. And the longer that story holds, the harder it will be to convince users that their “yes” actually means something. If your bosses are having a hard time understanding that, feel free to print out this post and slide it under their office door. They might think a bit more deeply about their decisions going forward.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here

Transcript

[00:00:30] Welcome back to A Digital Identity Digest. Today’s episode is called Delegation and Who Actually Benefits?

[00:00:37] This piece builds on earlier conversations and writing about delegation and digital identity.

[00:00:44] Today, we’ll explore how incentive structures push companies to grab broader permissions than they really need—and how that erodes trust.

The Clean Model of Delegation

[00:00:53] When not distracted by all the AI news—which you have to admit is very distracting—I’ve been thinking a lot about delegation and digital identity.

We have tools that allow administrators or individuals to grant specific permissions to applications and services. In theory, this is a very clean model:

Delegate only what’s necessary
To the right party
For the right time

[00:01:18] Consent screens, checkboxes, and admin approvals are all supposed to embody this principle.

[00:01:24] Unfortunately, incentives don’t encourage restraint. They encourage permission grabs. That reward system favors broader access, not narrower. What should be a trust-building mechanism often turns into a trust-eroding practice.

OAuth and the Design of Least Privilege

[00:01:40] Delegation protocols like OAuth were created to solve a practical problem:

[00:01:47] How can an application act on your behalf without requiring your password?

Instead of handing over login credentials, OAuth allows granting a limited token. Ideally, that token is:

Scoped to a specific action (e.g., read my calendar)
Time-bound
Transparent

[00:02:17] In enterprise settings, administrators can approve apps at scale. That way, employees aren’t asked to answer the same questions repeatedly.

[00:02:28] But here’s the issue: incentives push in the opposite direction.

[00:02:32] Service builders want broader access because:

More permissions unlock richer analytics
Data enables personalization
Extra information can be monetized

[00:02:42] Growth teams treat every consent screen as a conversion funnel to optimize. Why ask for less when asking for more is easier?

[00:02:59] The result is permission creep. Studies of Android apps show that popular apps add permissions over time—not fewer.

Consent in Theory vs. Consent in Practice

[00:03:34] On paper, consent is the safeguard. Users see what’s requested and make an informed choice.

[00:03:48] In practice, consent is manufactured. Consent screens are optimized like landing pages.

Language minimizes friction
Buttons maximize acceptance
Consent rates are tracked as key metrics

[00:04:00] Dark patterns dominate: cookie banners where “Accept All” is bright and obvious, while “Reject” hides as a faint gray link.

[00:04:15] Regulators in Europe have called this out as manipulative.

[00:04:20] Then there are “consent or pay” models: accept tracking or pay for access. Regulators argue this undermines freely given consent.

[00:04:33] And, of course, there’s consent fatigue. Repeated banners train users to click without thinking. What’s left isn’t informed consent—it’s consent theater.

[00:04:46] Delegation without trust is fragile. Delegation wrapped in manufactured consent is worse.

Enterprise Risks and Consent Phishing

[00:05:01] This isn’t just a consumer problem. Enterprise environments like Microsoft 365 and Entra ID carry their own risks.

[00:05:13] Enterprises can let third-party apps request organizational data. To reduce friction, admins can consent on behalf of the entire company.

[00:05:22] Efficient, yes. Dangerous, absolutely.

[00:05:24] Attackers exploit this through consent phishing—tricking admins into approving malicious apps with broad permissions. Once granted, this access is durable and hard to detect.

[00:05:39] Microsoft even publishes playbooks to spot risky OAuth apps, acknowledging the problem.

[00:05:44] But incentives still tilt toward convenience. For overworked admins, approving once feels easier than vetting thoroughly.

Regulatory Realignment of Incentives

[00:06:03] If incentives reward broad access, who reins it in? Increasingly, regulators.

[00:06:11] In the U.S., the Federal Trade Commission has penalized companies for misleading consent practices.

Disney and Mobilewalla paid fines
Companies were required to change product design, not just pay penalties

[00:06:26] In Europe, the IAB’s Transparency and Consent Framework has been ruled non-compliant with GDPR. Courts held that consent at scale does not equal valid consent.

[00:06:46] Regulators are also challenging “consent or pay” models, stating they undermine freely given consent.

[00:06:59] This is a regulatory re-alignment of incentives. If the market rewards permission grabs, fines and rulings push companies in the opposite direction—toward less data collection.

The User’s Perspective and Erosion of Trust

[00:07:14] From the user’s point of view, the problem is visible:

Apps request more permissions than needed
Cookie banners are confusing
IT teams approve apps that later lead to phishing

[00:07:46] The result is erosion of trust. Users stop believing that:

Consent equals choice
Delegation equals least privilege

[00:07:56] Once trust is lost, it’s hard to rebuild. Every new product must fight against this backdrop of suspicion.

How Do We Fix This?

[00:07:58] So how can delegation and consent become real trust-building mechanisms instead of hollow rituals?

[00:08:04] Here’s a list:

Purpose-bound scopes: tokens tied to specific actions, not broad categories
Time-boxed tokens: access that expires quickly unless renewed
Refusal symmetry: reject buttons as visible and easy as accept buttons
Transparent change logs: apps publishing history of permission requests
Admin consent boards: enterprise review panels instead of one pressured approver
Trust reports: companies disclosing how often requests are denied, access revoked, and policies enforced

[00:09:05] Each of these shifts incentives toward making trust the key performance indicator.

Who Actually Benefits?

[00:09:16] Returning to the original question: who benefits from delegation and consent today?

Companies: more permissions, more data, more revenue
Regulators: political capital when stepping in
Attackers: durable, broad tokens for persistence
People: benefit mostly in theory, but often remain the least protected

[00:09:57] Delegation and consent were meant to empower users. Today, they mostly empower everyone else.

[00:10:04] But both are too important to discard. They must be reclaimed from warped incentives.

[00:10:18] That means:

Treating trust as the KPI
Designing delegation for least privilege, not maximum access
Regulators continuing to push back against manipulation

[00:10:30] Because if only companies and attackers benefit, we’ve lost the plot.

Closing Thoughts

[00:10:44] If you want to dive deeper, explore the work of Eve Maler at the Venn Factory. Her white paper on consent is a fantastic resource worth reading.

[00:11:06] Thanks again for joining A Digital Identity Digest.

[00:11:17] If this episode made things clearer—or at least more interesting—share it with a friend or colleague. Connect with me on LinkedIn @hlflanagan.

And don’t forget to subscribe and leave a review on Apple Podcasts or wherever you listen. The written post is always available at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Delegation and Consent: Who Actually Benefits? appeared first on Spherical Cow Consulting.


ComplyCube

The CryptoCubed Newsletter: September Edition

In this month’s edition, we cover Australia’s $16.5 million warning to unlicensed crypto firms, KuCoin’s legal battle with Canada’s FINTRAC, the married duo who scammed over 145 crypto investors, Poland’s new crypto bill, and more! The post The CryptoCubed Newsletter: September Edition first appeared on ComplyCube.

In this month’s edition, we cover Australia’s $16.5 million warning to unlicensed crypto firms, KuCoin’s legal battle with Canada’s FINTRAC, the married duo who scammed over 145 crypto investors, Poland’s new crypto bill, and more!

The post The CryptoCubed Newsletter: September Edition first appeared on ComplyCube.


FastID

Make Sense of Chaos with Fastly API Discovery

Discover, monitor, and secure your APIs with Fastly API Discovery. Get instant visibility, cut the noise, and keep your APIs secure and compliant.

Monday, 29. September 2025

liminal (was OWI)

Identity Market & Policy Trends 2026: Intelligence for a Changing Landscape

Intelligence for a Changing Landscape The post Identity Market & Policy Trends 2026: Intelligence for a Changing Landscape appeared first on Liminal.co.

Intelligence for a Changing Landscape

The post Identity Market & Policy Trends 2026: Intelligence for a Changing Landscape appeared first on Liminal.co.


Ontology

The Role of EOAs in Long-Term Web3 Identity

Hand someone a ledger full of cold storage and they’ll sleep fine at night. Hand them the same ledger and tell them it’s their daily identity and they’ll start sweating. That’s the dividing line between Externally Owned Accounts (EOAs) and the future of Web3 identity. 👉 [7 Proven Ways Smart Wallets Transform Web3 Identity Forever] EOAs are the oldest and most widely used model for blockchai

Hand someone a ledger full of cold storage and they’ll sleep fine at night. Hand them the same ledger and tell them it’s their daily identity and they’ll start sweating. That’s the dividing line between Externally Owned Accounts (EOAs) and the future of Web3 identity.

👉 [7 Proven Ways Smart Wallets Transform Web3 Identity Forever]

EOAs are the oldest and most widely used model for blockchain accounts. They were introduced in Ethereum’s earliest days, designed around a single principle: one private key controls one account. That design is elegant in its simplicity and still unmatched when it comes to long-term security.

But as Web3 evolves into a world of portable, reputation-based, and privacy-first identity, it’s worth asking: where do EOAs fit in?

What Are EOAs in Web3?

An EOA is the most basic account type in Ethereum and many other blockchains. Unlike smart contracts, EOAs have no internal code or logic. They exist to send and receive assets, secured entirely by a private key.

If you control the key, you control the account. Lose the key, and the account is gone forever. There is no backup, no recovery, and no reset button.

That rigidity is why EOAs are perfect for what they were built for: vaults.

EOAs as Vaults in Web3 Identity

When it comes to cold storage and long-term custody, EOAs are unmatched. Pair one with a hardware wallet and you have one of the most secure setups in all of crypto.

Staking: EOAs work perfectly for locking up assets in staking positions.
Governance tokens: If you plan to hold voting power for years, an EOA keeps it safe.
NFT collections: For high-value NFTs meant for long-term ownership, EOAs are the best option.
Institutional custody: Funds and DAOs often rely on EOAs for their simplicity and auditability.

The lack of flexibility is what makes them secure. No extra logic means fewer attack vectors. No recovery flow means fewer trust assumptions. Just a private key, a wallet, and assets locked away until you decide to move them.

Why EOAs Struggle as Daily Web3 Identity

The problem comes when EOAs are forced into a role they weren’t designed for: identity.

Daily Web3 identity requires accounts that are:

Recoverable if a key is lost or a device breaks
Readable with human-friendly identifiers instead of 42-character hex strings
Portable across chains, dApps, and platforms
Flexible enough to hold credentials, permissions, and reputation

EOAs can’t do any of this. They’re silent vaults. They don’t carry context or history. They can’t evolve as your needs change. And they put every bit of risk onto one fragile key.

This is where smart wallets and Account Abstraction take over.

EOAs vs Smart Wallets: Dividing the Labor

It’s easy to frame EOAs and smart wallets as competitors, but that’s the wrong way to look at it. They’re complements. Each plays a specific role in the Web3 stack.

EOAs are vaults: best for long-term asset storage, cold custody, and high-value holdings.
Smart wallets are identity: built for daily use, recovery, credentials, cross-chain logic, and compliance.

Instead of replacing EOAs, smart wallets expand Web3 identity beyond them. The vaults still exist, but identity moves into programmable, human-friendly infrastructure.
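To make that division of labor concrete, here is an illustrative sketch of a smart wallet that layers recovery and scoped sessions over an EOA vault. The class and method names are hypothetical, not any wallet SDK, and the signing is stubbed:

```python
# Illustrative sketch of the vault/identity split described above.
# Class and method names are hypothetical, not any wallet SDK.

class EOAVault:
    """One key, one account: it can only sign. No recovery, no extra logic."""
    def __init__(self, private_key: bytes):
        self._key = private_key

    def sign(self, payload: bytes) -> bytes:
        # Real signing would use secp256k1; stubbed here for illustration.
        return b"signed:" + payload

class SmartWallet:
    """Programmable identity layer that anchors to an EOA vault."""
    def __init__(self, vault: EOAVault):
        self.vault = vault
        self.guardians: list[str] = []      # social-recovery contacts
        self.sessions: dict[str, str] = {}  # dApp -> narrow, revocable permission

    def add_guardian(self, contact: str) -> None:
        self.guardians.append(contact)

    def grant_session(self, dapp: str, permission: str) -> None:
        # Day-to-day use gets a narrow permission; the vault key stays cold.
        self.sessions[dapp] = permission

vault = EOAVault(b"\x01" * 32)
wallet = SmartWallet(vault)
wallet.add_guardian("guardian@example.com")
wallet.grant_session("game.example", "spend-up-to-5-tokens")
```

The vault never learns about guardians or sessions; the identity layer holds all the mutable state, which is the point of the split.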

Why EOAs Still Matter for the Future of Web3

Even as smart wallets gain adoption, EOAs will remain essential for three reasons:

Security: The simplicity of EOAs makes them the most secure baseline for storage.
Reliability: They are battle-tested and widely supported across every major blockchain.
Foundation: Many smart wallets ultimately anchor to EOAs under the hood, ensuring that the vault layer remains intact.

In other words, EOAs aren’t going away. They are the bedrock of Web3. But they can’t carry the entire weight of identity.

The Balance Ahead

The future of Web3 identity is not either-or. It’s both.

Use EOAs for vaults: keep long-term assets locked down in their simplest, most secure form.
Use smart wallets for identity: manage recovery, credentials, and interactions across chains and applications.

Together they cover the full spectrum of what Web3 demands: immovable security on one end, human usability on the other.

Try It Yourself: EOAs with ONT ID in ONTO Wallet

EOAs are the backbone of long-term Web3 security. With ONT ID, you can anchor an EOA to your decentralized identity and keep assets safe while still unlocking future-ready features like staking and verifiable credentials.

Download ONTO Wallet to:

Manage EOAs for secure asset storage
Stake directly from your vaults
Connect your EOA to ONT ID for portable identity
Explore verifiable credentials while keeping full self-custody

Whether you’re holding tokens, securing NFTs, or preparing for the next phase of Web3 identity, ONTO Wallet gives you the flexibility of smart features with the permanence of EOAs.

Learn More: How Smart Wallets Complete the Picture

EOAs may be the vaults of Web3, but they’re only half the story. To see how Account Abstraction and smart wallets transform identity into something portable, recoverable, and privacy-first, read the full breakdown:

👉 [7 Proven Ways Smart Wallets Transform Web3 Identity Forever]

The Role of EOAs in Long-Term Web3 Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

From Climate Week NYC to Fastly’s 100% Renewable Commitment

Fastly commits to 100% renewable electricity coverage across its global network and offices, advancing a sustainable internet and supporting customers' climate goals.

Friday, 26. September 2025

Anonym

Your Complete Guide to Online Privacy in 2025: Who is Taking Your Personal Info and How to Stop Them

Every time you buy something, open an account, search the internet, interact on social media, and use smart devices, public WiFi, and AI, you leave a trail of personal information or “personal data” that is being collected, shared, used, and abused. Suddenly you’re getting spam calls, phishing emails, smishing texts, and data breach alerts, all […] The post Your Complete Guide to Online Privacy

Every time you buy something, open an account, search the internet, interact on social media, and use smart devices, public WiFi, and AI, you leave a trail of personal information or “personal data” that is being collected, shared, used, and abused. Suddenly you’re getting spam calls, phishing emails, smishing texts, and data breach alerts, all while someone is booking flights to Ibiza with your credit card and taking out mortgages in your name!   

In 2025, our digital footprints are vast and vulnerable, and online privacy is an urgent issue.

This guide covers everything you need to know about online privacy:

What are personal data and your digital footprint?
Who’s collecting your personal information and why?
What happens when your information gets into the wrong hands?
What is data privacy?
Are there data privacy laws?
What you can do to protect yourself

What are personal data and your digital footprint?

Your digital footprint is all the information about you that exists on the internet because of your online activity. It’s sometimes called your digital exhaust because, just as engine exhaust is residue from using a car, digital exhaust is residue from using the internet. 

Your data is collected from:

Websites (cookies, tracking pixels, session recording)
Mobile apps (permissions, background data sharing)
Social media (likes, shares, behaviour analysis in social graphs and interest graphs)
Smart devices
Artificial intelligence (AI) tools
Public WiFi and location tracking
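To illustrate one of these channels, here is a minimal sketch of how a tracking pixel carries data: the "image" URL embeds identifiers, and the server behind it also sees your IP address and user agent. The tracker domain and parameter names below are made up:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Illustrative tracking pixel: the pixel's URL smuggles identifiers
# in its query string. Domain and parameter names are invented.

def pixel_url(user_id: str, page: str) -> str:
    """Build the kind of URL a page embeds as an invisible 1x1 image."""
    params = {"uid": user_id, "page": page}
    return "https://tracker.example.com/p.gif?" + urlencode(params)

def what_the_tracker_learns(url: str) -> dict:
    """Decode the query string the way the tracker's server would."""
    q = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in q.items()}

url = pixel_url("abc123", "/shoes/red-sneakers")
what_the_tracker_learns(url)  # {'uid': 'abc123', 'page': '/shoes/red-sneakers'}
```

Because the request is an ordinary image fetch, it works even with JavaScript disabled, which is why the technique is so common in email and on the web.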

Your digital footprint contains what’s called your personal data. Data is information, and personal data (also called personal information or, to get technical, personally identifiable information, or PII) is officially defined as any data that can be used to distinguish or trace an individual’s identity, along with any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

Examples of PII are:

your full name, maiden name, mother’s maiden name, and alias
your date of birth, place of birth, race, religion, weight, activities, geographical indicators
employment information, medical information, education information, financial information
personal ID numbers such as your SSN and passport and driver license numbers
your addresses
your telephone numbers
IP or MAC address
personal characteristics, including photographic images, x-rays, fingerprints, or other biometric images
your vehicle registration number or title number

Who’s collecting your personal information and why?

Our digital world is now so reliant on user data it’s described as surveillance capitalism and the data economy. Loads of players have their fingers in this “personal data pie”, including:

Big tech

Tech companies like Alphabet (Google), Meta, Amazon, Apple, and Microsoft give you “free” access to their platforms and products in return for your personal information, time, and attention. Have you heard the saying, “If you’re not paying for the product, you ARE the product”?

Part of your digital footprint is also what’s known as your social graph and your interest graph. A social graph is a digital map of who you know—your relationships within a social network including your friends, family, coworkers, etc., while an interest graph maps what you like—it connects you to other people based on shared interests, hobbies and topics, rather than personal relationships.

Big tech uses all this personal data to:

Sell ads to third-party advertisers that serve you personalized ads (those scarily coincidental ads that pop up within seconds of your search for a product)
Control the content you see, including news feeds and social media posts
Set higher prices (you search for something high risk like “motor racing” and suddenly your insurance premium goes up)
Influence your political decisions (read up on Cambridge Analytica for a famous example).

And here’s another thing: most users never consent to their information being used in these ways. Most privacy policies are long, vague, and unreadable, and user consent is complex. What’s more, many apps use dark patterns—design tricks that pressure users to share more information and buy more products than they want to.

Data brokers

Data brokers are roughly 4,000 legitimate but largely unregulated organizations worldwide that gather and collate your lucrative data to sell profiles to advertisers, insurers, and political groups. These profiles can include:

your age
marital status
where you live
your email address
employer
how much money you make
how many children you have
where you shop
what you buy
your medical conditions and health issues
who you vote for and support

Data brokers usually sell user information to brands in list form. Your email address on a list of people with a particular medical condition such as diabetes would be worth about $79, and on a list of a particular class of traveller, about $251. And that’s another thing: a lot of your personal data online isn’t stuff you’d want to share around. While data brokers say the data is anonymized, it’s scarily simple to re-identify so-called “anonymous” data. In fact, some researchers say anonymous data is a lie, and that unless all aspects of de-identifying data are done right, it is incredibly easy to re-identify the subjects.
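How easy is re-identification? A classic linkage attack joins an “anonymized” dataset to a public one (say, a voter roll) on quasi-identifiers like ZIP code, birth date, and sex. The sketch below illustrates the idea with entirely invented records and field names:

```python
# Toy linkage attack: re-identifying "anonymous" records via quasi-identifiers.
# All records, names, and values below are invented for illustration.

# "Anonymized" dataset: names removed, but quasi-identifiers kept.
anonymized = [
    {"zip": "10001", "birth": "1985-03-12", "sex": "F", "condition": "diabetes"},
    {"zip": "94105", "birth": "1990-07-01", "sex": "M", "condition": "asthma"},
]

# Public dataset (e.g. a voter roll) with names and the same quasi-identifiers.
public = [
    {"name": "Alice Example", "zip": "10001", "birth": "1985-03-12", "sex": "F"},
    {"name": "Bob Example", "zip": "94105", "birth": "1990-07-01", "sex": "M"},
]

def reidentify(anon_rows, public_rows):
    """Join the two datasets on (zip, birth, sex) and recover identities."""
    index = {(p["zip"], p["birth"], p["sex"]): p["name"] for p in public_rows}
    matches = []
    for row in anon_rows:
        key = (row["zip"], row["birth"], row["sex"])
        if key in index:  # a unique quasi-identifier combination re-identifies the person
            matches.append((index[key], row["condition"]))
    return matches

# Every "anonymous" medical record is linked straight back to a name.
print(reidentify(anonymized, public))
```

When a quasi-identifier combination is unique in both datasets, stripping the name accomplishes nothing, which is exactly why researchers call anonymization claims into question.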

Governments

Worldwide, governments use citizens’ personal data for surveillance under the guise of national security, public safety, and crime prevention. For example, Proton recently reported that Google, Apple and Meta have handed over data on 3.1 million accounts to the US authorities over the last decade (regardless of which political party was in the White House), providing information such as emails, files, messages, and other highly personal information.

“In the past, the government relied on massive, complex and legally questionable surveillance apparatus run by organizations like the NSA. But thanks to the advent of surveillance capitalism, this is no longer necessary,” said Raphael Auphan, Proton’s chief operating officer.

“All that’s required for the government to find out just about everything it could ever need is a request message to big tech in California. And as long as big tech refuses to implement widespread end-to-end encryption, these massive, private data reserves will remain open to abuse,” Auphan added.

Hackers and scammers

Criminals exploit stolen data in many different ways, which brings us to the next point …

What happens when your personal information gets into the wrong hands?

We’ve covered what brands and governments do with your personal information. Bad actors can also do a lot of damage with your data:

Identity theft: Using your stolen information to impersonate you for financial gain or to commit crimes
Financial fraud: Accessing your bank accounts, credit card information, or other financial accounts to make unauthorized transactions
Phishing: Sending fraudulent emails or messages pretending to be from legitimate organizations to trick you into revealing more information or clicking on malicious links
Social engineering: Manipulating you into divulging confidential information, often by posing as someone you trust or using your stolen information to build credibility
Account takeover: Gaining unauthorized access to your online accounts (email, social media, etc.) using your stolen usernames and passwords
Tax fraud: Using stolen personal information to file fraudulent tax returns and claim refunds
Medical identity theft: Using your stolen information to get medical services and prescriptions, or to fraudulently file insurance claims
Employment fraud: Using your stolen information to illegally gain employment or benefits
Blackmail or extortion: Threatening to expose your sensitive information unless you pay a ransom
Creating fake identities: Using your stolen information to create new identities for various fraudulent purposes.

Data breaches are the new normal

One way bad actors get your information is through data breaches. A data breach is a security event where highly sensitive, confidential or protected information is accessed or disclosed without permission or is lost.

We’ve almost come to expect massive, damaging data breaches. The year 2024 had the most data breaches on record, and 2025 has already seen the largest data breach of all time: the leaking of more than 16 billion usernames and passwords to user accounts with Apple, Facebook, Google, other social media accounts, and government services.

AI is making data privacy worse

AI is connecting just about everything in our lives, from our vehicles to eyewear, and we’re using it in all sorts of everyday ways. But AI presents privacy risks not only in what we share but also in how AI can analyze, infer, and act on that information without our permission (think: deep fakes, for example).

Academics have already identified at least 12 privacy risks from AI, and safe and ethical AI governance is a priority.

What is online privacy?

You might say, “I have nothing to hide”, “Privacy tools are only for criminals” or “Social media is harmless fun,” but against this backdrop of risks and damage, you can see the urgent need to protect your online privacy (or data privacy). This is about your rights to control your personal information and how it’s used.

Data privacy matters because it protects our fundamental right to privacy and means we can:

Limit others’ power to know about us and to cause us harm
Better manage our professional and personal reputations
Put in place boundaries and encourage respect
Maintain trust in relationships and interactions with others
Protect our right to free speech and thought
Pursue second chances for regaining our privacy
Feel empowered that we’re in control of our lives.

Are there data privacy laws?

Data privacy laws are designed to give users more control over their personal data by regulating how organizations can collect, store, and use that information.

As of 2024, 137 countries (70% of nations worldwide) have national data privacy laws, meaning 6.3 billion people, or 79.3% of the world’s population, are covered by some form of national data privacy law.

Despite many attempts, the United States is one of the only major global economies without a strong national privacy law similar to the European Union’s GDPR—the gold standard for consumer data privacy protections and with regulatory impact around the world. Instead, the US has a patchwork of state-based privacy laws. A dedicated working group was recently formed to try again on a US federal privacy law, so watch this space.

What you can do to protect your personal information and online privacy

Regardless of the laws, you can do a lot to protect yourself. First, you need to cover some basics:

Use strong, unique passwords for each of your online accounts. Store them securely in a password manager.
Enable two-factor authentication (2FA).
Don’t share sensitive details on public platforms or unsecured websites.
Keep your software and devices updated.
Be cautious of phishing emails and smishing texts, links, and attachments.
Know what to do in the event of a data breach.
Switch to a private browser that stops ads and tracking.
Use end-to-end encrypted messaging and calling, wherever possible.
Regularly review your privacy settings on platforms like Facebook, X, Instagram, and LinkedIn to limit data collection.
Limit app permissions to stop third-party services from accessing your data.
Regularly audit your online activity to remove old or inactive connections, unfollow accounts, and mute topics you’re not interested in.
Unsubscribe from unnecessary services.
Clear browsing history and cookies regularly.
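The first item, strong unique passwords, is easy to get right programmatically. A minimal sketch using Python’s standard-library `secrets` module, which is designed for cryptographic randomness (unlike `random`):

```python
import secrets
import string

def generate_password(length=20):
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces an independent password drawn from ~94 symbols,
# so a 20-character password has on the order of 94^20 possibilities.
print(generate_password())
```

A password manager does the equivalent for you and remembers the result, which is why the list pairs the two recommendations.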

If that seems a lot, we have good news: MySudo all-in-one privacy app deals with many of those actions in one simple app—and the other apps in the MySudo family take you even further.

MySudo

MySudo all-in-one privacy app is built around the Sudo, a secure digital profile with email, phone, and virtual cards to use instead of your own. Anywhere you usually give your personal details, you simply give your Sudo details instead. Sudos let you live your life online without spam, scams, and constant surveillance.

What’s in a Sudo?

1 email address – for end-to-end encrypted emails between app users, and standard email with everyone else
1 handle* – for end-to-end encrypted messages and video, voice and group calls between app users
1 private browser – for searching the internet without ads and tracking
1 phone number (optional)* – for end-to-end encrypted messaging and video, voice and group calls between app users, and standard connections with everyone else; customizable and mutable
1 virtual card (optional)* – for protecting your personal info and your money, like a proxy for your credit or debit card or bank account

*Phone numbers and virtual cards are only available on a paid plan. Phone numbers are available for US, CA and UK only. Virtual cards are for US only. Handles are for end-to-end encrypted comms between app users.

You can have up to 9 separate Sudos in the app. With your Sudos, you can:

Protect your information. Basically, with MySudo, you decide who gets your personal information, and everyone else gets your Sudo information. 

Instead of using your own email, phone number, and credit card all over the internet, use the alternative contact details from your Sudo. So, you would use your Sudo email and phone number to open and log into accounts and contact people; use the private browser to search online without ads and tracking; and use your Sudo virtual card to pay for purchases without exposing your own credit or debit card. Virtual cards are linked to your own credit card or debit card but don’t reveal those details during transactions.

In this way, you break your data trail. When you compartmentalize your life into different Sudos, you silo your information and make it impossible for anyone to track you across sites and apps to sell or steal your personal information. And if one Sudo’s details get caught in a data breach or are heavily spammed, you can either ignore it, mute it, or delete it and start again.
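The compartmentalization idea can be sketched as a simple data structure: each life context gets its own contact details, so a breach in one context exposes only that context. Everything below (profile names, addresses, numbers) is invented for illustration, not MySudo’s actual data model:

```python
# Hypothetical sketch of compartmentalization (all profile names and values invented).
profiles = {
    "shopping": {"email": "shop.sudo@example.com",   "phone": "+1-555-0101"},
    "travel":   {"email": "travel.sudo@example.com", "phone": "+1-555-0102"},
    "personal": {"email": "me@example.com",          "phone": "+1-555-0199"},
}

def exposed_by_breach(profiles, breached_profile):
    """A breach of one compartment exposes only that compartment's contact details."""
    return profiles.get(breached_profile, {})

# A shopping-site breach leaks only the shopping alias; "personal" stays untouched.
print(exposed_by_breach(profiles, "shopping"))
```

Replacing the leaked compartment is then a local fix: delete one alias and issue a new one, with no effect on the other compartments.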

Uses for Sudos are limited only by your imagination. Sign up for deals and discounts, book rental cars and hotel rooms, order food or sell your stuff – all without giving away your personal information. Be creative with your Sudos: Setting up a dedicated Sudo to stay safe while volunteering is a popular choice, for example.

You might like:
How MySudo lets you control who sees your personal info online and in real life
From Yelp to Lyft: 6 ways to “do life” without using your personal details
4 steps to setting up MySudo to meet your real life privacy needs

Use the end-to-end encrypted messaging and calling within each Sudo to keep your conversations private. Your Sudo phone number works like a standard number but also gives you secure connections to other MySudo users, making MySudo a great private messaging app.

You can also use your Sudo handle (instead of a phone number) for end-to-end encrypted communications with other MySudo users (invite your friends to the app!). Read: How to get 9 “second phone numbers” on one device.

Use the end-to-end encrypted email between MySudo users for secure communications. MySudo email is a popular secure email service with full send and receive support. It’s entirely separate from your personal email account and intentionally protects your personal email from spam and email-based scams.

Read: 4 ways MySudo email is better than masked email.

Use the private browser within each Sudo in MySudo to search the internet free of ads and trackers.

Use the virtual card within each Sudo in MySudo to hide your transaction history from your bank and everyone your bank sells your data to. (Yes, they do!)

Discover more about how MySudo lets you control who sees your personal information online and in real life. Also check out how MySudo keeps you safe on social media even in a data breach.

Once you’ve got MySudo on your side, do these 3 things:

Reclaim your information from companies that store and might sell it with RECLAIM personal data removal tool. See who has your information, discover whether it’s been caught in a data breach, and then either ask the company to delete it or replace it with your Sudo information using MySudo. RECLAIM is part of the MySudo app family.
Encrypt your internet connection and hide your IP address with MySudo VPN, the only VPN on the market that’s actually private. MySudo VPN is the perfect companion for MySudo privacy app since they’re engineered to work seamlessly together. 
Be first in line to use the new MySudo password manager to securely store, autofill, and organize every log-in, password, and more. Coming soon!

Why should I trust MySudo?

MySudo does things differently from other apps:

We won’t ask for your email or phone number to create an account.
You don’t need a registration login or password to use MySudo. Access is protected by a key that never leaves your device.
We’ll only ask for personal information for virtual cards, and for UK phone numbers, when a one-time identity verification is required.

By securing your own information, you take back control of your life, money, safety, and reputation. There’s never been a better time.

Get started today:

Download MySudo
Download RECLAIM
Download MySudo VPN

You might also like:

What constitutes personally identifiable information or PII?
14 real-life examples of personal data you definitely want to keep private
What is digital exhaust and why does it matter?
Californians, this is why you still need MySudo despite the new “Delete Act”
This is why MySudo is essential, even 10 years after Snowden
What is a data breach?
What should I do if I’ve been caught in a data breach?

The post Your Complete Guide to Online Privacy in 2025: Who is Taking Your Personal Info and How to Stop Them appeared first on Anonyome Labs.


Recognito Vision

Face Recognition Software Explained in Simple Words


Imagine walking into an airport and breezing through security just because a camera recognized your face. That’s not science fiction anymore. This is the power of face recognition software, a technology that maps your unique facial features and matches them against stored data.

From unlocking smartphones to catching criminals, this software is shaping our everyday lives. But along with convenience come questions about accuracy, privacy, and trust. Let’s break it down in simple words so you know what’s happening behind the lens.

 

What is Face Recognition Software

Face recognition software is a type of biometric technology that identifies or verifies a person by analyzing facial features. Think of it as a digital fingerprint, but for your face.

The process usually starts with face matching software, which compares a captured image to existing images in a database. This allows systems to confirm if two faces belong to the same individual.

For everyday people, the most relatable example is your smartphone. Every time you unlock it by looking at the screen, the phone uses a form of this software to confirm your identity.

 

How Face Recognition Software Works Behind the Scenes

At first glance, it feels magical. But under the hood, face recognition is powered by math, algorithms, and a whole lot of data crunching.

 

1. Data Capture and Photo Face Detection Software

It starts when a camera captures your face. The photo face detection software identifies the position of your eyes, nose, mouth, and chin. These landmarks form the foundation of your facial “map.”

2. Feature Extraction with Algorithms

Next, the software measures distances between facial features, like the space between your eyes or the curve of your jawline. These measurements are converted into numerical data known as a faceprint.

3. Matching Process with Databases

Finally, the system compares this faceprint against a database of known faces. If there’s a match within the confidence threshold, the system identifies the individual.
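Computationally, the three steps above reduce to comparing embedding vectors against a threshold. A minimal pure-Python sketch with made-up 4-dimensional “faceprints” (real systems use 128- to 512-dimensional embeddings produced by a neural network):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, database, threshold=0.9):
    """Return the best-matching identity above the confidence threshold, else None."""
    best_name, best_score = None, threshold
    for name, faceprint in database.items():
        score = cosine_similarity(probe, faceprint)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional faceprints; names and values are invented.
database = {"alice": [0.9, 0.1, 0.3, 0.5], "bob": [0.1, 0.8, 0.7, 0.2]}
probe = [0.88, 0.12, 0.31, 0.49]  # a new capture, close to alice's stored faceprint
print(identify(probe, database))
```

The confidence threshold is the knob behind the accuracy trade-off discussed later: raise it and you reject more impostors but also more genuine matches; lower it and the reverse happens.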

Best Face Recognition Software Applications in Real Life

This technology is not limited to spy movies. It’s deeply integrated into industries we interact with daily.

Here are the most common applications:

Smartphones and gadgets – Unlocking phones, securing payments, and managing app access.

Airports and border control – Faster identity checks, reducing wait times for travelers.

Healthcare – Identifying patients and protecting medical records.

Banking – Preventing fraud with stronger security measures.

Retail – Recognizing VIP customers or preventing theft.

Law enforcement – Finding missing persons or identifying suspects in crowds.

A growing use is facial recognition software for photos, where apps automatically tag friends or group images. Social media platforms rely heavily on this feature, which has made photo management much easier for users worldwide.

Comparing the Top Facial Recognition Software Options

With so many tools available, how do you know which one stands out? Independent evaluations, like the NIST Face Recognition Vendor Test, provide objective data on performance. You can also check the FRVT 1:1 performance reports for in-depth benchmarking.

Here’s a simplified comparison table of criteria that matter most:

Criteria | Why It Matters | What to Look For
Accuracy | Correctly identifying or verifying faces | High true positive rate
Speed | How quickly results are delivered | Real-time or near real-time
Scalability | Handling millions of faces | Cloud or distributed systems
Compliance | Following laws like GDPR | Transparent privacy policies
Cost | Fits your business budget | Flexible pricing models

This breakdown helps businesses pick the top facial recognition software for their specific needs.

 

Privacy and Legal Concerns with Face Recognition

Now comes the elephant in the room. As powerful as this technology is, it raises eyebrows when it comes to personal freedom.

Data storage – Where are your facial scans stored, and for how long?

Consent – Are you being recognized without agreeing to it?

Misuse – Could governments or companies abuse this technology for surveillance?

In Europe, these questions tie directly into GDPR compliance. The rules emphasize transparency, data minimization, and user rights. If an organization mishandles face data, the penalties can be steep.

A 2021 study found that 56 percent of people worry about misuse of facial recognition by authorities. This shows that while the tech is impressive, trust remains fragile.

 

Open Source Face Recognition Options for Developers

Not all solutions are locked behind expensive paywalls. Developers and small businesses often turn to open-source face recognition tools. These options allow for flexibility, customization, and cost savings.

Advantages of open-source tools include:

Free or low-cost access to powerful libraries.

Large communities that support development.

Ability to customize for unique projects.

Faster innovation through collaboration.

One notable resource is the Recognito Vision GitHub, where developers can explore codebases, contribute, and experiment with new applications.

 

Future Trends in Face Recognition Technology

The pace of innovation isn’t slowing down. Researchers are refining algorithms to improve speed and reduce bias.

Future trends to watch:

Ethical AI – Systems that reduce bias across race and gender.

Edge computing – Processing data on devices instead of servers for faster results.

Integration with IoT – Smart cities that use recognition for traffic, safety, and efficiency.

Privacy-first models – More tools will adopt privacy-by-design frameworks.

Experts predict that within the next decade, face recognition will be as common as passwords are today, though hopefully far more secure.

 

Conclusion

Face recognition software is no longer futuristic tech; it’s a reality shaping security, convenience, and even social interactions. From photo face detection software to face matching software, its reach is growing rapidly. Yet the real challenge is balancing innovation with privacy. Companies that master this balance will win trust in the long run.

And speaking of innovation, Recognito is one brand pushing these boundaries with responsible and practical applications.

 

Frequently Asked Questions

 

What is the difference between face detection and face recognition?

Face detection finds and locates a face in an image, while recognition goes a step further by identifying or verifying who that person is.

Is face recognition software always accurate?

No, accuracy depends on the algorithms, quality of data, and lighting conditions. According to NIST tests, top systems can reach over 99 percent accuracy in controlled settings.

Can face recognition software work with old photos?

Yes, many systems can analyze older images. However, accuracy may decrease if the photo quality is low or the person has aged significantly.

Is open source face recognition safe to use?

Yes, but it depends on how it’s implemented. Open-source tools are flexible, but developers must ensure strong security practices when handling sensitive data.

How does face recognition affect privacy rights?

It raises major concerns about surveillance and consent. Laws like GDPR in Europe require companies to handle facial data transparently and responsibly.

Thursday, 25. September 2025

Extrimian

How Extrimian Drives Digital Trust in Healthcare

Why are identity and data critical in healthcare?

Healthcare, both public and private, faces a structural challenge: managing massive volumes of sensitive data from patients, professionals, and institutions while ensuring accuracy, security, and transparency. So how does Extrimian drive digital trust in healthcare?

Today’s systems are fragmented. Patient admissions, authorizations, professional validations, or organ transplant waiting lists still rely on manual processes or disconnected databases. The consequences are severe:

Excessive bureaucracy → long delays for authorizations, transplants, or referrals.

Hidden costs → thousands of hours in manual administrative work.

Fraud risks → falsified medical degrees or manipulated patient records.

Social distrust → patients unsure if they are on the correct waiting list; doctors lacking visibility into processes.

In a sector where every minute can make the difference between life and death, the question becomes urgent: How can healthcare systems modernize identity and data management without sacrificing security or trust?

What does Extrimian propose to solve these challenges?

Extrimian provides an ecosystem of Verifiable Credentials (VCs) and digital identity tools enabling hospitals, clinics, insurers, and public agencies to:

Issue and validate credentials in seconds, instead of manual processes taking days.

Guarantee advanced security, with tamper-proof, instantly verifiable records.

Ensure compliance with international standards (W3C, DIF, GDPR, HIPAA).

Optimize costs and resources, cutting bureaucracy and human errors.

Improve patient and professional experience, simplifying access and workflows.

All built on principles of privacy by design, interoperability, and open standards.

How does self-sovereign identity (SSI) apply to healthcare?

Self-Sovereign Identity (SSI) places individuals at the center of control over their personal data.

For patients: medical history, diagnoses, or lab results can be issued as portable, verifiable credentials.

For medical professionals: degrees, licenses, and certifications are turned into tamper-proof VCs that any hospital can instantly verify.

For institutions: each credential is validated without intermediaries and easily integrated into existing hospital systems.

SSI does not replace health systems; it strengthens them with a new layer of trust.
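Conceptually, a verifiable credential is a signed claim that anyone can check without calling the issuer. The sketch below shows the shape of a W3C-style credential and an illustrative integrity check. Real VC stacks sign with asymmetric keys (e.g., Ed25519) and JSON-LD or JWT proofs; the HMAC here is a simplified stand-in, and every DID, key, and field value is invented:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's private key

def issue_credential(subject_did, claims):
    """Issue a W3C-shaped credential with an illustrative HMAC 'proof'."""
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "MedicalLicenseCredential"],
        "issuer": "did:example:hospital",
        "credentialSubject": {"id": subject_did, **claims},
    }
    payload = json.dumps(credential, sort_keys=True).encode()
    credential["proof"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return credential

def verify_credential(credential):
    """Recompute the proof; any tampering with the claims invalidates it."""
    body = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = issue_credential("did:example:dr-garcia", {"license": "MD-12345"})
print(verify_credential(vc))                      # the issued credential checks out
vc["credentialSubject"]["license"] = "MD-99999"   # forgery attempt
print(verify_credential(vc))                      # tampering is detected
```

This is why verification takes seconds and needs no intermediary: the proof travels with the credential, and the check is a local computation.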

Case Study: How Extrimian helped INCUCAI improve Argentina’s transplant system

The Instituto Nacional Central Único Coordinador de Ablación e Implante (INCUCAI) faced a long-standing challenge: managing the national emergency transplant waiting list.

The problem

Slow processes in organ allocation.

Limited transparency in prioritization.

Patients and families receiving little real-time information.

The Extrimian implementation

Extrimian introduced verifiable credentials to build traceability and trust into the national list:

Every update in the list is issued as a verifiable credential.

Patients and doctors can instantly verify position and status.

All changes are validated securely, without risks of tampering.
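The traceability property described above can be approximated with a hash chain: each waiting-list update commits to the previous entry, so any retroactive edit breaks every later link. A toy sketch (patient IDs invented; the actual INCUCAI integration uses verifiable credentials, not this simplified chain):

```python
import hashlib
import json

def append_update(chain, update):
    """Append a waiting-list update that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"update": update, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def chain_is_valid(chain):
    """Re-walk the chain; a tampered entry breaks its own hash and every later link."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"update": entry["update"], "prev": entry["prev"]}
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_update(chain, {"patient": "P-001", "position": 1})
append_update(chain, {"patient": "P-002", "position": 2})
print(chain_is_valid(chain))        # the untouched history verifies
chain[0]["update"]["position"] = 5  # retroactive tampering
print(chain_is_valid(chain))        # the forgery is detected
```

Patients and regulators can then audit the full history of the list rather than trusting a mutable database row.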

The results

Significant time reduction in allocation and updates.

Full transparency for patients, professionals, and regulators.

Improved patient experience through clear communication.

Strengthened trust in one of the most sensitive areas of healthcare.

This pioneering use case demonstrated how Extrimian’s technology can save lives by enhancing transparency and efficiency in public healthcare.

More about this case study: Extrimian & INCUCAI

What other use cases does Extrimian enable in healthcare?

1. Medical professional identity verification

Problem: manual validation of degrees and licenses.
Solution: verifiable credentials that confirm authenticity instantly, eliminating fraud risks.

2. Verifiable medical records

Problem: fragmented medical histories between hospitals, insurers, and regions.
Solution: interoperable VCs that patients can carry and present anywhere, securely and instantly.

3. Smart access to healthcare services

Secure login for hospital web portals.

QR- and VC-based access control for labs, operating rooms, and medical events.

Automated attendance for in-person and virtual consultations.

4. Patient benefit networks

VCs as digital passes for transportation or pharmacy discounts.

Integration with insurance, pharmacies, and wellness services.

5. Academic and professional certifications

Credentials for courses, residencies, and specializations issued as VCs.

Streamlined hiring and international mobility for healthcare professionals.

What tangible benefits do healthcare institutions gain?

Institutional prestige: issuing VCs with the institution’s brand boosts trust and modernity.

Advanced security: tamper-proof credentials reduce fraud.

Operational efficiency: automated processes cut costs and errors.

Enhanced patient experience: simpler, faster, user-centric interactions.

Strategic partnerships: connection with fintech, insurance, and other key sectors.

Global compliance: alignment with W3C and DIF standards ensures global acceptance.

How is Extrimian implemented in healthcare institutions?

Step 1: Personalized demo

Showcasing practical use cases like patient admission or credential verification.

Step 2: Modular implementation

Start with one specific case (e.g., issuing medical certificates) and scale up to a full ecosystem.

Step 3: Continuous support

Training workshops and Extrimian Academy.

Ongoing technical support.

ROI measurement with clear impact metrics.

What is the ROI of verifiable credentials in healthcare?

Administrative savings: up to 60% time reduction in credential verification.

Fraud reduction: fewer legal risks and malpractice cases.

Efficiency gains: processes that once took days now take seconds.

Intangible value: reinforced patient trust and institutional reputation.

For a hospital serving 10,000 patients annually, the potential savings amount to hundreds of thousands of dollars, alongside a substantial boost in credibility.

Conclusion: towards a more trusted, efficient, and human healthcare system

Healthcare needs trust, agility, and security. With Extrimian, identity verification and data management stop being a problem and become a competitive advantage.

The INCUCAI case proves it is possible to reduce delays, increase transparency, and improve patient and professional experiences. And this is just the beginning: from private hospitals to national public networks, verifiable credentials can raise the standard of trust in healthcare worldwide.

👉 Want to explore how these benefits could work in your institution?
Schedule a personalized demo with the Extrimian team today.

The post How Extrimian Drives Digital Trust in Healthcare first appeared on Extrimian.


Holochain

How Does Desirable Social Coherence Evolve?

Blog
Reflections from the DWeb Seminar

In August I had the privilege of participating in the DWeb Seminar 2025, an intimate gathering designed to “map the current DWeb technological landscape, learn from each other, and define the challenges ahead”. For those unfamiliar with the event, Wendy Hanamura’s excellent recap captures the spirit and outcomes beautifully. As part of the event we were invited to offer a 15-minute “input talk” to the other participants. I chose to share a fundamental question that has driven Holochain from its inception, and to explore how this question shapes not just our technology but our entire approach to building decentralized systems.

The Core Question: How Does Desirable Social Coherence Evolve?

Everything we do at Holochain (and the projects that I've been nurturing through Lightningrod Labs, like Moss and Acorn) stems from this central inquiry. But what do I mean by “desirable social coherence” and why does it matter? 

You can think of social coherence as a group’s long-term stability. Like most things, this property exists along a gradient: some social bodies have more coherence than others, depending on their capacity to respond and adapt to environmental changes as a result of the patterns, practices, and organizing principles they operate by. But therein lies the rub. Some of these patterns provide lots of coherence, yet they may not be desirable or pleasant for the individuals taking part in them! An authoritarian regime is no fun for almost everyone involved, but it does have a real degree of stability. My fundamental belief, however, is that it is not only possible to evolve these patterns and processes in directions that participants find pleasant and desirable, but that doing so actually yields the most long-term stability, because satisfied participants will not work to destabilize the system.

The Challenge: Current digital systems scale through centralization and intermediation of critical social functions. Unfortunately, this creates undesirable forms of social coherence – power imbalances that enable both intentional and unintentional abuse. When a few entities control the platforms where billions interact, we may get coherence, but it's often extractive rather than generative. Furthermore, our current systems are difficult to evolve precisely because of their centralization and the interests that want to keep them that way to maintain power.

The Opportunity: Decentralized technology can create substrates for evolvable social coherence – essentially, DNA for social organisms. Instead of rigid, centralized structures, we can build infrastructure that enables new forms of social fabric to emerge at multiple scales, yielding increasing collective intelligence.

A key insight here is that there is no single “correct” form of social coherence. What works is contextual, diverse across time, space, and scale. What we need is infrastructure that enables continuous evolution and discovery – balancing stability with emergence. 

How This Shapes Our Work at Holochain

This framework isn’t abstract philosophy - it directly informs every architectural decision we make. When building technology to support evolvable social coherence, several principles become essential:

Engagement Spaces as Building Blocks

Human social fabric is built out of layers of interacting “engagement spaces” – essentially social contracts with defined rules. We need infrastructure that makes it easy to create, use, and compose these spaces. The current web may have “solved for” decentralization of publishing – anyone can create a website or blog without permission. But the places where people actually interact and engage with each other (social media platforms, forums, collaborative tools, even finance and accounting tools) remain under intermediary-controlled web servers. Our approach requires protocols where neither the data nor the rules of group interaction are held by intermediaries.

Agency AND Accountability, Mutually Interwoven

Individuals need genuine agency through their technology - the ability to participate in multiple spaces, move between them, and take their data with them. But this autonomy must be paired with accountability within the contexts where they participate. This tension between empowerment and responsibility is productive, not problematic.

Uncapturable/Unenclosable Carriers: The infrastructure itself must be immune to capture – meaning no single entity can gain enough control to dictate rules, extract value, or shut down the system. We’ve seen far too many examples of infrastructure capture: governments shutting down internet services during protests, platform owners changing terms to benefit their shareholders, or cloud providers being pressured to deplatform users. Even when specific engagement spaces have their own defined rules, the underlying “carrier” of those interactions must remain decentralized. This enables autonomous group formation without intermediation – groups can organize however they choose without worrying that their technological foundation can be pulled out from under them.

Local State, Global Visibility: Rather than forcing artificial global consensus (like blockchains do), we recognize that state is inherently local but can achieve consistent global visibility if nodes share data.  Operating this way eliminates unnecessary coordination bottlenecks while maintaining system coherence. 

Architectural and Design Consequences

The principles stated above have very concrete design and implementation consequences.  For those technically familiar with Holochain you already know how they show up in the design, but here I list some of the key aspects along with pointers to documentation that describe each consequence in more detail.

1. Start with a capacity to define and create a known “engagement space” – the “rules of a game”. This consists of the hash of a set of data-types & relations and deterministic validation rules for the creation of that data. In Holochain we call this the DNA.
2. Allow agents to be the authoritative source of all data, i.e. agents “make plays” according to the rules of the DNA. Ensure that when this data is shared, it has intrinsic data integrity, i.e. it’s a cryptographically signed append-only ledger for that source (in Holochain we call this the Source Chain), and ensure that it is identifiable as part of an engagement space by making the first entry in the chain the agent’s signing of the space’s hash. This is also “I consent to play this game”.
3. Share data to an eventually consistent Graphing Distributed Hash Table (DHT), in which other agents validate that all shared data follows the rules of the game.
4. Ensure that agents who don’t follow the rules can be blocked/ignored. This prevents capture.
5. Allow for “bridge” calls between engagement spaces at the agentic locus (i.e. not at the group level). This ensures composability, autonomy, and accountability.

There are of course more details in the design, but these are some of the key ones that fall out of the principles.
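These consequences can be sketched in miniature. The following is an illustrative Python sketch under stated assumptions, not Holochain's actual implementation: it uses plain content hashes where Holochain uses cryptographic signatures, a single in-memory list where Holochain distributes entries across a validating DHT, and made-up names (`SourceChain`, `validate`, the toy "post" rule) chosen only for illustration.

```python
import hashlib
import json

def digest(obj) -> str:
    """Content-address an object by the SHA-256 of its canonical JSON."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# The "rules of the game": data types plus validation rules, identified by hash.
dna = {"types": ["post"], "rule": "post body must be non-empty"}
dna_hash = digest(dna)

def validate(entry) -> bool:
    """Deterministic validation that any DHT node could re-run independently."""
    return entry["type"] != "post" or len(entry["body"]) > 0

class SourceChain:
    """An agent's append-only, hash-linked ledger of plays."""

    def __init__(self, agent: str, dna_hash: str):
        self.agent = agent
        self.entries = []
        # First entry references the space's hash: "I consent to play this game."
        self._append({"type": "join", "dna": dna_hash, "body": "consent"})

    def _append(self, entry):
        prev = digest(self.entries[-1]) if self.entries else None
        self.entries.append({"author": self.agent, "prev": prev, **entry})

    def play(self, entry):
        if not validate(entry):
            raise ValueError("entry breaks the rules of the game")
        self._append(entry)

chain = SourceChain("alice", dna_hash)
chain.play({"type": "post", "body": "hello, space"})
assert chain.entries[0]["dna"] == dna_hash                   # consent names the space
assert chain.entries[1]["prev"] == digest(chain.entries[0])  # tamper-evident linkage
```

Because each entry commits to the hash of its predecessor, altering any earlier "play" would invalidate every later link, which is the property that lets other agents hold an author accountable without any central referee.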

Resonance at the DWeb Seminar

What struck me most about the seminar was how much of our framework resonated with challenges other participants were grappling with, even when they approached them from different angles.  I would even say that the Seminar itself was fundamentally an example of this thinking.  It was a carefully designed set of patterns and processes  for a literal engagement space (this time physical instead of digital) whose purpose was to increase the social coherence of players in the p2p domain.  These patterns not only included the processes of the input-talks, the unconference sessions, and commitment to production of a collaborative write-up, but also the relational parts of cooking together and sharing non-work time together.  All of this together created desirable social coherence.   And it’s this pattern that we are all trying to create powerful affordances for in the digital world.

Some further examples: During the unconference sessions, conversations kept circling back to fundamental questions about coordination, autonomy, and accountability. 

When we discussed "UI Patterns for Peer-to-peer," I saw it as asking: how do we make decentralized engagement spaces feel natural and empowering to users? When we debated collaborative data model requirements, I saw it as exploring: how do we maintain coherence across distributed participants without sacrificing agency?

When Rae McKelvey shared her focus on "purpose-built apps" that solve real social problems, to me that aligned perfectly with the engagement space concept—recognizing that different contexts require different rules and structures.

At the technical level David Thompson's work on object capabilities and Duke Dorje’s work on recryption and identity both live into the same autonomy-with-accountability tension we see as central to social coherence.  The ever-present discussions about how best to implement CRDTs (Conflict-free Replicated Data Types, of which Holochain’s DHT is an example) revealed the shared underlying assumption: that meaningful coordination really is possible without central control, that local autonomy and global coherence can coexist, and most profoundly that the infrastructure we build shapes the social possibilities it enables.

But if everything resonated so well, what’s the big deal?

Why This Matters for the Decentralized Ecosystem

Probably the most common complaint I’ve heard over the years from folks who see the astounding potential of decentralized infrastructure goes something like this:  “There are so many different p2p solutions, and teams that seem to be working in isolation, why can’t you just agree on a single solution and work together?”  On the surface, this sounds like a reasonable complaint, but the lens of coherence helps understand why “working together” is actually such a hard problem to solve.  

Recalling from the start of this article: what creates coherence are the patterns, practices and organizing principles of a group. Just because groups have the same goals and want the same outcomes does not mean that their patterns, practices and organizing principles are similar and compatible. In fact, almost always, they aren’t. But this relates to why the DWeb Seminar was so important. It successfully operated according to a higher-order organizing principle that created an engagement space precisely for the purpose of getting at what patterns, practices and organizing principles folks in the broad DWeb community were operating by, and making them visible.

So to me this was an example of exactly the underlying principles that we’ve been embedding in Holochain’s architecture from the start.

So, while the decentralized web movement often focuses on technical capabilities – faster consensus, better cryptography, more efficient protocols – we are now seeing the community begin to seriously treat these as means, not ends. The higher-level question remains: what kinds of social possibilities can these technologies enable?

This approach enables us to build towards greater “commons enabling infrastructure” - technology that strengthens shared resources and collective capacity rather than extracting value. The creation of digital, unenclosable fabric of engagement spaces is central to this goal. Instead of platforms that capture value from user interactions, we can build infrastructure that enables communities to create and govern their own spaces, according to their own values. 

When the decentralized ecosystem embraces this approach, many new possibilities emerge:

Interoperability with Purpose: We can more easily build bridges between systems that share compatible social intentions. A climate action network could seamlessly share data and coordinate with a local food co-op using a different protocol, supporting community resilience initiatives that address both environmental and food security challenges, while using mutual-credit currencies backed by the productive capacity of the local farms supplying the co-op.

Governance that Evolves: We can build infrastructure that enables continuous governance innovation rather than trying to solve governance once and for all. A neighborhood mutual aid group could start with simple coordination tools, then gradually evolve more sophisticated decision-making processes as their needs change, without having to migrate to entirely new platforms.

Network Effects that Serve Users: We can create composable ecosystems where network effects benefit participants rather than extracting from them. As more people join a decentralized social network, the benefits – better content discovery, richer discussions, stronger community bonds – flow to the participants themselves rather than to a platform owner’s advertising revenue.

The Path Forward

The grand challenge of decentralized software is ensuring it actually delivers on evolvable social coherence. This means building infrastructure that serves the flourishing of people and planet rather than extracting from it. 

At Holochain, we’re committed to this path, not just in our technology choices, but in how we organize ourselves, engage with our community, and collaborate with other projects. The conversations at the DWeb Seminar reinforced that we’re not alone in this commitment. 

The adjacent possibility that Wendy described in her recap isn’t just about new technical capabilities – it’s about new forms of social organization that those capabilities make possible. That’s both a tremendous responsibility and an extraordinary opportunity for all who choose to walk this path.


Veracity trust Network

Are AI Agents a threat to all industries or just another digital tool?


AI Agents are a growing influence on how we do business online and it pays to be aware of how they work – and the potential risks they expose.

Also known as Agentic AI, they are defined as autonomous systems that perceive, make decisions, and take action to achieve specific goals within an environment.

The post Are AI Agents a threat to all industries or just another digital tool? appeared first on Veracity Trust Network.


Ocean Protocol

DF156 Completes and DF157 Launches

Predictoor DF156 rewards available. DF157 runs September 25th — October 2nd, 2025

1. Overview

Data Farming (DF) is an incentives program initiated by ASI Alliance member, Ocean Protocol. In DF, you can earn OCEAN rewards by making predictions via ASI Predictoor.

Data Farming Round 156 (DF156) has completed.

DF157 is live today, September 25th. It concludes on October 2nd. For this DF round, Predictoor DF has 3,750 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF157 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF157

Budget. Predictoor DF: 3.75K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and ASI Predictoor

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF156 Completes and DF157 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

4 Tips for Developers for Using Fastly’s Sustainability Dashboard

Track the real-world emissions of your Fastly workloads. This blog shares practical tips on using the Sustainability dashboard for greener, faster code.

Wednesday, 24. September 2025

liminal (was OWI)

The Silent Killer in Third-Party Risk: Why Behavioral Red Flags Matter More Than Checklists

The hidden risks behind vendor relationships It starts innocently enough. A supplier begins missing deadlines. A long-trusted partner suddenly resists contract changes. Payments arrive late, documentation lags, and small deviations creep into everyday interactions. These aren’t just operational hiccups—they’re behavioral red flags. For years, third-party risk management (TPRM) relied on static com
The hidden risks behind vendor relationships

It starts innocently enough. A supplier begins missing deadlines. A long-trusted partner suddenly resists contract changes. Payments arrive late, documentation lags, and small deviations creep into everyday interactions. These aren’t just operational hiccups—they’re behavioral red flags.

For years, third-party risk management (TPRM) relied on static compliance checklists: audits, certifications, and one-off questionnaires. But today’s risk environment has outpaced that model. Subtle engagement shifts often signal vendor instability—or even fraud—well before a failed audit or regulatory breach brings it to light. The stakes are growing. A single vendor misstep can trigger multimillion-dollar losses, regulatory scrutiny, and reputational fallout. In 2025, the risk that matters most isn’t what the audit catches—it’s what it misses.

What Is Third-Party Risk Management?

Third-party risk management (TPRM) is the discipline of identifying, assessing, and mitigating risks that arise from vendors, suppliers, and business partners. It goes beyond contract compliance to cover financial, cybersecurity, operational, and reputational exposures.

Why compliance checklists fall short

Traditional compliance frameworks provide assurance, but they’re backward-looking. By the time an issue surfaces in an audit, the damage may already be done.

Complex risks are growing: According to Liminal’s Market & Buyer’s Guide for TPRM, 33% of organizations cite complexity of risks as the top barrier to effectiveness—outranking resources or legacy systems.
Budgets are shifting: The same research shows that two years ago, 77% of businesses devoted 10% or less of their budgets to TPRM. Today, 84% say funding is sufficient—a 42% improvement.
Maturity remains low: Despite rising investment, only 9% of organizations have achieved “advanced” TPRM maturity, underscoring how far the market still has to go.

Static compliance isn’t enough when risk signals emerge daily in behavior, process, and relationships.

The market is moving fast

The risk isn’t just theoretical—the market for third-party risk management is expanding quickly. Liminal’s research shows that while sentiment on budget sufficiency has improved by 42% in two years, only 9% of organizations have achieved advanced maturity.

It’s a sign that boards and executives see TPRM as too important to ignore—but most are still playing catch-up. As Gartner notes, organizations that fail to modernize vendor risk programs face increasing exposure across cybersecurity, compliance, and operational resilience.

Market & Buyer’s Guide for Third-Party Risk Management 2025, p.19

From checklists to behavioral red flags

Behavioral red flags—missed SLAs, contract resistance, data delivery delays, unusual communication shifts—are leading indicators of risk. Unlike static compliance, they reveal real-time vulnerabilities and allow earlier intervention. Behavioral risk monitoring is the practice of tracking deviations in how vendors operate and interact that can signal early signs of instability or misconduct.

The most effective programs are:

Embedding continuous monitoring rather than point-in-time reviews.
Integrating behavioral insights into enterprise-wide dashboards.
Automating alerts when engagement patterns deviate from norms.
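Automated deviation alerts can be surprisingly simple at their core. The following is an illustrative sketch, not any vendor's product: a hypothetical z-score check that flags a vendor whose current-week behavior departs sharply from that vendor's own history (the metric names and thresholds are assumptions made up for the example).

```python
from statistics import mean, stdev

def engagement_alerts(history, current, threshold=2.0):
    """Flag metrics whose current value deviates sharply from the
    vendor's own historical norm, using a simple z-score test."""
    alerts = []
    for metric, values in history.items():
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:  # no historical variation: skip rather than divide by zero
            continue
        z = (current[metric] - mu) / sigma
        if abs(z) >= threshold:
            alerts.append((metric, round(z, 1)))
    return alerts

# Eight weeks of history for one vendor: deliverable lateness and support response.
history = {
    "days_late":      [0, 1, 0, 1, 0, 1, 0, 1],
    "response_hours": [4, 5, 4, 6, 5, 4, 5, 5],
}
current = {"days_late": 6, "response_hours": 30}  # this week's behavior

print(engagement_alerts(history, current))  # both metrics flagged
```

Real programs would weight signals, suppress noise, and feed alerts into dashboards, but the design point stands: the baseline is each vendor's own behavior, not a static checklist.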

This shift mirrors risk management trends across Data Access Control and AI Data Governance—executives no longer want box-checking. They want predictive visibility into the risks that can derail operations, undermine vendor resilience, and erode supplier trust.

Market & Buyer’s Guide for Third-Party Risk Management 2025, p.18

What executives are demanding now

For boards and CISOs, vendor risk has become strategic infrastructure: as vital to credibility as financial reporting or data security. The new priorities are clear:

Continuous monitoring: Liminal’s Regulatory TPRM Link Index shows that 63% of buyers rank this as their top priority.
Automation at scale: 42% cite automation of TPRM activities as their top optimization goal.
Data quality: Cybersecurity TPRM buyers emphasize accuracy (89%) and monitoring (85%) as table stakes, guided by emerging frameworks such as NIST’s Cybersecurity Framework.
Cross-functional orchestration: Operational buyers demand interoperability across compliance, procurement, and security.

These shifts signal the end of siloed vendor risk teams. The winners will be those who connect behavioral risk detection into broader enterprise resilience strategies.

The executive reality check

Boards no longer accept “checklist compliance” as proof of safety. Regulators and investors expect real-time assurance. Yet with only 9% of organizations achieving advanced TPRM maturity, most enterprises remain exposed.

The Wall Street Journal recently reported on how supply chain disruptions and vendor failures are forcing boards to elevate TPRM to a core resilience strategy—not just a compliance function. It’s a signal that the market is moving fast, and expectations are rising. Regulatory frameworks are evolving in parallel. The SEC now requires detailed cyber disclosures, the EU GDPR continues to impose significant fines, and NIST provides baseline guidance for organizations modernizing their risk programs.

By acting on behavioral red flags, enterprises strengthen resilience and trust. Ignoring them leaves blind spots that regulators and investors won’t overlook.

Turning behavioral insight into advantage

Behavioral risk monitoring isn’t just a compliance upgrade. It’s a competitive advantage. By weaving continuous monitoring and behavioral insights into third-party risk management, executives can:

Protect against operational and financial losses.
Demonstrate resilience to regulators.
Build stronger trust signals with investors, customers, and suppliers.

👉 Dive deeper in the Market & Buyer’s Guide for Third-Party Risk Management and explore the Cybersecurity, Operational, and Regulatory Link Indexes to see how leading enterprises are raising the bar.

👉 Watch our Webinar on TPRM Strategy & Stronger Risk Management to hear how leaders are operationalizing these shifts in real time.

The post The Silent Killer in Third-Party Risk: Why Behavioral Red Flags Matter More Than Checklists appeared first on Liminal.co.


Indicio

How decentralized identity delivers next generation authentication and fraud prevention

Decentralized identity and Verifiable Credentials remove the vulnerabilities driving generative-AI, social engineering, and synthetic identity fraud at a significantly lower cost than legacy or alternative solutions. How? The technology allows you to just bypass these problems. With Indicio Proven, you get authentication and fraud prevention in a single, affordable, globally interoperable platform.

By Trevor Butterworth

The new report by Liminal — The Convergence of Authentication and Fraud Prevention — makes for stark reading.

Fraud losses in the U.S. alone are projected to double in just three years to $63.9 billion, with account takeover fraud accounting for half. Seventy-one percent of respondents to their survey of 200 buyers in retail, ecommerce, financial services and tech believe current methods of authentication may be insufficient to thwart generative-AI social engineering attacks. And almost two-thirds worry that additional security layers will add unacceptable friction to customer and user experience.

One could say the problem is that the technology powering fraud is more powerful than the technology powering authentication and fraud prevention. And the latter’s weakness is compounded by authentication and fraud prevention being two separate processes, often managed by multiple different vendors.

The solution is more of everything — more layers of defense, multi-level signals analysis, more authentication factors, and good AI to battle the bad AI. All of which translates into more complexity, friction, and cost. No surprise, Liminal also reports increasing budgets for authentication, account takeover protection, and social engineering scam prevention, and it projects these budgets will continue increasing year-on-year.

Meanwhile, customers and consumers — many of whom are digital natives — expect seamless, frictionless interaction and not painful multifactor authentication. As a result, organizations face brutal tradeoffs: cater to digital behavior and increase risk, or decrease risk but make customers pay in friction and risk losing them.

Fix the fundamental problem

There’s a reason the technology powering fraud has the upper hand: The legacy systems organizations rely on — username/password,  stored biometrics, centralized databases filled with personal data — are all vulnerabilities easily exploited by brute-force AI attacks, synthetic identity fraud, and deepfakes.

Remove these vulnerabilities and you remove these problems. That’s what decentralized identity does. It removes the need for usernames, passwords, and the centralized storage of personal data needed to manage identity and access.

That’s what Indicio’s customers are doing — sweeping away the digital structures and processes that are the cause of all these problems.

We replace this with Verifiable Credentials. They’re a simple way for each party in a digital interaction — customers, organizations, employees, devices, virtual assistants — to authenticate the other in a way that can’t be phished, hacked, or faked; and we do this authentication before any data is shared.

Verifiable Credentials reduce fraud by enabling digital credentials to be bound to individuals in a way that is cryptographically tamper-proof, and which can incorporate biometrics that have been authenticated. This closes off attack vectors like phishing, synthetic identities, and — with an authenticated biometric in a Verifiable Credential — deepfakes.

A person with an authenticated biometric in a Verifiable Credential has a portable digital proof of themselves that can be instantly corroborated against a liveness check.
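The tamper-evidence property is worth making concrete. The following is a rough illustration only, not Indicio's implementation: real Verifiable Credentials use public-key signatures (such as Ed25519), while this sketch substitutes a Python stdlib HMAC as the issuer's proof, and the claim fields are invented for the example.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-demo-key"  # stand-in; real VCs use public-key cryptography

def issue_credential(claims: dict) -> dict:
    """Issuer binds the claims (including a biometric hash) with a proof."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(cred: dict) -> bool:
    """Verifier recomputes the proof; any tampering invalidates it."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["proof"], expected)

biometric_hash = hashlib.sha256(b"liveness-checked face template").hexdigest()
cred = issue_credential({"name": "A. Customer", "biometric": biometric_hash})

assert verify_credential(cred)           # authentic credential verifies
cred["claims"]["name"] = "B. Fraudster"  # tamper with a claim...
assert not verify_credential(cred)       # ...and verification fails
```

Because the proof covers the biometric hash along with the other claims, a verifier can check the credential's integrity first and only then corroborate the holder against a live biometric check.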

A decentralized identity architecture changes everything. It integrates authentication and fraud prevention, creates unified digital identities, and enables data to be fully portable, trusted and acted on immediately — without friction to businesses or customers.

Just as important, it’s significantly less expensive than legacy or alternative solutions; it can be layered into existing systems, meaning that it’s a solution that, depending on the scope, can be implemented in days or weeks.

Don’t take our word, see what our customers are doing

Indicio and its customers — enterprises, financial services,  governments — have had enough of the same old same old. We and they are using Verifiable Credentials to cross borders, onboard customers, and authenticate account access — all seamlessly with the highest level of digital identity assurance.

It might be hard to believe that a solution could be that simple — that you can just remove the core vulnerabilities fueling the surge in identity-related fraud and not have to rip and replace your entire authentication infrastructure.

Contact us to see a demo — and discover how Indicio Proven is being used as a single authentication and fraud prevention system to create seamless and trusted digital interaction.

The post How decentralized identity delivers next generation authentication and fraud prevention appeared first on Indicio.


FastID

Fastly’s Pillars of Resilience: Building a More Robust Internet

Discover Fastly's Pillars of Resilience: unwavering availability, minimized latency, and disruption resistance for a robust internet experience with our global network.

Tuesday, 23. September 2025

IDnow

Why banks need modular KYC solutions to future-proof compliance: Insights from Finologee’s Carlo Maragliano.

We sat down with Carlo Maragliano from digital platform Finologee to explore how financial institutions are getting ready for the evolving regulatory landscape and how they use technology to accelerate their go-to-market while staying audit-ready and resilient. 

As new regulations such as eIDAS 2.0, AMLR and DORA reshape the compliance landscape across Europe, financial institutions are under pressure to future-proof their onboarding and KYC processes.

Luxembourg-based Finologee, a leading digital platform operator for the financial industry, is helping banks and payment institutions meet regulatory challenges through its KYC Manager, an orchestration layer that combines flexibility with embedded regulatory readiness. By integrating IDnow’s automated identity verification technology, Finologee enables its clients to accelerate go-to-market, simplify compliance and tailor onboarding journeys across regions. With Carlo Maragliano, Head of Delivery and Customer Success at Finologee, we discussed how technology, automation and orchestration are transforming digital identity at scale.

Navigating the evolving regulatory landscape

Regulations such as eIDAS 2.0, AMLD6 and DORA are coming into force soon. How are the changes brought about by these regulations influencing you and your banking clients’ KYC and digital onboarding priorities?

Heightened regulatory complexity is pushing banks to adopt more modular and future-proof KYC solutions. These upcoming regulations are significantly reshaping compliance priorities for financial institutions. For example, eIDAS 2.0 introduces Qualified Electronic Identity (QeID), which makes interoperability and eID support essential. AMLD6 expands criminal liability and due diligence obligations, which increases the need for granular audit trails and automated, risk-based workflows. And with DORA, operational resilience becomes a key focus, requiring stronger vendor oversight, digital continuity and secure third-party integrations. Finologee’s orchestration layer, combined with IDnow’s embedded identity verification, equips institutions to meet these regulatory shifts without having to re-engineer their core systems. 

IDnow’s Automated Identity Verification 

IDnow provides a fully automated identity verification solution that integrates seamlessly with Finologee’s KYC Manager. It supports document authentication from more than 215 international issuing authorities, uses AI-driven checks and biometric liveness detection and helps banks and other regulated industries to reduce onboarding times while ensuring full regulatory compliance. This technology enables companies to verify the identities of their users seamlessly and securely.

Ensuring adaptability in a dynamic regulatory environment

How do you ensure that your solutions remain adaptable as regulations and customer expectations continue to evolve?

We’ve built everything on an API-first modular architecture that enables quick adaptation to regulatory shifts. On top of that, Finologee continuously engages with clients to align roadmap priorities with industry changes. The platform is also fully customisable and configurable, so institutions can tailor onboarding flows, verification steps and compliance logic to specific regulatory requirements, customer segments and regional markets without extensive development effort.

Did you know? Over 55% of consumers are more likely to apply for services if the onboarding process is entirely digital, including online identity verification.

The role of automation in scaling operations

What role does automation play in helping banks scale their operations without sacrificing security or compliance?

Automation is essential for any regulated business: it reduces dependency on manual reviews, lowering both cost and error rates. Automated decisioning also helps apply consistent compliance logic. With real-time workflows, customers can be onboarded faster without sacrificing auditability, while compliance teams gain transparency and control through dashboards and exception handling flows.

What challenges do financial institutions face when trying to scale their compliance and onboarding processes across multiple markets and how does KYC Manager help overcome these hurdles?

Scaling across markets brings several hurdles. Institutions face varying regulatory requirements across countries, different acceptable ID document types and verification standards, and operational silos that slow down onboarding harmonisation. With KYC Manager, we address these challenges through a centralised orchestration layer with localised compliance modules, document coverage across 157 countries enabled by IDnow and a flexible flow builder that allows journeys to be adapted by region or customer type.

Did you know? Banks that increased end-to-end KYC-process automation by 20% saw a triple benefit: quality-assurance scores rose by 13%, customer experience improved as the number of customer outreaches per case fell by 18%, and productivity increased as the number of cases processed per month grew by 48%.

In what ways does the integration between Finologee’s KYC Manager and IDnow’s automated identity verification technology enable faster go-to-market for banks and other financial institutions? Can you share a concrete example?

Because identity verification is pre-integrated, deployment timelines are shortened considerably. This means clients such as banks or other financial institutions can launch new services or expand to new markets faster thanks to embedded regulatory readiness.  

A concrete example: the IDnow verification flow is especially useful when identifying ultimate beneficial owners (UBOs) and persons with significant control (PSCs), i.e. the people who ultimately own or control a company and are legally required to be identified during onboarding. If the person responsible for the dossier doesn’t have their IDs on hand, they can trigger an SMS to the phone number of the UBO or PSC, who can then complete the verification directly.

Scaling across markets and customization

How do you support financial institutions in customizing onboarding journeys for different regions or customer segments?

The Finologee KYC platform enables journey segmentation by geography, product line or risk profile. For instance, workflow logic can automatically route high-risk users to manual review or enhanced due diligence paths.
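The risk-based routing described above can be sketched as follows. The thresholds, region set, and path names are invented for illustration; they are not the platform's actual configuration.

```python
# Hypothetical sketch of risk-based routing. Thresholds, the region
# placeholder, and path names are invented, not real product values.

HIGH_RISK_REGIONS = {"XX"}  # placeholder for higher-risk jurisdictions

def route_applicant(risk_score: float, region: str) -> str:
    """Pick a verification path for an applicant based on risk."""
    if risk_score >= 0.8 or region in HIGH_RISK_REGIONS:
        return "enhanced_due_diligence"  # manual review plus extra checks
    if risk_score >= 0.5:
        return "manual_review"
    return "automated_onboarding"
```

The same scoring logic can be reused across markets, with only the thresholds and region list varying per regulatory regime.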

Looking ahead, what trends do you anticipate will most impact the way banks approach digital identity and compliance at scale?

We see AI and biometrics becoming standard components of fraud prevention. There will also be greater emphasis on accessibility, inclusivity and cross-device onboarding. And more broadly, banks and other financial institutions will be looking to reduce fragmentation through orchestration platforms.

On a personal level, what excites you most about working at the intersection of technology, compliance and financial services? Is there a particular moment or project that made you feel especially proud of the impact you’re making?

For me, it’s seeing how all the pieces come together in practice. One moment that really stood out was supporting a client launch in Luxembourg under tight regulatory deadlines. It was a great example of how the platform can unlock speed, compliance, and user experience all at once – we successfully implemented KYC Manager within just three months, enabling a fully digital account opening process with no paper or printing requirements. On average, our clients see the submission process reduced to under 10 minutes and conversion rates doubled compared to traditional KYC remediation processes, while substantially lowering human error and workload.

Interested in more from our customer interviews? Check out: Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape. DGGS’s CEO, Florian Werner, talked to us about how strict regulatory requirements are shaping online gaming in Germany and what it’s like to be the first brand to receive a national slot licence.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


Ockto

Column: AI is a brilliant teenager

For the HypoVak special of InFinance, Gert Vasse wrote the following column: AI quickly starts to feel like a BFF (best friends forever). It is popping up at a rapid pace in all kinds of handy applications. Recently, for example, Google has started showing a handy AI overview for many search queries. That feature saves you a lot of searching and provides a good summary, including source references.



German and French cybersecurity authorities: watch out for AI fraud in digital identification

Reliable and secure customer identification is a core requirement in the financial sector for complying with laws and regulations (Wwft, AML5, eIDAS, GDPR). With the introduction of ID wallets and eIDAS 2.0 in 2028/2029, the government will provide a structural solution for secure digital identification.



Verified source data: better risk assessment with less manual work

Incomplete files, missing documents, long turnaround times. In many credit processes, collecting customer data is still a time-consuming step. Multiple contact moments are needed, submitted data is unclear, and there is a risk of errors or fraud.



Spherical Cow Consulting

Pirates, Librarians, and Standards Development


“With the right motivation, even I will write a blog post on a dare. And the dare I got today was to write a post about what librarians and pirate captains have in common, and why it matters for standards development.”

(If you can’t have fun when writing, what’s the point?)

I’m sure you all want to know what on earth THAT conversation was about. It started with the desire to assign vanity titles to friends. One friend was assigned “Intrepid bass-playing sailor cyber warrior” (though that one is possibly still a work in progress). So, of course, I had to ask what my title would be.

She thought something pirate-based. I thought maybe mob boss was more appropriate. But, no: “Nah, you don’t rule through fear. You set rules, and then people come to learn that obeying the rules brings progress while disobeying the rules brings a walk down the plank. Very impersonal, no bloodshed, just terminal disapproval.” Which I read not so much as Pirate as Librarian, and in either case, reminds both of us of what the standards development process is like.

In a way, this builds on a post I wrote a few weeks ago about needing all kinds of people and skills to develop good standards.

A Digital Identity Digest: Pirates, Librarians, and Standards Development (podcast episode, 00:07:50)

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Librarians and pirates: unlikely comparisons

On the surface, librarians and pirates couldn’t be more different. One rules a quiet, organized room full of catalogues and classification systems. The other shouts orders across a storm-tossed deck, treasure map in hand.

But scratch at the stereotypes, and the similarities pop up:

Both guard treasure — knowledge or gold. Both rely on codes that aren’t strictly laws, but that everyone learns to respect. Both lead crews (or patrons) who don’t always agree but who need to move in the same direction. And both know that without discipline, the whole ship — or library — quickly sinks.

Standards development, in its own way, needs a bit of both. Librarians bring order, taxonomies, metadata, and interoperability. Pirates bring the consequences: if you won’t play along with the standard, good luck finding allies or charting your course without a map.

Leadership characteristics

So what’s actually useful, whether you’re wrangling sailors, cataloguing a collection, or chairing a standards meeting?

Ability to engage people so they pay attention. Whether it’s a weary deckhand, a confused student, or a standards group at the two-hour mark, keeping attention is half the battle.

Ability to raise one eyebrow sternly. Every ship, library, or working group needs That Person. The person who has one eyebrow that says: “Are you sure you want to keep going down that path?” Sometimes it’s more effective than three paragraphs of meeting minutes.

Ability to lead people to their own conclusions. Neither pirate captains nor librarians hand you the final answer. The captain points at the map and lets you realize the treasure’s yours to dig up. The librarian nudges you toward the right catalogue entry. In standards, this is the art of facilitation — nudging until consensus emerges.

What doesn’t work

Leading purely through fear. Fear doesn’t build commitment — it drives people away. Pirates who rule by terror end up facing mutiny, and librarians who inspire only dread will find books mysteriously mis-shelved out of spite (I hate it when that happens). In standards, disengagement is fatal: if people only show up to avoid backlash, the work stalls and the draft sinks.

Letting others set the tone of fear. A crew ruled by grudges goes nowhere, and a library ruled by petty turf wars becomes unusable. The same is true in standards: if flame wars and side agendas become scarier than the actual process, people stop showing up; without participation, no standard survives.

Romance, intrigue, and life

Obviously, this is a very romanticized version of a pirate (and of a librarian, for that matter). Real librarians don’t spend their time swashbuckling, and real pirates were often violent criminals (also without the swashbuckling). But when I’m not writing, editing, researching, or running meetings, I’m reading trashy romance novels. Romanticized life in my spare time is my idea of entertainment.

And maybe that’s the point: we bring our own metaphors and stories to how we think about leadership and collaboration. Whether you fancy yourself the stern-eyebrowed librarian or the captain with a plank, the truth is that standards need both. Someone to keep the ship steady, someone to keep the records straight, and all of us learning when to raise an eyebrow at just the right time.

Hopefully, this post made you smile. And if it didn’t, I have a Very Stern Look at the ready for you.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Introduction

00:00:31 Hello and welcome back to A Digital Identity Digest.

00:00:35 Today’s episode comes from a dare. And honestly, if you know me, you’ll understand that’s a very dangerous way to start anything.

The dare was simple: write a post about what librarians and pirate captains have in common and why that matters to standards.

How could I say no to that?

00:00:52 Because let’s be honest—if you can’t have fun with your writing, what’s the point?

Pirates and Librarians: Not So Different

00:00:57 At first glance, pirates and librarians couldn’t be more different.

Pirates live on the high seas, sword in hand, shouting orders across storm-tossed decks. Librarians work in hushed halls, surrounded by catalogs and metadata, raising an eyebrow when needed.

And yet, if you look closely, there’s surprising overlap.

00:01:25 This all started with a conversation about vanity titles—those fun, unofficial roles we give each other.

A friend was dubbed the Intrepid Bass-Playing Cyber Sailor Warrior. Mine was harder: pirate? mob boss? librarian?

00:02:06 The final suggestion landed: I don’t rule through fear—I set rules. And when followed, they bring progress. Ignore them, and… well, it’s a walk down the plank.

That sounded far less like a pirate and far more like a librarian—which is fitting, since I have a degree in library science.

Shared Treasures and Shared Codes

00:02:24 So, what do pirates and librarians actually do?

Pirates guard treasure: gold, jewels, captured loot. Librarians guard knowledge: books, archives, collections, and digital resources.

00:02:42 Both operate according to a code.

Pirates had their Pirate Code—rules about dividing loot, settling disputes, and running the ship. Librarians have cataloging standards, metadata schemas, and classification systems.

00:03:08 Neither set of rules carries the weight of law, but ignoring them leads to chaos.

00:03:19 And both depend on their crews. Pirates don’t sail alone; librarians don’t run libraries without staff, volunteers, and community support.

This is the essence of standards development:

Gathering crews
Establishing codes
Protecting shared treasure (protocols, specifications, best practices)

Ignore the structure, and everything sinks fast.

The Keys to Leadership

00:03:39 So, what makes leadership work—whether on a ship, in a library, or in a standards group?

00:03:53 First: the ability to engage people.

Pirates had to keep their crews motivated. Librarians help people navigate information overload. Standards leaders cut through noise and keep focus.

00:04:02 Second: the power of the raised eyebrow.
Every community has that one look that says: “Are you sure you want to go down that path?” Subtle signals can be powerful leadership tools.

00:04:22 Third: leading people to their own conclusions.

Pirates pointed to treasure maps. Librarians point to catalogs and shelves. Standards leaders facilitate consensus rather than forcing agreement.

What Doesn’t Work

00:04:41 Now, let’s talk about what doesn’t work.

Leading through fear. Fear breeds disengagement. Pirates who ruled by terror faced mutiny. Librarians who ruled by dread found books deliberately mis-shelved. In standards, disengagement kills progress.

Letting others set the tone of fear. If grudges rule the ship, it goes nowhere. If turf wars rule a library, the whole community suffers. If flame wars dominate standards groups, the work halts.

Leaders must set the tone. If fear takes over, participation drops—and without participation, nothing survives.

Romanticizing the Metaphor

00:05:43 If you’ve stayed with me this long, you’re probably either giggling or dismayed.

Yes, this is a romanticized version of pirates and librarians.

Real pirates were often violent criminals. Real librarians are not criminals—and do far more than raise their eyebrows.

00:06:13 But that’s exactly what makes the metaphor fun. We all bring our own stories into how we think about leadership and collaboration.

The Balance We Need

00:06:24 Whether you see yourself as a pirate captain, a librarian, or something in between, the truth is: standards need both.

Someone to keep the ship steady. Someone to keep the record straight. And all of us knowing when to raise that well-timed eyebrow.

00:06:41 This episode was short—part reflection, part fun—but with a reminder: standards are made by people. People with quirks, with stories, and sometimes with pirate hats or card catalogs.

Closing Thoughts

00:06:56 Thanks for listening to A Digital Identity Digest.

If you enjoyed this episode:

Subscribe and share it with someone who needs to know that standards don’t have to be boring.
Connect with me on LinkedIn at @hlflanagan.
Leave a rating or review on Apple Podcasts or wherever you listen.

00:07:14 You can also find the written post at sphericalcowconsulting.com.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Pirates, Librarians, and Standards Development appeared first on Spherical Cow Consulting.


FastID

The Tools Gap: Why Developers Struggle to Code Green

77% of developers want to code sustainably, but most lack the tools to measure impact. Fastly’s survey reveals the barriers and opportunities in green coding.

Monday, 22. September 2025

Anonym

How to use MySudo phone numbers for free international calls


If you love travelling, you know the value of unlimited possibilities. That’s why you’re going to want to travel with MySudo app.

MySudo is the original all-in-one privacy app that lets you protect your identity and your information with second phone numbers, secure email, private browsers, and virtual cards – all wrapped into secure digital profiles called Sudos.

Every MySudo feature is handy for international travel, but it’s using the phone numbers for free international calls that will really save you money while you’re away.

But even if you’re not about to hop on a plane, MySudo is still your go-to for free international calls to family and friends.

Here’s how to use MySudo for free international calls whether you’re travelling overseas or calling loved ones from home:

Overseas traveller

If you’re travelling overseas, MySudo gives you free international calling in a choice of regions and area codes. That means no fees and no need for an international roaming plan. Here’s how to set it up:

1. Download MySudo for iOS or Android.
2. Choose the SudoMax plan for unlimited minutes and messages for up to 9 separate Sudo phone numbers. (Read: What do I get with SudoMax?)
3. Choose a phone number and area code in the region where you want to travel. MySudo numbers are currently available in the US, UK*, and Canada.
4. Call and message anyone for free within the region under your SudoMax plan.
5. Give your Sudo number to locals and they can call you as if it’s a local call (and you can avoid high inbound charges).

As long as you’ve got access to hotel or public Wi-Fi, you can use MySudo for free calls. If you think you’ll sometimes be out of Wi-Fi range, you can get an eSIM or international data roaming plan to use local data; MySudo will work with those too.

Calling loved ones from home

MySudo lets you call anyone anywhere in the world for free so long as the person you’re calling is using MySudo. Calls between users are end-to-end encrypted, so you can talk privately and securely. Here’s how to Invite your friends to MySudo:

1. Tap the menu in the top left corner.
2. Tap Invite your friends.
3. Choose to invite your friends from your device via another app or from your MySudo account.
4. Select the Sudo you want to invite from (if you have more than one Sudo).
5. Follow the prompts.

After you’ve invited a friend, they will receive a link with your MySudo contact information (email, handle and phone number if you have one), which will prompt them to install MySudo. Once they have the app installed, they can instantly start communicating with you. Remember, all video and voice calls, texts and email between MySudo users are end-to-end encrypted.

But wait, there’s more …

7 more facts about MySudo phone numbers

1. MySudo numbers are real, unique, working phone numbers.
2. Each phone number has customizable voicemail, ringtones, and a contacts list. You can also mute notifications and block unwanted callers.
3. MySudo numbers are fully functional for messaging, and voice, video and group calling. Calls and messages with other MySudo users are end-to-end encrypted. Calls and messages out of network are standard.
4. MySudo phone numbers don’t expire. Your phone numbers will auto-renew as long as you maintain your paid plan.
5. Calling with MySudo works like WhatsApp or Signal, but with the privacy advantage that you’re not handing over your real number to sign up.
6. You can manage multiple numbers all in one app (read: How to Get 9 “Second Phone Numbers” on One Device).
7. Under the SudoGo plan, you get 1 included phone number; under SudoPro, 3; and under SudoMax, 9. If you need additional phone number resets, you can purchase them within the app for a small fee. You can always check your plans screen to see how many phone numbers you have remaining before you’ll be prompted to purchase one.

So, to recap how to use MySudo for free international calls:

To make free calls while travelling overseas, choose a Sudo number and area code in your region of travel and get unlimited minutes and messages under the SudoMax plan. Available regions are the United States, United Kingdom*, and Canada.
To make free, end-to-end encrypted calls anywhere in the world, invite your friends to the app.
To call or message regular numbers abroad, use a Sudo number in their region, but sign up for SudoMax so there’s no limit on minutes or messages.

*In order to comply with government and service provider regulations to limit the risk of fraud, users are required to provide their accurate and up-to-date legal identity information before they can obtain UK phone numbers. 
Read: Why are you asking for my personal information when creating a phone number?

Take control and simplify your communication today. Download MySudo.

Before you go, explore the full MySudo suite.

The post How to use MySudo phone numbers for free international calls appeared first on Anonyome Labs.


ComplyCube

ComplyCube Named as an AML Industry Leader in the G2 Fall 2025 Report

ComplyCube has reinforced its Leader status in G2's 2025 Fall Grid Report. The company has achieved recognition for its ease of implementation and ROI in categories including AML, customer onboarding, and biometric authentication. The post ComplyCube Named as an AML Industry Leader in the G2 Fall 2025 Report first appeared on ComplyCube.



uquodo

UAE’s Move Beyond OTPs: Biometric Authorization for Seamless Transactions

The post UAE’s Move Beyond OTPs: Biometric Authorization for Seamless Transactions appeared first on uqudo.

Kin AI

Kinside Scoop 👀 #14

Better customisation, better memory, better Kin

Hey folks 👋

We’ve kept busy working on Kin - it’s been two weeks already!

Read on to hear what we’ve been up to, and reach the end for this edition’s super prompt.

What’s new with Kin 🚀

Smarter characters, easier flow ✏

We’ve cleaned up the home screen, and made it possible to edit advisor characters right from the homepage selector.

This way, you can make sure all the sides of Kin are exactly who you need them to be - not just your own custom prompt.

Advisors that advise 🧙‍♂️

Your advisors are no longer passive chat partners - when they’ve got something to say (like wondering whether you’ve remembered that meeting you usually forget), they’ll reach out to you personally with a push notification.

You’re in control of this: feel it’s too much? You can turn down the frequency in the app. But if you like it? You can turn it up too.

Memory that remembers who matters 🫂

Our next memory update means Kin now does a better job of extracting people from your messages into your Kin’s private database.

Conversations about important folks should feel more accurate and natural now, as Kin remembers more of the important stuff about them.

Help getting what you need 💡

We’ve also added advisor interaction reminders and frequency tracking. Now you can see how often you’ve chatted with each advisor, and set up reminders to make sure you’re talking with each advisor as often as you’d like.

Voice mode 🎙

We’ve heard your thoughts loud and clear: voice mode is a favorite, but more stability and longer usage times are needed.

There was also an issue for Android users with headsets - we’ve dealt with that, so now Kin’s voice mode shouldn’t get so confused by wires.

For everything else, we’re working on improvements to make it feel seamless. More soon.

Other fixes & polish 🛠

Removed emojis from filter types for better readability

Tweaked chat font design for smoother legibility

Fixed the journal voice button floating mid-screen (no more runaway buttons)

Cleaned up chat formatting in general

Further fixes for Android keyboard issues (hopefully the last!)

Fixed Journal title generation, so auto-generated titles should work much better now

Resolved the double user issue, for those that had it!

Your turn 💭

Kin is moving fast. We have big plans to hit by the end of the year - and we want to make sure we arrive at a place you love as much as we do.

So, like we say every time, there are multiple ways to tell us your thoughts about Kin. Good, bad, strange… we want them all!

You can reach out to the KIN team at hello@mykin.ai with anything, from feature feedback to a bit of AI discussion (though support queries will be better helped over at support@mykin.ai).

For something more interactive, the official Kin Discord is still the best place to talk to the Kin development team (as well as other users) about anything AI.

We have dedicated channels discussing the tech behind Kin, networking users, sharing support tips, and for hanging out.

We also run three casual calls every week, and you’re invited:

Monday Accountability Calls - 5pm GMT/BST
Share your plans and goals for the week, and learn tips about how Kin can help keep you on track.

Wednesday Hangout Calls - 5pm GMT/BST
No agenda, just good conversation and a chance to connect with other Kin users.

Friday Kin Q&A - 1pm GMT/BST
Drop in with any questions about Kin (the app or the company) and get live answers in real time.

You’re the centre of this conversation - make sure you take your place. Kin’s for you, not for us.

Finally, you can also share your feedback in-app. Just screenshot to trigger the feedback form!

Our current reads 📚

Article: How people really use AI (Claude vs ChatGPT)
READ - thedeepview.co

Report: Mobile app trends in Denmark
READ - franma.co

Article: Apple launches the iPhone 17 Pro, featuring the new A19 chipset built with running LLMs in mind (making a truly local Kin instance more possible)
READ - Apple

Report: a16z’s app affinity scores for AI users (what other AI apps are users of particular AI most likely to have?)
READ - Olivia Moore via X

This edition’s super prompt 🤖

This time, we’re asking your Kin:

“What kind of support do I best respond to?”

If you have Kin installed and up to date, you can tap the link below (on mobile!) to explore what kinds of support you respond to best.

As a reminder, you can do this on both iOS and Android.

Try prompt in Kin

This is your journey 🚢

Kin always has been and always will be for you as users. We want to build the most useful and supportive AI assistant we can.

So, please: email us, chat in our Discord, or even just shake the app to reach out to us with your thoughts and ideas.

Kin is only what our users make of us.

With love,

The KIN Team


Veracity trust Network

2025 bot trends see rise of Gen-AI continuing


One of the 2025 bot trends which will continue into the future is the use of GenAI-powered technology to spearhead attacks on both private business and critical infrastructure.

This rising trend has been growing at a pace since 2023 and shows no sign of slowing down and, according to many reports, is likely to become an even greater threat.

The post 2025 bot trends see rise of Gen-AI continuing appeared first on Veracity Trust Network.


Okta

Introducing the Okta MCP Server


As AI agents and AI threats proliferate at an unprecedented rate, it becomes imperative to enable them to communicate safely with the backend systems that matter the most.

A Model Context Protocol (MCP) server acts as the bridge between an LLM and an external system. It translates natural language intent into structured API calls, enabling agents to perform tasks like provisioning users, managing groups, or pulling reports, all while respecting the system’s security model. Establishing a universal protocol eliminates the need to build custom integrations. Enterprises can now easily connect their AI agents with Okta’s backend systems to achieve automation of complex chains of activities, quick resolution of issues, and increased performance throughput.
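To make the "bridge" idea concrete, here is a minimal, hypothetical sketch of the translation step an MCP server performs: taking a structured tool call (which an LLM produced from natural language) and mapping it onto a backend API request. The tool names and endpoint paths below are illustrative assumptions, not the Okta MCP Server's actual tool surface.

```python
# Conceptual sketch only: map an agent's tool call to a structured HTTP
# request description. Tool names and routes are hypothetical examples.

def translate_tool_call(tool: str, args: dict) -> dict:
    """Turn a structured tool call into a method + URL pair."""
    routes = {
        "list_users":      ("GET",  "/api/v1/users"),
        "deactivate_user": ("POST", "/api/v1/users/{id}/lifecycle/deactivate"),
        "add_to_group":    ("PUT",  "/api/v1/groups/{groupId}/users/{id}"),
    }
    if tool not in routes:
        raise ValueError(f"Unknown tool: {tool}")
    method, path = routes[tool]
    # Fill path parameters from the arguments the agent supplied
    return {"method": method, "url": path.format(**args)}

# An agent handling "deactivate user 42" would end up emitting:
print(translate_tool_call("deactivate_user", {"id": "42"}))
```

Because the mapping is declarative, the security model stays on the server side: the agent never sees credentials, only tool names and arguments.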

Table of Contents

What the Okta MCP Server brings
Tools and capabilities
Highlights at a glance
Getting started with the Okta MCP Server
Initializing the project
Authentication and authorization
Configuring your client
Using the Okta MCP Server with VS Code
Enable agent mode in GitHub Copilot
Update your VS Code settings
Start the server
Examples in action
Read more about Cross App Access, OAuth 2.0, and securing your applications

What the Okta MCP Server brings

The Okta MCP Server brings this capability to your identity and access management workflows. It connects directly to Okta’s Admin Management APIs, giving your LLM agents the ability to safely automate organization management.

Think of it as unlocking a new interface for Okta, one where you can ask an agent:

“Add this new employee to the engineering group.”
“Generate a report of inactive users in the last 90 days.”
“Deactivate all users who tried to log in within the last 30 minutes.”

Tools and capabilities

In its current form, the server allows the following actions:

User Management: Create, list, retrieve, update, and deactivate users.
Group Management: Create, list, retrieve, update, and delete groups.
Group Operations: View assigned members, view assigned applications, add, and remove users.
System Information: Retrieve Okta system logs.

And many more actions with application and policies APIs as well.

Using the above operations as a base, complex real-life actions can also be performed. For example, you can ask the MCP server to generate a security audit report for the last 30 days and highlight all changes to user and group memberships according to your desired report template.

Highlights at a glance

Flexible Authentication: The server supports both interactive login (via Device Authorization Grant) and fully automated, browserless login (via Private Key JWT). Whether you’re experimenting in development or running a headless agent in production, you can authenticate in the way that fits your workflow.
More Secure Credential Handling: Your authentication details are managed through scoped API access and environment variables, keeping secrets out of code. Tokens are issued only with the permissions you explicitly grant, following least-privilege best practices.
Seamless Integration with Okta APIs: Built on Okta’s official SDK, the server is tightly integrated with Okta’s Admin Management APIs. That means reliable performance, support for a wide range of identity management tasks, and an extensible foundation for adding more endpoints over time.

Getting started with the Okta MCP Server

Now that you know what the Okta MCP server is and why it’s useful, let’s dive into how to set it up and run it. Before you proceed, you will need VS Code, a Python environment (Python 3.9 or above), and uv.

Initializing the project

The Okta MCP server comes packaged for quick setup so you can clone and run it. We use uv (a fast Python package manager) to help ensure your environment is reproducible and lightweight.

Install uv

Clone the repository: git clone https://github.com/okta/okta-mcp-server.git
Install dependencies and set up the project: cd okta-mcp-server && uv sync

At this point, you have a working copy of the server. Next, we’ll connect it to your Okta org.

Authentication and authorization

Every MCP server needs a way to prove its identity and securely access your Okta APIs. We support two authentication modes; your choice depends on your use case.

Option A: Device authorization grant (recommended for interactive use)

This flow is best if you’re running the MCP server locally and want a quick, user-friendly login. After you start the server, it triggers a prompt to log in via your browser. Here, the server exchanges your browser login for a secure token that it can use to communicate with Okta APIs.

Use this if you’re experimenting, developing, or want the simplest way to authenticate.

Before you begin, you’ll need an Okta Integrator Free Plan account. To get one, sign up for an Integrator account. Once you have an account, sign in to your Integrator account. Next, in the Admin Console:

Go to Applications > Applications
Click Create App Integration
Select OIDC - OpenID Connect as the sign-in method
Select Native Application as the application type, then click Next

Enter an app integration name

Configure the redirect URIs:
Redirect URI: com.oktapreview.java-oie-sdk:/callback
Post Logout Redirect URI: http://com.oktapreview.java-oie-sdk/
In the Controlled access section, select the appropriate access level
Click Save

Where are my new app's credentials?

Creating an OIDC Native App manually in the Admin Console configures your Okta Org with the application settings.

After creating the app, you can find the configuration details on the app’s General tab:

Client ID: Found in the Client Credentials section
Issuer: Found in the Issuer URI field for the authorization server that appears by selecting Security > API from the navigation pane.

Issuer: https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta CLI Client or Okta PowerShell Module to automate this process. See this guide for more information about setting up your app.

Note: While creating the app integration, make sure to select the Device Authorization in the Grant type.

Once the app is created, follow these steps:

Grant API scopes (for example: okta.users.read, okta.groups.manage).


Copy the Client ID for later use.

Note: Why “Native App” and not “Service”?
Device Auth is designed for user-driven flows, so it assumes someone is present to open the browser.

Option B: Private key JWT (best for automation, CI/CD, and “headless” environments)

This flow is perfect if your MCP server needs to run without human intervention, for example, inside a CI/CD pipeline or as part of a backend service. Instead of prompting a person to log in, the server authenticates using a cryptographic key pair.

Here’s how it works:

You generate or upload a public/private key pair to Okta. The server uses the private key locally to sign authentication requests. Okta validates the signature against the public key you registered, ensuring that only your authorized server can act on behalf of that client.

Use this if you’re automating, scheduling jobs, or integrating into infrastructure.
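The three steps above follow the standard OAuth 2.0 private_key_jwt pattern: the client signs a short-lived JWT assertion and presents it at the token endpoint. Here is a hedged sketch of what that assertion looks like, using PyJWT with the cryptography package; the org URL and client ID are the example values from this guide, the key ID is hypothetical, and a throwaway RSA key is generated only to keep the sketch runnable (in practice you load the key registered with Okta).

```python
import time
import uuid

import jwt  # PyJWT, with the cryptography package installed
from cryptography.hazmat.primitives.asymmetric import rsa

# Example org URL and client ID from this guide; "my-key-id" is hypothetical.
ORG_URL = "https://dev-133337.okta.com"
CLIENT_ID = "0oab8eb55Kb9jdMIr5d6"
TOKEN_ENDPOINT = f"{ORG_URL}/oauth2/v1/token"

# In practice you load the private key whose public half is registered in
# Okta; here we generate a throwaway RSA key just to keep the sketch runnable.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

now = int(time.time())
claims = {
    "iss": CLIENT_ID,          # issuer and subject are both the client ID
    "sub": CLIENT_ID,
    "aud": TOKEN_ENDPOINT,     # audience is the token endpoint
    "iat": now,
    "exp": now + 300,          # short-lived assertion (5 minutes)
    "jti": str(uuid.uuid4()),  # unique ID so the assertion cannot be replayed
}

assertion = jwt.encode(
    claims, private_key, algorithm="RS256", headers={"kid": "my-key-id"}
)

# The signed assertion is then POSTed to the token endpoint with
# grant_type=client_credentials and
# client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
print(assertion.count("."))  # → 2 (a compact JWS has three dot-separated parts)
```

Okta validates the signature against the registered public key (matched by the kid header), which is why only a holder of the private key can obtain tokens for that client.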

In your Okta org, create a new API Services App Integration.


Under Client Authentication, select Public Key / Private Key.


Add a public key: either generate it in Okta (recommended) and copy it in PEM format, or upload your own keys.


Copy the Client ID and Key ID (KID).


Grant the necessary API scopes (e.g., okta.users.read, okta.groups.manage) and provide Super Administrator access.

Configuring your client

You can use Okta’s MCP server with any MCP-compatible client. Whether running a lightweight desktop agent, experimenting in a local environment, or wiring it into a production workflow, the setup pattern is the same.

For this guide, we’ll walk through the setup in Visual Studio Code with GitHub Copilot - one of the most popular environments for developers. The steps will be similar if you use another client like Claude Desktop or AWS Bedrock.

Using the Okta MCP Server with VS Code

Enable agent mode in GitHub Copilot

The Okta MCP server integrates with VS Code through Copilot’s agent mode.

Install the GitHub Copilot extension.
Open the Copilot Chat view in VS Code.

To enable agent mode, check out the steps in the VS Code docs.

Update your VS Code settings

Next, you’ll tell VS Code how to start and communicate with the Okta MCP server. This is done in your settings.json. You can also create your own mcp.json and set this up.

{ "mcp": { "inputs": [ { "type": "promptString", "description": "Okta Organization URL (e.g., https://trial-123456.okta.com)", "id": "OKTA_ORG_URL" }, { "type": "promptString", "description": "Okta Client ID", "id": "OKTA_CLIENT_ID", "password": true }, { "type": "promptString", "description": "Okta Scopes (separated by whitespace, e.g., 'okta.users.read okta.groups.manage')", "id": "OKTA_SCOPES" }, { "type": "promptString", "description": "Okta Private Key. Required for 'browserless' auth.", "id": "OKTA_PRIVATE_KEY", "password": true }, { "type": "promptString", "description": "Okta Key ID (KID) for the private key. Required for 'browserless' auth.", "id": "OKTA_KEY_ID", "password": true } ], "servers": { "okta-mcp-server": { "command": "uv", "args": [ "run", "--directory", "/path/to/the/okta-mcp-server", "okta-mcp-server" ], "env": { "OKTA_ORG_URL": "${input:OKTA_ORG_URL}", "OKTA_CLIENT_ID": "${input:OKTA_CLIENT_ID}", "OKTA_SCOPES": "${input:OKTA_SCOPES}", "OKTA_PRIVATE_KEY": "${input:OKTA_PRIVATE_KEY}", "OKTA_KEY_ID": "${input:OKTA_KEY_ID}" } } } } }

Running the server for the first time prompts you to enter the following information:

Okta Organization URL: Your Okta tenant URL.
Okta Client ID: The client ID of the application you created in your Okta organization.
Okta Scopes: The scopes you want to grant to the application, separated by spaces. For example: "OKTA_SCOPES": "${input:OKTA_SCOPES = okta.users.read okta.users.manage okta.groups.read okta.groups.manage okta.logs.read okta.policies.read okta.policies.manage okta.apps.read okta.apps.manage}"

Note: Add scopes only for the APIs that you will be using.

Okta Private Key and Key ID: You only need to enter this key when using browserless authentication. If you’re not using that method, just press Enter to skip this step and use the Device Authorization flow instead.

Start the server

When you open VS Code, you’ll now see okta-mcp-server as an option to start.

Click Start to launch the server defined in your mcp.json file.

The server will check your authentication method:

If using Device Authorization, it triggers a prompt to log in via your browser.

If using Private Key JWT, it will authenticate silently using your key.

Once connected, Copilot will automatically recognize the Okta commands you can use.

At this point, the MCP server has established a connection between VS Code and your Okta organization. You can now manage your organization using natural language commands directly in your editor.

Examples in action

1. Listing Users

2. Creating Users

3. Group Assignment

4. Creating an Audit Report

We invite you to try out our MCP server and experience the future of identity and access management. Meet us at Oktane, and if you run into issues, please open an issue in our GitHub repository.

Read more about Cross App Access, OAuth 2.0, and securing your applications Integrate Your Enterprise AI Tools with Cross App Access Build Secure Agent-to-App Connections with Cross App Access (XAA) OAuth 2.0 and OpenID Connect overview Why You Should Migrate to OAuth 2.0 From Static API Tokens How to Secure the SaaS Apps of the Future

Follow us on LinkedIn, Twitter, and subscribe to our YouTube channel for more developer content. If you have any questions, please leave a comment below!

Sunday, 21. September 2025

Rohingya Project

Rohingya Project Launches R-Coin Presale on PinkSale, Powering Blockchain Ecosystem for Stateless Rohingya

The Rohingya Project today announced the launch of its R-Coin token presale on the PinkSale launchpad, inviting impact-driven and crypto-savvy investors to support an innovative social-impact initiative. R-Coin (RCO) is the native token of the project’s SYNU Platform, a blockchain-based network designed to empower over 3.5 million stateless Rohingya refugees worldwide. By participating in the […]

Saturday, 20. September 2025

Recognito Vision

Everything You Need to Know About Face Recognition Systems

Facial recognition is no longer just a sci-fi plot twist. It is now a part of daily life, from unlocking smartphones to airport security checks. A face recognition system uses advanced algorithms to scan, analyze, and verify identities in seconds. Businesses, schools, and governments are rapidly adopting it, but it’s worth digging deeper into how it works, its benefits, and what challenges still exist.

 

Facial Recognition System

At its core, a facial recognition system relies on biometric technology. It captures a person’s facial features, converts them into a digital template, and compares that data with stored profiles to confirm identity. Unlike fingerprints or ID cards, you don’t need to touch anything. Just look at the camera, and the system does the rest.

This technology uses complex neural networks trained on thousands of images. The system maps out key points like the distance between eyes, nose shape, and jawline. The result is a unique faceprint that is nearly impossible to duplicate. Accuracy levels are improving quickly thanks to evaluations like the NIST Face Recognition Vendor Test, which tracks the performance of leading algorithms worldwide.

 

How Face Recognition Technology Works

Understanding the process makes it clear why it is so widely trusted. Here’s a simple breakdown:

Image Capture – A camera captures a person’s face in real time.

Face Detection – The system locates the face in the image and isolates it from the background.

Feature Extraction – Algorithms analyze facial features such as cheekbones, chin curves, and lip contours.

Template Creation – The extracted data is turned into a digital faceprint.

Comparison and Match – The faceprint is compared with existing records to confirm identity.

Accuracy rates are consistently improving. According to NIST FRVT 1:1 testing, leading systems now achieve over 99% verification success under ideal conditions.
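Step 5 of the breakdown above (comparison and match) usually means comparing template vectors by a similarity measure and applying a decision threshold. The sketch below is illustrative only: real systems use high-dimensional deep-network embeddings, and the 3-dimensional vectors, identities, and threshold here are made up.

```python
import math

# Illustrative sketch only: matching a probe "faceprint" against enrolled
# templates by cosine similarity. The vectors and threshold are made up.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def match(probe, threshold=0.95):
    """Return the best-matching identity, or None if below the threshold."""
    best = max(enrolled, key=lambda name: cosine(probe, enrolled[name]))
    return best if cosine(probe, enrolled[best]) >= threshold else None

print(match([0.88, 0.12, 0.31]))  # → alice
print(match([0.1, 0.1, 0.9]))     # no enrolled template is close enough
```

The threshold is the lever behind the verification statistics quoted above: raising it trades false accepts for false rejects.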

 

Face Anti-Spoofing and Its Role in Security

Every great lock needs a strong defense. This is where face anti spoofing comes in. Without it, someone could trick the system using a photo, video, or even a 3D mask. Spoofing attempts are surprisingly common in fraud-heavy industries like finance.

Modern systems fight this using liveness detection. The camera checks for natural movements such as blinking, skin texture changes, and depth. Some solutions even shine light on the face and measure reflections to confirm the presence of a real person. These layers of defense ensure that recognition remains both fast and secure.

 

Face Recognition Attendance System

Schools, offices, and even factories are adopting a face recognition attendance system. No more long queues at biometric scanners or manual sign-in sheets. Employees just walk in, glance at a camera, and their presence is automatically logged.

The benefits are clear:

No contact required which keeps it hygienic.

Faster processing compared to manual punching.

Reduced buddy punching where one employee marks attendance for another.

Accurate reporting that syncs directly with payroll systems.

Organizations save time and prevent fraud while employees enjoy a hassle-free experience.

 

Face Scanning Attendance System in Education

Schools and universities are also experimenting with a face scanning attendance system. Teachers can focus on teaching instead of wasting class time marking attendance. Parents get real-time updates if their child is present, while administrators gain detailed records for compliance.

Though promising, it does raise questions about student privacy. Educational institutes must handle such systems responsibly and align with global data protection standards like GDPR.

 

Benefits of Face Recognition in Real-World Applications

Let’s talk numbers and impact. The global facial recognition market is projected to reach over $16 billion by 2030. Here’s why it’s growing so fast:

Security – Airports use it to screen passengers quickly.

Fraud Prevention – Banks use it to stop identity theft.

Convenience – Smartphones unlock instantly with a glance.

Efficiency – Attendance and access control become effortless.

Quick Fact Table:

Application   Benefit                     Example Use Case
Banking       Stops account fraud         Mobile banking logins
Airports      Speeds up security checks   Passport verification
Education     Saves teaching time         Student attendance
Workplace     Prevents time theft         Employee attendance tracking

 

Privacy and Ethical Concerns

As powerful as the technology is, it sparks serious debates. Who owns the face data? How securely is it stored? What if it gets misused? Regulations are starting to catch up. In Europe, GDPR rules require companies to get clear consent before storing or using biometric data.

Transparency and user control are key. People need to know how their face data is being used and have the right to opt out. Striking a balance between security and privacy remains one of the biggest challenges for the industry.

 

Case Studies: Where It Works Best

Airports – The U.S. Customs and Border Protection agency reported that facial recognition has caught thousands of identity fraud attempts since its rollout.

Corporate Offices – Large firms in Asia have reduced payroll fraud by adopting face-based attendance.

Healthcare – Hospitals use it to secure patient data and restrict access to sensitive areas.

These case studies highlight how versatile and impactful the technology can be when used responsibly.

 

The Future of Face Recognition

Imagine walking into a store, picking items, and leaving without waiting in line. Payment is automatically processed after the system confirms your face. This futuristic scenario is closer than you think. Retailers are already piloting systems where face recognition replaces credit cards.

At the same time, research is focusing on reducing bias. Early systems struggled with accuracy across different ethnicities. Today, continuous improvements are making recognition fairer and more reliable. Open-source contributions on platforms like GitHub are accelerating innovation by giving developers direct access to tools and data.

 

Conclusion

A face recognition system is more than just a tech buzzword. It is reshaping industries by offering speed, security, and convenience. From attendance tracking to fraud prevention, its applications are only expanding. But with great power comes great responsibility, and balancing innovation with privacy will decide how widely it gets adopted in the future. For organizations exploring the technology, brands like Recognito are paving the way with practical, secure, and developer-friendly solutions.

Friday, 19. September 2025

Shyft Network

Middle East Crypto in 2025: From Wild Experiments to Ironclad Rules

The Middle East’s crypto scene is no longer a playground for bold experiments. By September 2025, the region is laying down the law, transforming from a sandbox of ideas into a powerhouse of regulated innovation. Dubai’s regulators are cracking the whip, Bahrain’s rolling out bold new laws, and the UAE’s dirham is staking its claim as the backbone of digital payments. This isn’t just a shift — it’s a seismic leap toward a future where compliance fuels growth. Let’s dive into the forces reshaping the region’s crypto landscape.

Dubai: Where Stablecoins Meet Serious Oversight

Dubai’s Virtual Assets Regulatory Authority (VARA) isn’t messing around. Gone are the days of loose guidelines and “let’s see what sticks.” VARA’s 2025 rulebook is a masterclass in clarity, dictating how stablecoins (Fiat-Referenced Virtual Assets) and tokenized real-world assets (RWAs) must be issued, backed, and disclosed. Want to launch a stablecoin or tokenize a skyscraper? You’d better have your paperwork in order.

The real game-changer? Enforcement. VARA recently slapped a fine on a licensed firm, sending a crystal-clear message: licenses aren’t just badges of honor — they’re contracts with accountability. Dubai’s saying loud and clear: innovate, but play by our rules. This isn’t just regulation; it’s a blueprint for trust in a digital age.

Abu Dhabi: The Institutional Crypto Haven

While Dubai swings the regulatory hammer, Abu Dhabi Global Market (ADGM) is crafting a different narrative. Its Financial Services Regulatory Authority (FSRA) has fine-tuned its crypto framework to welcome institutional heavyweights. From custody to payment services, ADGM’s rules for fiat-referenced tokens are a magnet for serious players. Yet, privacy tokens and algorithmic stablecoins? Still persona non grata.

ADGM’s approach is a tightrope walk: embrace cutting-edge innovation while ensuring every move can withstand the scrutiny of global finance. It’s less about flashy pilots and more about building a crypto hub that lasts.

UAE’s Central Bank: Dirham Takes the Digital Crown

The Central Bank of the UAE (CBUAE) is drawing a line in the sand. As of September 2025, only dirham-pegged stablecoins can power onshore payments. Foreign tokens? Relegated to niche corners. This isn’t just policy — it’s a bold bet on the dirham as the anchor of the UAE’s digital economy. By prioritizing local currency, the CBUAE is ensuring the UAE doesn’t just participate in the crypto revolution — it leads it.

Dubai’s Real Estate Revolution: Tokenization Goes Big

Remember when Dubai’s tokenized real estate pilots were just a cool idea? Those days are gone. Recent sales, run with the Dubai Land Department, vanished in minutes, pulling in investors from every corner of the globe. The DIFC PropTech Hub is doubling down, turning these pilots into a full-blown movement. Tokenized property isn’t a gimmick anymore — it’s a market poised to redefine how we invest in real estate.

Bahrain and Beyond: The GCC’s Crypto Patchwork

Bahrain’s not sitting on the sidelines. Its new laws for Bitcoin and stablecoins are designed to make trading safer and more attractive to institutions. Meanwhile, Kuwait and Qatar are playing it cautious, keeping their crypto gates tightly shut. The GCC isn’t moving in unison, but the UAE and Bahrain are sprinting ahead, setting the pace for a region-wide crypto renaissance.

The Privacy Puzzle: Navigating the FATF Travel Rule

Behind the headlines lies a thornier challenge: the FATF Travel Rule. Virtual Asset Service Providers (VASPs) now have to share user data across borders, stirring up privacy and operational headaches. Enter Shyft Veriscope, a peer-to-peer platform that lets firms comply without exposing sensitive customer data to centralized risks. In a region obsessed with trust and growth, tools like these are the unsung heroes of crypto’s next chapter.

Why 2025 Is the Year to Watch

The Middle East isn’t just dabbling in crypto anymore — it’s rewriting the rules of the game. From dirham-backed stablecoins to tokenized skyscrapers, the region is building a digital asset economy where compliance isn’t a burden but a springboard. For founders, investors, and innovators, the message is clear: get on board, align with the rules, and seize the opportunity to shape a future where crypto isn’t just a buzzword — it’s a legacy.

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Book your consultation: https://calendly.com/tomas-shyft or email: bd@shyft.network


iComply Investor Services Inc.

KYB Compliance Software for Regulated Entities: Navigating Global AML Shifts

KYB requirements are tightening worldwide. This guide helps regulated firms navigate evolving AML expectations and shows how iComply streamlines compliance with secure, scalable software.

Regulated entities – including PSPs, VASPs, investment platforms, and trust companies – must meet rising KYB and AML expectations. This article highlights emerging requirements across the UAE, UK, EU, Singapore, and U.S.

Regulated entities operate in complex environments where KYB and AML compliance are non-negotiable. Whether your firm is a payment service provider (PSP), virtual asset service provider (VASP), investment platform, corporate services provider, real estate agency, or mortgage brokerage, regulators are tightening standards.

In 2025 and beyond, firms must demonstrate robust KYB controls, real-time screening, and jurisdictional audit readiness – especially as rules evolve in key markets like the UK, UAE, and EU.

Emerging Global AML Requirements for Regulated Entities

United Kingdom
Regulators: Companies House, FCA
Shifts: Mandatory KYB and identity verification for directors and PSCs; AML registration and sanctions screening under MLR 2017

United Arab Emirates
Regulators: CBUAE, DFSA, VARA, ADGM
Requirements: Risk-based onboarding, KYB for corporate clients, Travel Rule compliance, UBO discovery, and localized data handling

European Union
Regulators: AMLA (in development), national competent authorities
Shifts: 6AMLD mandates KYB, UBO transparency, risk scoring, and centralized reporting; MiCA introduces crypto-specific controls

Singapore
Regulator: MAS
Requirements: CDD/EDD obligations, sanctions list monitoring, transaction screening, and UBO tracking for regulated businesses

United States
Regulators: FinCEN, SEC, CFTC, state agencies
Shifts: BOI reporting under the Corporate Transparency Act; mandatory KYB and AML controls for regulated financial service providers

Compliance Challenges for Regulated Entities

1. Overlapping Regulatory Bodies
Firms often face scrutiny from sector-specific and national agencies.

2. Diverging Standards
KYB requirements vary across regions, and privacy rules complicate data handling.

3. High-Risk Clients and Transactions
Cross-border payments and digital assets raise red flags.

4. Legacy Compliance Systems
Siloed tools delay onboarding and lack real-time visibility.

iComply: Leading KYB Compliance Software for Global Entities

iComply enables regulated firms to standardize and scale AML workflows across jurisdictions with modular tools and built-in localization.
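One of the workflows such software automates is rule-based client risk scoring across jurisdictions. The toy sketch below illustrates the general shape of that technique only; all factors, weights, thresholds, and country codes are hypothetical and are not iComply's actual scoring model.

```python
# Toy sketch of rule-based client risk scoring of the kind KYB/KYT tools
# automate. All factors, weights, and thresholds here are hypothetical.
HIGH_RISK_JURISDICTIONS = {"XX", "YY"}  # placeholder country codes

def risk_score(client: dict) -> int:
    score = 0
    if client["type"] == "VASP":
        score += 30                      # digital-asset businesses score higher
    if client["jurisdiction"] in HIGH_RISK_JURISDICTIONS:
        score += 40
    if client["monthly_volume_usd"] > 1_000_000:
        score += 20
    if client.get("ubo_unverified"):
        score += 25                      # unresolved beneficial ownership
    return score

def risk_band(score: int) -> str:
    return "high" if score >= 60 else "medium" if score >= 30 else "low"

client = {
    "type": "VASP",
    "jurisdiction": "XX",
    "monthly_volume_usd": 2_500_000,
    "ubo_unverified": False,
}
print(risk_band(risk_score(client)))  # 30 + 40 + 20 = 90 → high
```

In production systems the score feeds escalation and audit-logging rules, so every band assignment is traceable for regulators.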

1. KYB + KYC Automation
Verify entities and individuals using real-time registry, document, and biometric checks
Visualize UBO networks and flag nominee ownership
Encrypted edge processing for global data privacy compliance

2. KYT + Risk Monitoring
Monitor transactions for suspicious patterns or volume anomalies
Score risk based on client type, geography, and transaction behaviour
Trigger escalations and audit-logged alerts automatically

3. Centralized Case Management
Unify screening, onboarding, and regulatory review workflows
Track every decision, flag, and escalation in one dashboard
Export formatted reports for FinCEN, FCA, AMLA, and MAS

4. Deployment + Localization
Deploy on-prem, in private cloud, or across multiple regions
Jurisdiction-specific policies, thresholds, and audit trails
Seamless integration with banking, CRM, and identity tools

Case Insight: DIFC-Based Corporate Services Firm

A UAE-regulated corporate services firm implemented iComply’s KYB software to unify compliance across business clients:

Cut onboarding time by 70% Automated UBO and sanctions monitoring Passed DFSA audit with zero deficiencies

As KYB expectations evolve globally, regulated entities must modernize fast. iComply’s compliance software simplifies onboarding, standardizes audit preparation, and supports confident cross-border operations.

Talk to iComply to see how our KYB compliance software helps PSPs, VASPs, and financial institutions stay compliant—no matter where they operate.


BlueSky

Building Healthier Social Media: Updated Guidelines and New Features

Public discourse on social media has grown toxic and divisive, but unlike other platforms, Bluesky is building a social web that empowers people instead of exploiting them.

Public discourse on social media has grown toxic and divisive. Traditional social platforms drive polarization and outrage because they feed users content through a single, centralized algorithm that is optimized for ad revenue and engagement. Unlike those platforms, Bluesky is building a social web that empowers people instead of exploiting them.

Bluesky started as a project within Twitter in 2019 to reimagine social from the ground up — to be an example of “bluesky” thinking that could reinvent how social worked. With the goal of building a healthier, less toxic social media ecosystem, we spun out as a public benefit corporation in 2022 to develop technologies for open and decentralized conversation. We built Authenticated Transfer so Twitter could interoperate with other social platforms, but when Twitter decided not to use it, we built an app to showcase the protocol.

When we built the app, we first gave users control over their feed: In the Bluesky app, users have algorithmic choice — you can choose from a marketplace of over 100k algorithms, built by other users, giving you full control over what you see. There is also stackable moderation, allowing people to spin up independent moderation services, and giving users a choice in what moderation middleware they subscribe to. And of course there is the open protocol, which lets you migrate between apps with your data and identity, creating a social ecosystem with full data portability. Just today, we announced that we are taking the next step in decentralization.

Although we focused on building these solutions to empower users, we still inherited many of the problems of traditional social platforms. We’ve seen how harassment, vitriol, and bad-faith behavior can degrade overall conversation quality. But innovating on how social works is in our DNA. We’ve been continuously working towards creating healthier conversations. The quote-post used to let harassers take a post out of context, so we gave users the ability to disable them. The reply section often filled up with unwanted replies, so we gave users the ability to control their interaction settings.

Our upcoming product changes are designed to strengthen the quality of discourse on the network, give communities more customized spaces for conversation, and improve the average user’s experience. One of the features we are workshopping is a “zen mode” that sets new defaults for how you experience the network and interact with people. Another is including prompts for how to engage in more constructive conversations. We see this as part of our goal to make social more authentic, informative, and human again.

We’ve also been working on a new version of our Community Guidelines for over six months, and in the process of updating them, we’ve asked for community feedback. We looked at all of the feedback you gave and incorporated some of your suggestions into the new version. Most significantly, we added details so everyone understands what we do and do not allow. We also better organized the rules by putting them into categories. We chose an approach that respects the human rights and fundamental freedoms outlined in the UN Guiding Principles on Business and Human Rights. The new Guidelines take effect on October 15.

In the meantime, we’re going to adjust how we enforce our moderation policies to better cultivate a space for healthy conversations. Posts that degrade the quality of conversations and violate our guidelines are a small percentage of the network, but they draw a lot of attention and negatively impact the community. Going forward, we will more quickly escalate enforcement actions towards account restrictions. We will also be making product changes that clarify when content is likely to violate our community guidelines.

We were built to reimagine social from the ground up by opening up the freedom to experiment and letting users choose. Social media has been dominated by a few platforms that have closed off their social graph and squashed competition, leaving users few alternatives. Bluesky is the first platform in a decade to challenge these incumbents. Every day, more people set up small businesses and create new apps and feeds on the protocol. We are continuing to invest in the broader protocol ecosystem, laying a foundation for the next generation of social media developers to build upon.

Today’s Community Guidelines Updates

In January, we started down the path of updating our rules. Part of that process was to ask for your thoughts on our updated Community Guidelines. More than 14,000 of you shared feedback, suggestions, and examples of how these rules might affect your communities. We especially heard from community members who shared concerns about how the guidelines could impact creative expression and traditionally marginalized voices.

After considering this feedback, and in a return to our experimental roots, we are going to bring a greater focus to encouraging constructive dialogue and enforcing our rules against harassment and toxic content. For starters, we are going to increase our enforcement efforts. Here is more information about our updated Community Guidelines.

What Changed Based on Your Feedback

Better Structure: We organized individual policies according to our four principles – Safety First, Respect Others, Be Authentic, and Follow the Rules. Each section now better explains what's not allowed and consolidates related policies that were previously scattered across different sections.

More Specific Language: Where you told us terms were too vague or confusing, we added more detail about what these policies cover.

Protected Expression: We added a new section for journalism, education, advocacy, and mental health content that aims to reduce uncertainty about enforcement in those areas.

Our Approach: Foundation and Choice

We maintain baseline protections against serious harms like violence, exploitation, and fraud. These foundational Community Guidelines are designed to keep Bluesky safe for everyone.

Within these protections, our architecture lets communities layer on different labeling services and moderation tools that reflect their specific values. This gives users choice and control while maintaining essential safety standards.

People will always disagree about whether baseline policies should be tighter or more flexible. Our goal is to provide more detail about where we draw these boundaries. Our approach respects human rights and fundamental freedoms as outlined in the UN Guiding Principles on Business and Human Rights, while recognizing we must follow laws in different jurisdictions.

Looking Forward

Adding clarity to our Guidelines and improving our enforcement efforts is just the beginning. We also plan to experiment with changes to the app that will improve the quality of your experience by reducing rage bait and toxicity. We may not get it right with every experiment but we will continue to stay true to our purpose and to listen to our community as we go.

These updated guidelines take effect on October 15, and will continue to evolve as we learn from implementation and feedback. Thank you for sharing your perspectives and helping us build better policies for our community.

Thursday, 18. September 2025

LISNR

How Mobility Leaders Turn Idle Ride Time into Opportunity


Mobility leaders across the globe are searching for a constant communication channel with their end customers. For transit leaders, there are three main touchpoints with their end consumers: Ticketing (Boarding), In-Transit, and Exit (Disembarkation). Most mobility leaders perfect one of the three, leaving possible revenue channels and ideal rider experiences on the table.

What communication channel can be capitalized across all three consumer journey touchpoints within mobility?

The Problem: Current proximity modalities are limited by one of the following: distance, throughput, hardware limitations, and interoperability.

The Solution: LISNR Radius offers a unique proximity modality that changes the way consumers interact throughout the rider journey. Our Radius SDK relies on ultrasonic communication between standard speakers (already installed in transit vehicles and stations) and microphones found in everyday devices like smartphones. By establishing a communication channel directly between the consumer device and the vehicle or station, transit operators can reduce wait times, improve accessibility, capitalize on idle time in transit, and segment their riders for variable pricing. 
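The Radius SDK itself is proprietary, so as an illustration of the general data-over-sound idea only, here is a minimal sketch of mapping bytes onto near-ultrasonic carrier frequencies. The frequency plan, symbol size, and function names are all invented for this example and are not LISNR's actual protocol.

```python
# Illustrative sketch only: the real Radius protocol is proprietary.
# This maps bytes to near-ultrasonic carriers (roughly 18.5-19.5 kHz),
# the general idea behind data-over-sound. All constants are assumptions.

BASE_HZ = 18_500.0   # assumed bottom of the near-ultrasonic band
STEP_HZ = 62.5       # assumed spacing between the 16 symbol frequencies

def byte_to_symbols(b: int) -> list[float]:
    """Split one byte into two 4-bit symbols, each mapped to a carrier."""
    hi, lo = b >> 4, b & 0x0F
    return [BASE_HZ + hi * STEP_HZ, BASE_HZ + lo * STEP_HZ]

def encode(payload: bytes) -> list[float]:
    """Return the sequence of carrier frequencies for a payload."""
    tones: list[float] = []
    for b in payload:
        tones.extend(byte_to_symbols(b))
    return tones

print(encode(b"\x0a"))  # two carriers for one byte: [18500.0, 19125.0]
```

A real implementation would synthesize each carrier as an audio tone through the speaker and recover it on the receiving microphone with frequency analysis; the point here is only that standard audio hardware can carry small payloads above audible frequencies.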

Furthermore, LISNR offers Quest, our loyalty and gamification portal, which allows mobility leaders to keep a unified record of key customer interactions. With Quest, mobility leaders can incentivize off-peak rides and partner with nearby shops to offer advertisements directly to a rider in transit.

The Proliferation of LISNR-Enabled Digital Touchpoints in Mobility

LISNR empowers businesses to capitalize on the digital touchpoints found in everyday transit experiences. By enabling the delivery of speedy ticketing and personalized offers directly to consumers’ devices, transit operators can engage their riders during all three stages of transit.

Ticketing

Legacy ticketing infrastructure creates long queues, is easy to bypass, and simply doesn’t work without a stable internet connection. Radius redefines this process with our ultrasonic SDK by working at longer ranges than NFC, with more pinpoint precision than BLE, and without a network connection at the time of transaction. Radius is already gaining major traction as a ticketing alternative in the mobility space. With our recent partnership with S-Cube, LISNR has expanded to provide a mass ticketing solution to the busiest transit stations in India.

S-Cube needed a faster and more secure way to enable ticketing for millions of riders. Moreover, S-Cube needed ticketing technology that could perform without a reliable network connection. Radius was able to achieve all of these and more. In testing, S-Cube saw a dramatic increase in rider throughput by switching from QR codes to Radius for ticketed gate access. They moved from processing 35 riders per minute to 60 riders per minute, representing an over 70% improvement.
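The quoted improvement can be checked directly from the two throughput figures:

```python
# Quick check of the throughput figure quoted above.
qr_rate, radius_rate = 35, 60   # riders per minute, QR vs. Radius
improvement = (radius_rate - qr_rate) / qr_rate
print(f"{improvement:.1%}")     # 71.4%, i.e. "over 70%" as stated
```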

 

S-Cube uses a Zone 66 broadcast at entry, allowing consumers to identify themselves and validate their ticket as they approach the turnstile. Once at the turnstile, consumers broadcast their account-based ticket information to the ticketing machine (Point1000 on Channel 0 from their device’s speaker). Since they have already been identified and validated, passengers can breeze through the ticketing process.
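The two-step gate flow described above can be sketched as follows. This is a hypothetical model for illustration only: the class and method names are invented, and the real Radius/S-Cube integration is not public.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-step gate flow: validate on approach
# (long-range Zone tone), then open the gate on a short-range broadcast.

@dataclass
class Session:
    account_id: str
    valid: bool

class Backend:
    def __init__(self, valid_accounts: set[str]):
        self.valid_accounts = valid_accounts

    def validate_ticket(self, account_id: str) -> Session:
        # Step 1: triggered while the rider approaches the turnstile.
        return Session(account_id, account_id in self.valid_accounts)

class Gate:
    def __init__(self):
        self.opened_for: list[str] = []

    def present(self, session: Session) -> bool:
        # Step 2: the device broadcasts its pre-validated session at the
        # turnstile, so the gate only checks a flag instead of doing a
        # network round trip. This keeps per-rider gate time short.
        if session.valid:
            self.opened_for.append(session.account_id)
        return session.valid

backend = Backend({"rider-42"})
gate = Gate()
session = backend.validate_ticket("rider-42")  # while approaching
print(gate.present(session))                   # at the turnstile -> True
```

Moving validation off the gate’s critical path is what allows the throughput gains reported in testing: by the time a rider reaches the turnstile, the expensive work is already done.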

In-Transit Promotion

In-transit promotions are not a new concept, with buses and trains already filled with billboard-like advertising. More recently, rideshare applications have started showing ad space on the home page and key active pages. Unfortunately, these advertisements often go unnoticed, are rarely relevant to the end customer, and for rideshares, are only presented to the paying device. LISNR solves these problems with Radius and Quest.

Using Radius, transit operators can capitalize on idle time in transit by sending promotional offerings directly to all consumers’ devices that are present in the vehicle. For example, businesses at certain stops can target specific riders based on their commute patterns. Furthermore, food/grocery delivery platforms can focus on a tired passenger coming home from work.

By establishing this additional communication channel to their riders, the transit operator can send promotional messages from their partners directly to the most important audiences. By communicating at the device level, promotional offerings can be sent in line with the preferences of the end customer. Radius’s ultrasonic SDK operates above audible frequencies, meaning that even in noisy conditions, riders are still able to receive their promotions.

By incorporating Quest, transit operators (or their marketing partners) can keep a unified record of customers and the promotions they interact with. Over time, this leads to more relevant promotions and a better experience for marketers and riders alike. With Quest and Radius, transit operators can capitalize on riders’ idle time in transit while establishing a positive connection with them.

Image: Radius tone being broadcast at a frequency higher than human hearing
Image: Example of Quest, gamified loyalty for a mobility ecosystem leader

Identify the Exit Point

In some modes of transit (planes, rideshares) it is easy to identify when the consumer exits the vehicle; however, most modes of public transit are left in the dark. This lack of visibility into rider disembarkation makes certain variable pricing nearly impossible. With Radius, transit operators can leverage the rider’s microphone when in-app to detect and confirm the presence of the device. With Radius enabled, mobility operators can begin to charge based on a “Be-In-Be-Out” pricing model. These seamless transit experiences are gaining traction, with the global contactless transit market projected to grow to $33.5B by 2030 (CAGR ~15%). This major shift is driven by account-based ticketing and distance/usage-based fares (Source: Allied Market Research, 2023).

LISNR is here to enable transit ecosystem leaders with the technology to support a near-frictionless be-in/be-out user flow for consumers. Our long-range (Zone) ultrasonic tones can broadcast in-vehicle to detect the presence of devices. As riders exit the vehicle, the tones will no longer be detected and the app backend will end the variable pricing model for their trip.
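The be-in/be-out flow amounts to presence tracking with a grace period: the trip stays open while the in-vehicle tone is heard and closes once detections stop. Here is a toy model of that logic; the class, the grace period, and the fare calculation are all invented for illustration and are not LISNR's SDK.

```python
GRACE_SECONDS = 30.0  # assumed grace period before a rider counts as off-board

class BeInBeOutTrip:
    """Toy model of be-in/be-out fare tracking driven by tone detections.

    Each time the app hears the in-vehicle ultrasonic tone it reports a
    detection; once detections stop for GRACE_SECONDS, the trip closes
    and the fare is settled for the elapsed time.
    """

    def __init__(self, start: float):
        self.start = start        # trip start, seconds (monotonic clock)
        self.last_heard = start   # timestamp of the last tone detection
        self.ended_at = None      # set when the trip closes

    def on_tone_detected(self, now: float) -> None:
        if self.ended_at is None:
            self.last_heard = now

    def tick(self, now: float) -> None:
        """Called periodically; closes the trip after the grace period."""
        if self.ended_at is None and now - self.last_heard > GRACE_SECONDS:
            self.ended_at = self.last_heard  # charge up to the last detection

    def fare(self, rate_per_minute: float):
        if self.ended_at is None:
            return None  # still riding
        return (self.ended_at - self.start) / 60 * rate_per_minute

trip = BeInBeOutTrip(start=0.0)
trip.on_tone_detected(600.0)  # last tone heard 10 minutes into the ride
trip.tick(700.0)              # grace period elapsed, trip closes
print(trip.fare(0.10))        # 10 minutes at 0.10/min -> 1.0
```

The grace period matters in practice: short dropouts (a pocketed phone, a noisy stop) should not end a trip, so the trip is only closed once the tone has been absent well beyond normal detection gaps, and the fare is backdated to the last confirmed detection.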

Conclusion

LISNR’s contactless solutions help support the mobility and transit ecosystems across all major digital touchpoints in the consumer journey, from ticketing to exit. With these contactless touchpoints optimized for speed and security, ecosystem leaders can capitalize on variable pricing and answer the growing demand for frictionless experiences; all while establishing new revenue streams with in-transit promotions. 

Our customer loyalty and gamification portal, Quest, can support and optimize consumer touchpoints across the journey. Riders can be incentivized to travel in off-peak hours, receive bonuses for promotions that they convert on, and be rewarded for their achievements such as lifetime rides.

We’ve put together a comprehensive PDF that outlines where LISNR outperforms other contactless technologies commonly found in mobility. If you’re interested in learning more or sharing with a colleague, please feel free to download a copy below.

We’ve created an easily digestible overview of this process, highlighting the digital touchpoints for your passengers. Fill out your contact information below to download a digital copy.

The post How Mobility Leaders Turn Idle Ride Time into Opportunity appeared first on LISNR.


IDnow

Why compliance and player protection define iGaming in Germany – DGGS’s CEO Florian Werner on leading the way.

We spoke with Florian Werner, CEO of Deutsche Gesellschaft für Glücksspiel (DGGS), the operator behind the JackpotPiraten and BingBong brands, to understand how strict regulatory requirements under the Interstate Treaty on Gambling (GlüStV 2021) are shaping iGaming in Germany.

From the country’s strong focus on player protection to navigating compliance challenges, Werner explains how DGGS balances regulation with player experience – and why trusted partners like IDnow are essential for building a sustainable, responsible iGaming market.

As one of Germany’s earliest licensed iGaming operators, DGGS has taken on both the pride and responsibility of setting industry standards. With its JackpotPiraten and BingBong brands, the company is committed to combining entertainment with strong compliance and social responsibility. In this interview, CEO Florian Werner shares how DGGS works with regulators, leverages technology to protect players and adapts to the challenges of one of Europe’s most tightly regulated markets. 

Why being first in Germany came with pride – and responsibility

In 2022, DGGS’ JackpotPiraten and BingBong became the first brands to receive a national slot licence from the German regulator, GGL. What did that milestone mean for you as an operator – especially in terms of your responsibility to lead in compliance and player protection?

We were delighted and proud to be the first operator to meet the necessary requirements for entering the German market. At the same time, we are fully aware of the responsibility this entails. That is why we are committed to acting responsibly and have deliberately chosen such an experienced partner as IDnow to stand by our side, supporting us actively in key areas such as player protection, account verification, and the safety of our players.

How does IDnow help you protect your players?

IDnow helps us reliably verify the identity of our players, making sure that no one can play under a false name. At the same time, the solution provides a secure and compliant identity check that effectively prevents underage gaming and fraud. This way, we create a trustworthy and protected environment for all our players.

Why regulation in Germany creates both challenges and opportunities

What were the most significant regulatory and operational challenges you faced in those first months?

The biggest challenges in the regulated German market have remained unchanged since legalization. These primarily include the high tax burdens in Germany, which have a negative impact on payout ratios and the overall gaming experience of virtual slot machines. In addition, requirements such as a €1 stake limit and the mandatory delay between game rounds (the ‘5-second rule’) pose significant challenges. Since many of these regulations were newly introduced under the 2021 Interstate Treaty on Gambling (GlüStV 2021), a meaningful exchange of experiences was initially difficult. However, we are in contact with various industry representatives and remain hopeful for a more attractive offering for German players in the future.

How does the DGGS work with GGL and other regulators to protect players and combat fraud and how does it stay up to date with any regulatory changes to ensure continuous compliance? 

We are engaged in regular dialogue on multiple levels, collaborating closely with both industry associations and regulatory representatives. In particular our compliance team maintains an ongoing exchange that we experience as collegial, constructive, and open.

Why technology and trusted partners are the backbone of compliance

What role do trusted technology or identity verification partners play in maintaining your compliance and risk posture?

Verification and identity-check technologies are of vital importance. In Germany, strict regulations rightly govern the handling of personal data. To meet these standards effectively, we rely on experienced external providers whose expertise ensures secure, efficient, and reliable processes at a scale that would not be possible manually.

Why responsible gambling is more than a legal requirement

The German GGL regulation is centred on social responsibility and player protection. What specific measures do you have in place to identify and assist players at risk of gambling harm?

At our online casinos JackpotPiraten and BingBong, we analyze player behavior and ensure a safe gaming experience. If signs of problematic gambling emerge, we are able to reach out directly to the player and if necessary, exclude them from play. As part of the regulated market, we see this consistent and responsible approach as one of our core duties in protecting players.

How do you ensure that your responsible gambling tools are actually effective? Do you measure outcomes or make improvements based on player feedback or behavioral data? 

We take responsible gambling very seriously and therefore conduct ongoing monitoring of player activity. If signs of problematic gambling behavior are detected and cannot be changed, we can take a range of measures, including the closure of the player’s account.

Can you describe how the OASIS self-exclusion system is integrated into your platform and how you handle self-excluded or returning players? 

Players can exclude themselves at any time directly on our platforms through the OASIS self-exclusion system. In addition, a ‘panic button’ is available, enabling an immediate 24-hour break from play. Once registered with OASIS, players are automatically blocked from accessing our platforms and are prevented from receiving any form of personalized advertising. These measures reflect our strong commitment to responsible gambling and player protection.

What trends are you seeing in player behavior since the introduction of the new regulatory framework? 

In international comparison, German legislation for virtual slot games is very strict. Tax rates are set at a high level, which negatively impacts the payout ratios of the games. In addition, there are stake restrictions and a requirement for a minimum game round duration of five seconds. Players view these measures very critically and often turn to the less restrictive and more attractive offerings of the black market. As a result, tax revenues in Germany from virtual slot games have been continuously declining, an unfortunate negative trend.

Transparency is key in regulated markets. How do you communicate responsible gambling features and policy updates to your players in a clear and proactive way? 

At Deutsche Gesellschaft für Glücksspiel, raising awareness among players about responsible gaming is a core priority. We follow a dual strategy that goes well beyond legal requirements. In line with regulations, we provide a dedicated information section on our platforms that explains how to use gambling products safely. Clear warnings about potential risks are displayed transparently, and players can access support organizations directly through the links we provide.

Going further, we actively engage our players through a regular newsletter and our innovative Slot Academy. Here, education takes place via live video sessions that continuously address the risks of virtual slot games and promote responsible, informed play.

Why entertainment and responsibility can go hand in hand

Looking ahead, what’s next for DGGS? Are there upcoming developments, features, or goals you’re particularly excited about?

This year we are celebrating the Jackpot Video Awards 2025. The idea for an event together with our players came directly from the community. The Jackpot Video Awards combine entertainment with player protection and are eagerly anticipated by both our team and the players.

Interested in more from our customer interviews? Check out: Docusign’s Managing Director DACH, Kai Stuebane, sat down with us to discuss how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn


FastID

Publish your website without a host

Deploy static sites to Fastly Compute directly from your browser or IDE. Publish blogs, apps, and websites at the edge without hosting.

Wednesday, 17. September 2025

Dark Matter Labs

Where to? Five pathways for a regenerative built environment

Where to next? Five pathways for a regenerative built environment Possibilities for the Built Environment, part 2 of 3

This is the second in a series of three provocations, which mark the culmination of a collaborative exploration between Dark Matter Labs and Bauhaus Earth to consider a regenerative future for the built environment as part of the ReBuilt project.

In this piece, we share five pathways toward regenerative practice in the built environment from Dark Matter Labs’ ongoing mission X0 (Extraction Zero). First outlined in the A New Economy for Europe’s Built Environment, these pathways are currently being developed by X0 in partnerships with cities across Europe.

In the first piece, we suggested how six guiding principles for a regenerative built environment could redirect our focus. In this piece, we lay out five pathways toward regeneration, with suggested benchmarks and possible demonstrators, as a means of starting conversations, and identifying allies and tensions. The final piece in the series uses the configuration of the cement industry to explore the idea of nested economies and possible regenerative indicators.

Toward a process-based definition of regeneration

This piece leans into the friction between today’s extractive norms and the regenerative futures we have yet to realise.

We propose five pathways to establish regenerative practices throughout the built environment: these will span scales and sectors while driving change aligned with the principles laid out in the previous provocation. These pathways represent five modes for developing a multiplicity of new metrics, as well as creating the conditions for further progress to be taken on by future generations. Embedded in this logic are multiple and diverse systemic entry points for various actors to engage along the way.

These pathways are directions of travel that can be launched within the current economic system, without adopting a solution mindset. However, there are still real challenges to progress because of today’s political economy and the scale of the polycrisis. While these pathways can be initiated within the current economic system, to be fully realised they must transform the system itself along the way.

One aspiration for these pathways is that they can capture the imagination and energies of a range of stakeholders, by creating containers for the changes it will take to bring us to a regenerative built environment. If we assume that to reach this future we will need both paradigm-shifting ‘impossible’ ideas and real demonstrations of best practices within our current contexts, then these pathways can hold together the different strands of effort, from the more feasible to the boundary-pushing, in one directional container. In each pathway, we ourselves look toward collaborators across geographies and disciplines to imagine, visualise and orient ourselves toward where these shifts could take us, in 2030, 2050 and beyond.

On a pragmatic level, structures to support initiation and governance of these pathways already exist and can be further fostered. Ownership for pathways can sit at the city or municipal level, supported by city networks such as Net Zero Cities, C40 cities and others, and further enabled through multi-municipal or regional coalitions to reach national scales. This type of multi-scalar, integrated approach to the pathways can create the conditions for bottom-up schemes and ideas in communities and allow these to grow. The scale and pace of the transition we need requires governing decision-makers to have visibility over exceptional ideas that can push at the edges of the Overton window.

These pathways are not wholesale solutions to the problem, but rather provocative visions to incite discussion, draw out coalitions, grow a sense of responsibility and build momentum. It’s not that if we do these five things that a regenerative future will be reached. Rather, these are components of a re-envisioning.

For further exploration of these pathways, please see the white paper A New Economy for Europe’s Built Environment, associated with Dark Matter Labs’ X0 (Extraction Zero) mission.

Pathway 1: Maximising utilisation

Maximising the utilisation of our existing resources, spaces and infrastructures is one of the most transformative actions we can undertake in a context of resource shortage, carbon emissions crisis and labour crisis. That is especially relevant in the European context, where our resource and space use inefficiencies are massive. Unlocking this latent capacity promises significant advancements in social justice and in decoupling space and use creation from extraction and pollution. This pathway develops a range of strategies, from full utilisation of existing building stock to sharing models and flexible space use, supported by instruments such as open digital registries, smart space-use platforms, smart contracts, and the like.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

Deep structural changes in mechanisms to challenge speculative land markets and reform regulatory frameworks will be needed to embed redistributive and democratic principles into the governance of urban space.

Potential challenges:

The implementation of maximal utilisation is severely constrained by today’s profit-driven development logic, which prioritises profit through new development and property speculation over efficient or shared use. Institutional inertia, entrenched ownership regimes and the financialisation of housing all work against such a shift, while digital tools like registries and smart contracts risk reinforcing existing inequalities if not democratically governed.

System demonstrator: reprogramming office buildings from 35% to 90% utilisation, increasing the financial flows of the building

What could this look like in 2050?

Multi-actor spatial governance frameworks and use-based permissions
Dynamic pricing structures for building use based on occupancy and social value creation
Highly durable building structures with adaptable multi-use internal spaces
Outcomes-based financing models tied to social and ecological impacts
Mixed-use public-private-NGO partnerships
Public digital booking platforms for maximised utilisation of spaces

Pathway 2: Next-generation typologies

Next-generation typologies are no longer governed by the principle that form follows function. Instead, they transcend traditional asset classes based on programmatic use, as a new asset class valued for the optionality, flexibility, use efficiency and value creation they provide. Here, decoupling value creation from extraction, systemic inefficiencies and carbon emissions happens by focusing on social capital (for instance, radical sharing and cooperation models) as well as intellectual capital (new innovation models and new design typologies).

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

Without directly challenging speculative land markets, financialisation, and the classed and racialised histories embedded in built form, next-generation typologies may risk becoming a greenwashed evolution of the status quo rather than a transformative departure from it.

Potential challenges:

In capitalist urban systems, typologies and asset classes are produced through financial logics, property relations and commodification. Reframing buildings as flexible, innovation-driven assets may simply reproduce these dynamics in a new guise, reinforcing speculative value creation and market discipline under the banner of sustainability.

System demonstrator: Community living rooms – lightweight extensions on existing buildings, providing amenities with the right to use

What could this look like in 2050?

Building public awareness of the benefits of social time for mental health
New standards and codes for shared spaces and assets
Tax reductions linked to the carbon reduction impact of maximising efficiency
Shared kitchens, living rooms, laundry rooms, appliances, tools and workshops
Policy innovation enabling categorisation of shared spaces
Increased cross-generational support; decreased loneliness, depression and stress levels

Pathway 3: Systems for full circularity

Even though we have comprehensive knowledge of circularity, current levels in Europe are extremely low, and globally the rate is declining; this work therefore focuses on the systems unlocking circularity and the instruments driving its advancement on the ground. Apart from a comprehensive understanding of the craft (design for disassembly, development of city-scale material component networks, use of non-composite materials), we need the institutional economy and systems enabling circularity. That includes instruments such as material registries, material passports, financing mechanisms and design regulations, all developed simultaneously to unlock the new systems for circularity.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

For circularity to be genuinely transformative, it must be accompanied by political and economic restructuring — challenging the growth imperative, redistributing material control, and embedding democratic governance into how urban resources are managed and reused.

Potential challenges

Structural barriers hinder circularity. Extraction, planned obsolescence and short-term profit maximisation, which are the main imperatives in the current system, actively disincentivise long-term material stewardship. Circular practices often require slower, more localised and collaborative modes of production, which clash with the logics of global supply chains, speculative development and financialised real estate.

Moreover, without addressing issues of ownership, labour relations and uneven access to materials and technologies, circular systems risk being implemented in ways that benefit private actors while offloading costs onto public bodies or marginalised communities.

System demonstrator: City-scale architectural components bank, with developers’ right-to-use models
What could this look like in 2050?

Material data registries and warranties for secondary materials
Lightweight extensions, maximising utilisation and reuse of existing buildings
City-scale material balance sheets and data registries for localised material cycles
Civic material hubs for storage and distribution, zero-carbon transport and logistics networks
Demountable and highly adaptable building design
Sinking funds for facilitating material reuse during deconstruction

Pathway 4: Biogenerative material economy

The long-term future of our material economy must be bioregenerative. This transition needs a deep understanding of systems impacts, avoiding further global biodiversity and land degeneration through green growth. This shift requires a transformation in land use for materials, moving from “green belts” to permaculture and regenerative methods, from supply chains to local supply loops. This requires developing new local material forests, zero-carbon local transport, non-polluting construction methods, as well as the policy, operational and financial innovation for a successful implementation of a fully biocompatible material economy.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

True transformation will involve challenging capitalist land markets, redistributing land and decision-making power and centering indigenous and community-led stewardship practices within the material economy.

Potential challenges:

We must not underestimate how global capitalism — through land commodification, agribusiness and extractive supply chains — actively undermines regenerative potential. Transforming green belts into permaculture zones, or establishing local material forests, requires not just technical and policy innovation, but a fundamental shift in land ownership, governance and power relations. Without addressing who controls land and resources, and whose interests are served by current material economies, there is a danger that biogenerative strategies become niche or elite enclaves, rather than systemic solutions.

System demonstrator: Neighbourhood gardens of biomaterials for insulation panel components for on-site retrofitting

What could this look like in 2050?

- Regenerative agriculture & forestry practices and open education programs
- Certification for regenerative agriculture & carbon storage
- Macro-investments in bioregional forests & urban farms
- Civic biomaterial experimentation workshops & micro-factories
- Land restoration & rewilding sinking funds
- Regional, regenerative biomaterial supply chains, zero-carbon logistics networks

Pathway 5: Shifting comfort, increasing contact

The ways we live in buildings today alienate us from our environmental and earthly context. Today’s built environment is designed to optimise for sterilisation through conditioned environments, separating us from the biomatter that is both input and output to our livelihoods. In providing comfort, we have depended on the extraction of resources, other species, biodiversity and, ironically, ourselves. We need to decouple the economy of comfort, shorthand here for human-optimised environmental conditions, from extraction and externalisation. Pathways for driving this shift include participation and care models, strengthening social values, shifting the human relation to nature, a move from technological to ecological services providing comfort, an increase in social and physical activity, and a shift from the building scale to other scales, such as city-scale nature-based infrastructures and micro-scale furniture or clothing.

Image: Dark Matter Labs, ‘A New Economy for Europe’s Built Environment’ white paper, for New European Bauhaus lighthouse project Desire: An Irresistible Circular Society, 2024.

Real progress will involve confronting the socio-economic systems that produce uneven access to comfort, land and energy, and reconfiguring them through justice-oriented redistribution, democratic urban governance and decommodified approaches to housing and care.

Potential challenges:

In this pathway, we must not romanticise behavioural or cultural change without sufficiently addressing the structural conditions that produce and maintain the current ‘economy of comfort’. The alienation it describes is not simply the result of misplaced design priorities or cultural habits, but of a capitalist system that commodifies comfort, standardises it through global construction norms, and externalises its costs onto ecosystems and marginalised communities. Some people experience the comfort constructed by today’s systems much more than others.

Shifting toward ecological and participatory models of comfort is valuable, but without challenging the political economy that privileges resource-intensive, climate-controlled lifestyles for some while denying basic shelter or agency to others, such shifts may remain symbolic or limited in scope.

System demonstrator: Retrofitting a neighbourhood to new comfort standards to increase the area’s economic resilience to a changing energy landscape

What could this look like in 2050?

- New standards and codes for comfort
- Tax reductions linked to shifts in investments from mechanical towards ecological services
- Curriculum rethinking lifestyles in relation to health impacts
- Investments in extending ecological services and permeable surfaces for flood mitigation, and indoor and outdoor comfort through passive climatisation
- Infrastructures for integral value accounting
- Capturing and measuring physical and mental health impacts
- More community and individual knowledge about how to deal with the material world, ranging from biomatter to biodegradable consumer goods
- Local biowaste sorting and utilisation in industry/agriculture

From a static to a process-based definition of a regenerative future

In viewing our transition to a regenerative built environment through these core shifts, we move toward a process-based definition of what is regenerative: an understanding that is not calculated from fixed, profit-driven metrics, determined on the basis of isolated data points, or tied to particular policy benchmarks, but is instead dynamic, intuitive, and assembled from across knowledge-spheres and perspectives, with their associated means of measurement.

A process-based definition might adapt to the changing data landscape, material reality, technopolitical ground conditions and Overton windows of different contexts. Absolute metrics like embodied carbon are difficult to measure accurately and fail to capture the whole picture, while targets pegged to individual points in time and specific standards can quickly become obsolete. A process-based approach is inspired by DML’s Cornerstone Indicators [more information at this link], a methodology that creates composite, intuitive indicators for assessing change over time, co-developed and governed in place.

Originally co-designed with Dr Katherine Trebeck, the Cornerstone Indicators were initiated in the city of Västerås in Sweden to support citizens in co-designing simple, intuitively understandable indicators that encapsulate what thriving means to the people of the Skultana district. The indicators, which align with overall goals like ‘health & wellbeing’ and ‘strong future opportunities’, can facilitate greater understanding of a place, enable further conversation, and guide future decisions. The initial 9-month workshop process to design this first iteration of the Cornerstone Indicators resulted in indicators such as ‘the number of households who enjoy not owning a car’ and ‘regularly doing a leisure activity with people you don’t cohabit with’, which were analysed and offered to local policymakers. The success of this process has led to explorations of the Cornerstone Indicator process across Europe and North America. Initiatives like the Cornerstone Indicators present a model of how momentum toward a regenerative future for the built environment can be built. It is urgent that we begin using process-based definitions and practices to bring more people to the table and increase the potential for transition pathways to gain traction.
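The arithmetic behind a composite indicator of this kind is simple enough to sketch. The metric names, ranges and equal weighting below are invented for illustration and are not the actual district indicators; real Cornerstone Indicators are co-designed and governed in place.

```python
from statistics import mean

def normalise(value, lo, hi):
    """Scale a raw metric onto 0..1 given its expected range, clamped."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def cornerstone_score(metrics, ranges):
    """Average the normalised metrics into one intuitive 0..1 score."""
    return mean(normalise(metrics[k], *ranges[k]) for k in metrics)

# Hypothetical yearly readings for a district (illustrative only)
ranges = {
    "households_happy_without_car_pct": (0, 100),
    "weekly_leisure_with_non_cohabitants_pct": (0, 100),
}
year_1 = {"households_happy_without_car_pct": 20,
          "weekly_leisure_with_non_cohabitants_pct": 40}
year_2 = {"households_happy_without_car_pct": 30,
          "weekly_leisure_with_non_cohabitants_pct": 55}

print(round(cornerstone_score(year_1, ranges), 3))  # 0.3
print(round(cornerstone_score(year_2, ranges), 3))  # 0.425
```

The point of the composite is the trend, not the absolute number: the score moving from 0.3 to 0.425 is what prompts the local conversation.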

Conclusion

In the first two pieces in this series, we explored the idea of a regenerative future in the built environment by examining how our current frameworks for regeneration fall short of meeting the demands of the present moment, and we outlined principles and pathways for charting a course toward genuine transformation.

In providing examples of leading-edge organisations making progress toward a regenerative future, these pieces are intended to invite conversation, feelings of agency and reflection, even in the face of prevailing systemic constraints. Rather than offering neat solutions, this piece seeks to open doors to new possibilities.

The context and projections offered here raise a number of questions. For a wholesale transition, it will be important to understand what will indicate progress toward regeneration, as well as how decisions will be made in order to resist the co-opting of regenerative principles into status quo ways of operating.

The remaining piece in this series will explore:

How configurations of material extraction, labour and monetary capital entrench nested economies and particular power relations, using the example of the cement industry Possible indicators of progress toward a regenerative built environment, and of the limitations encountered

Together these pieces aspire to introduce the idea of a regenerative built environment and associated promises and challenges, to inspire a sense of direction and to sketch the broader systemic shifts to which we must commit.

This publication is part of the project ReBuilt “Transformation Pathways Toward a Regenerative Built Environment — Übergangspfade zu einer regenerativen gebauten Umwelt” and is funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) on the basis of a resolution of the German Bundestag.

The five pathways in this provocation are based on the white paper A New Economy for Europe’s Built Environment and ongoing work by Ivana Stancic and Indy Johar, as part of the X0 (Extraction Zero) mission at Dark Matter Labs.

In addition, this piece represents the views of the team, including, from Dark Matter Labs, Emma Pfeiffer and Aleksander Nowak, and from Bauhaus Earth, Gediminas Lesutis and Georg Hubmann, among other collaborators within and beyond our organisations.

Where to? Five pathways for a regenerative built environment was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Shyft Network

Shyft Network’s Veriscope Powers Compliant Crypto Trading with Nowory in India

India’s crypto market, with 93 million investors, demands infrastructure that balances innovation with FATF Travel Rule compliance. Shyft Network, a leading blockchain trust protocol, has partnered with Nowory, an Indian crypto trading platform, to integrate Veriscope, the only frictionless solution for regulatory compliance. This collaboration showcases Veriscope’s ability to enable secure, compliant digital finance in high-growth markets while prioritizing user privacy.

Why Veriscope Matters for India’s Crypto Ecosystem

As India’s regulatory framework evolves, Virtual Asset Service Providers (VASPs) need tools to ensure compliance without complexity. Veriscope leverages cryptographic proof technology to facilitate secure, privacy-preserving data exchanges, aligning with FATF Travel Rule requirements. By integrating Veriscope, Nowory demonstrates how VASPs can achieve regulatory readiness seamlessly.

Nowory’s Role in the Partnership

Nowory, launched in August 2025, is an Indian crypto trading platform designed to serve India’s 93 million crypto investors with a secure and efficient bank-to-crypto gateway. By integrating Veriscope, Nowory aligns with global compliance standards, eliminating risky P2P trading and supporting India’s growing demand for regulated crypto infrastructure.

Key Benefits of Veriscope’s Integration

The Shyft Network-Nowory partnership highlights Veriscope’s power to transform crypto compliance:

- Frictionless Compliance: Simplifies FATF Travel Rule adherence without burdening platforms or users.
- Privacy-First Design: Protects user data using cryptographic proofs, ensuring autonomy.
- Scalable Solutions: Supports growing VASPs in dynamic markets like India.

Zach Justein, co-founder of Veriscope, emphasized the integration’s impact:

“India’s crypto market needs solutions that streamline compliance while preserving privacy. Veriscope’s integration with Nowory reflects Shyft Network’s commitment to secure, compliant blockchain infrastructure.”
Powering a Compliant Crypto Future

Nowory joins a global network of VASPs adopting Veriscope to meet regulatory demands seamlessly. This partnership underscores the need for secure, compliant crypto infrastructure in high-growth markets like India.

About Veriscope

Veriscope, built on Shyft Network, is the leading compliance infrastructure for VASPs, offering a frictionless solution for FATF Travel Rule compliance. Powered by User Signing, it enables VASPs to request cryptographic proof from non-custodial wallets, simplifying secure data verification while prioritizing privacy. Trusted globally, Veriscope reduces compliance complexity and empowers platforms in regulated markets.
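Requesting cryptographic proof from a non-custodial wallet follows a familiar challenge–response shape, sketched below. This is not Veriscope’s actual API: real User Signing relies on asymmetric signatures from the wallet’s own key, whereas this runnable standard-library sketch substitutes HMAC for the signing primitive, and every name in it is hypothetical.

```python
import hashlib
import hmac
import secrets

def vasp_issue_challenge() -> str:
    """VASP side: a fresh, unpredictable one-time challenge per request."""
    return secrets.token_hex(16)

def wallet_sign(wallet_key: bytes, challenge: str) -> str:
    """Wallet side: prove control of the key by signing the challenge.
    (Stand-in for an asymmetric wallet signature.)"""
    return hmac.new(wallet_key, challenge.encode(), hashlib.sha256).hexdigest()

def vasp_verify(wallet_key: bytes, challenge: str, proof: str) -> bool:
    """VASP side: recompute the expected proof and compare in constant time."""
    expected = wallet_sign(wallet_key, challenge)
    return hmac.compare_digest(expected, proof)

key = b"demo-wallet-key"                 # placeholder signing key
challenge = vasp_issue_challenge()
proof = wallet_sign(key, challenge)
assert vasp_verify(key, challenge, proof)               # genuine holder passes
assert not vasp_verify(b"other-key", challenge, proof)  # wrong key fails
```

Because each challenge is single-use, a captured proof cannot be replayed against a later verification request.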

About Nowory

Nowory is an Indian crypto trading platform launched in August 2025, designed for secure and efficient trading of assets like Bitcoin, Ethereum, and Solana. It provides a direct bank-to-crypto gateway for India’s 93 million crypto investors, emphasizing regulatory readiness and the elimination of risky P2P trading.

Stay ahead in crypto compliance.

Visit Shyft Network, subscribe to our newsletter, or follow us on X, LinkedIn, Telegram, and Medium.

Book a consultation at calendly.com/tomas-shyft or email bd@shyft.network

Shyft Network’s Veriscope Powers Compliant Crypto Trading with Nowory in India was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 16. September 2025

Extrimian

Why Extrimian is an AI-First Company

Why Extrimian’s AI‑First Approach Improves Digital Credential Solutions

Let’s start by explaining why Extrimian is an AI-First company. Our goal is to give universities and other startups a faster, more reliable way to issue, manage, and verify credentials—while ensuring our own teams work smarter behind the scenes. This post explains how our AI‑first ethos (via the internal agent Micelya) makes Extrimian more efficient, and how our University Portal product solves the very real problems of diploma fraud, identity theft and manual verifications.

TL;DR

- Extrimian’s AI‑first philosophy refers to how we work internally, not how we verify credentials.
- Our agent Micelya organises knowledge and speeds up development and support.
- Self‑Sovereign Identity (SSI) and cryptographic signatures secure the credentials; AI is not used in the verification flow.
- By using AI internally and SSI externally, Extrimian delivers more complete features, faster updates and a calmer verification process for universities and students.

What does “AI‑first” really mean at Extrimian?

When Extrimian says we are AI‑first, we’re talking about our own processes, not the product’s cryptographic core. We have an internal agent called Micelya that acts like a living knowledge hub for our teams. It stores and organises product specifications, SOPs, design decisions and customer insights, making them easy to find and apply. 

How do we use Micelya internally? Agile and interdisciplinary processes

To keep Micelya truly useful, our product and engineering teams continually feed it with the latest internal documentation, release notes, process playbooks and step‑by‑step guidelines for every product. This curated knowledge helps the agent surface the right answers, recommend the correct templates and shorten hand‑offs across the organisation.

When engineers or product managers work on a new release, Micelya suggests the right protocol or template and reminds us of past decisions. This means we iterate more quickly, avoid duplication, and keep every improvement in play. The agent doesn’t handle your credentials; it powers how we build and support the product.

How does Micelya make Extrimian faster and more consistent?

Micelya’s role is to optimize Extrimian’s internal processes. It automatically flags related resources—SOPs, integration steps, templates—at the moment a team member needs them. It nudges us when something requires approval or when a template must be updated. It also stores lessons learned from support tickets and feature requests, so improvements become part of our future releases. This means we respond to universities more quickly, address issues more consistently, and ship updates faster. Because the agent streamlines our internal workflow, you receive a product that evolves continuously without long delays.

Why does AI‑first matter for universities if it’s only internal?

You might wonder why our internal AI should matter to you. Simply put, Micelya makes Extrimian more efficient, which reflects in our product and support. Faster iteration cycles mean new features and fixes arrive sooner. A shared knowledge hub ensures you receive consistent advice regardless of who answers your call. When updates roll out, they’re informed by a complete history of past decisions and user feedback. Although AI never touches your credentials or verification flow, our AI‑first culture ensures we deliver a more refined, dependable product.

Why is it good to be an AI‑first company?

Being AI‑first has benefits that extend beyond Extrimian; companies in many sectors adopt AI to become more responsive, innovative, and resilient. Here’s a concise summary of key advantages and how they play out in our case:

| Benefit of being AI‑first | Impact on operations | Extrimian example |
| --- | --- | --- |
| Efficiency | Faster decisions & shorter release cycles | Micelya surfaces the right SOPs and templates so teams ship updates quicker |
| Knowledge retention | Shared, up‑to‑date repository of policies & best practices | Our knowledge hub prevents repeated mistakes and speeds new‑hire onboarding |
| Cross‑team alignment | Consistent workflows and communication across departments | Product, engineering & support teams work from the same playbook |
| Continuous improvement | AI highlights patterns & informs roadmaps | Micelya captures feedback loops so each release builds on lessons learned |
| Better customer experience | Quicker responses & higher‑quality products | Universities see faster support, smoother updates and less rework |

This table illustrates why an AI‑first mindset isn’t just a buzzword—it underpins real gains in speed, quality and alignment. For Extrimian, those gains help us deliver a stable verification product more rapidly and consistently.

What do students and verifiers experience?

From a student’s perspective, digital credentials mean convenience and control. They receive tamper‑proof proofs right in their ID Wallet and share them through a link or QR code. They aren’t forced to disclose their entire transcript when only enrollment status is needed. For verifiers, checking credentials is just as straightforward: visit the university’s verification page, scan the QR code or paste the link, and see an immediate result with clear guidance. No waiting for emails, no guesswork, and no reliance on appearance. This streamlined experience increases trust and speeds up decision‑making for everyone.
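Disclosing enrollment status without revealing the full transcript is the kind of selective disclosure that salted-hash schemes (such as SD-JWT) make possible. The sketch below is a simplified, hypothetical illustration of that shape, not Extrimian’s implementation; a real credential would also carry the issuer’s signature over the digests.

```python
import hashlib
import json
import secrets

def commit(claim_name, value):
    """Salted hash of one claim; the salt makes values unguessable."""
    salt = secrets.token_hex(8)
    digest = hashlib.sha256(json.dumps([salt, claim_name, value]).encode()).hexdigest()
    return digest, (salt, claim_name, value)

# Issuer: hash every claim; only the digests go into the (signed) credential.
claims = {"enrollment_status": "enrolled", "gpa": 3.7, "transcript": ["CS101", "MA201"]}
credential_digests, disclosures = {}, {}
for name, value in claims.items():
    digest, disclosure = commit(name, value)
    credential_digests[name] = digest
    disclosures[name] = disclosure

# Holder: share only the enrollment disclosure, withholding the rest.
shared = {"enrollment_status": disclosures["enrollment_status"]}

# Verifier: recompute each disclosed hash and match it against the credential.
def verify(shared, credential_digests):
    for name, (salt, claim_name, value) in shared.items():
        digest = hashlib.sha256(json.dumps([salt, claim_name, value]).encode()).hexdigest()
        if digest != credential_digests.get(name):
            return False
    return True

assert verify(shared, credential_digests)
assert "gpa" not in shared  # undisclosed claims stay private
```

The verifier learns exactly one fact — enrollment status — yet can still tie it cryptographically to the issuer’s credential.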

AI for process, cryptography for trust

Extrimian’s approach balances two forces: cryptographic security for credentials and AI‑driven efficiency for internal work. SSI and digital signatures make diplomas and enrolment proofs tamper‑proof, while the AI‑first mindset (through Micelya) reduces friction in our development and support processes. The two realms remain separate; AI does not verify credentials, but it helps us build better products and respond faster. For universities, this means a reliable, ready‑to‑use product backed by a company that continuously improves without sacrificing trust.

Recommended resources

Internal links

- University Portal overview – Learn more about our University Portal and how it issues tamper‑proof credentials.
- ID Wallet page – See how the ID Wallet lets students carry and share their credentials securely.
- About Extrimian / Our Story – Discover who we are and why we invest in internal AI to deliver better products.
- Blog archive or Learning Resources – For a deeper dive into SSI and digital identities, explore our resources page or related articles.
- Contact or Demo page – If you’d like to see the portal in action, book a demo with our team.

External links

- W3C Verifiable Credentials specification – The W3C’s Verifiable Credentials Data Model defines how digital credentials are issued and verified.
- Self‑Sovereign Identity (SSI) explainer – Self‑Sovereign Identity (SSI) is an approach that puts individuals in control of their data; this SSI overview explains the core principles.
- Industry research and reports by EDUCAUSE – Recent studies show credential fraud is on the rise; this EDUCAUSE report outlines the challenge for universities.
- FIDO Alliance passkey standards – Passkeys are based on the FIDO2/WebAuthn standard for secure, phishing-resistant login.


The post Why Extrimian is an AI-First Company first appeared on Extrimian.


Holochain

Dev Pulse 151: Network Improvements in 0.5.5 and 0.5.6


We released Holochain 0.5.5 on 19 August and all tooling and libraries are now up to date.

Holochain 0.5.5 and 0.5.6 released

With these releases, we’re continuing to work on network performance for the Holochain 0.5.x series. There’s been a bunch of bug fixes and improvements:

- New: At build time, Holochain can be switched between the libdatachannel and go-pion WebRTC libraries, with libdatachannel currently the default in the Holochain conductor binary release and go-pion the default in Kangaroo-built hApps. go-pion is potentially free from an unwanted behaviour in libdatachannel, in which the connection is occasionally force-closed after a period of time. If you’ve seen this behaviour, consider trying your hApp in a Kangaroo-built binary to see if it’s resolved.
- Changed: Some tracing messages are downgraded from info to debug in Kitsune2 to reduce log noise.
- Bugfix: Make sure the transport layer has a chance to fall back to relay before timing out a connection attempt.
- Bugfix: When Holochain received too many requests for op data, it would start closing connections with the peers making the requests it couldn’t handle. This caused unnecessary churn to reconnect, rediscover what ops need fetching, and send new requests. Now the requests that can’t be handled are simply dropped and retried; the retry mechanism was already in place, so that part just works. When joining a network with a lot of existing data, the sync process is now a lot smoother.
- Bugfix: Established WebRTC connections would fall back to relay mode when they failed; now the connection is dropped, and peers will try to establish a new WebRTC session.
- Bugfix: If a WebRTC connection could not be established, the connection would sometimes be left in an invalid state where it could not be used to send messages and Holochain wouldn’t know to replace the connection to that peer.
- Bugfix: Holochain was using the wrong value for DHT locations. This led to differences in the DHT model between peers, who would then issue requests for the missing data. The data couldn’t be found because the requested locations didn’t match what was stored in the database, so DHT sync would fail to proceed after some period of time. Note: updating a hApp from Holochain 0.5.4 or earlier might cause a first-run startup delay of a few seconds while the chunk hashes are recalculated.
- Bugfix: Kitsune2 contains a fix for an out-of-bounds array access bug in the DHT model.

Shifted priorities for 0.6

We’d originally planned to start the groundwork for coordinator updates (allowing a cell’s coordinator zomes to be updated) and DHT read access gating via membrane proofs in Holochain 0.6. We’re now going to push those to a later release in favour of focusing on warrants and other features that address the strategically critical priorities of our partners.

These are the major themes of our work on 0.6:

- Resolving incomplete implementations of the unstable warrants feature, writing more tests, and marking the feature stable for all app and sys validation except chain forks.
- Finishing the work that allows Holochain to block peers at the network level if they publish invalid data.
- Making sure that the peer connection infrastructure is production-ready.
- Continuing to build out the Wind Tunnel infrastructure and test suite.

There are a few smaller themes; check out the 0.6 milestone on GitHub for the full story.

Wind Tunnel updates

With many of the big gains in network performance and reliability realised in the 0.5 line and two new developers joining our team, we’ve freed up developer hours to focus on the Wind Tunnel test runner once again. Our big goal is: make it more usable and used. To this end, here are our plans:

- We want to run the tests on a regular, automated schedule to gather lots of data and track changes over Holochain’s development.
- Rather than requiring a conductor to run alongside Wind Tunnel, Wind Tunnel itself will run and manage the Holochain conductor, allowing us to test conductors with different configs or feature flags within a scenario.
- Wind Tunnel already collects metrics defined in each scenario, but we are expanding this to collect metrics from the host OS, such as CPU usage, and from the conductor itself. This will give us insight into system load and how the conductor performs during scenarios.
- More scenarios will be written, including complex ones involving malfunctioning agents and conductors with different configurations.
- More dashboards are being created to display the new metrics and give us insight into how scenarios perform from version to version. These will make it easy to track how Holochain’s performance envelope changes as new features are added, and to prioritise where to focus our optimisation efforts.
- We plan to run multiple scenarios on a single runner in parallel to make better use of the compute resources we have in our network. Along with adding more runners to the network, this will reduce the time it takes to run all of the tests, which will let us run the tests more often.
- We’re creating an OS installation image for Wind Tunnel runners, allowing any spare computer to be used for Wind Tunnel testing. This will let people support Holochain by adding their compute power to our own network.

Holochain Horizon livestream

If you’re reading this, you probably care about more than just the state of Holochain development. We’re starting a series of livestreams that talk about things like where the Holochain Foundation is headed and what’s happening in the Holochain ecosystem.

The first one, a fireside chat between Eric Harris-Braun, the executive director of the Foundation, and Madelynn Martiniere, the Foundation’s newest council member and ecosystem lead, was on Wed 30 Jul at 15:00 UTC. Watch the replay on YouTube.

Next Dev Office Hours call: Wed 17 Sept

Join us at the next Dev Office Hours on Wednesday 17 Sept at 16:00 UTC — it’s the best place to hear the most up-to-date info about what we’ve been working on and ask your burning questions. We have these calls every two weeks on the DEV.HC Discord, and the last one was full of great questions and conversations. See you there next time!


Indicio

From paper to Proven: what the EUDI wallet means for the secure document printing industry

The post From paper to Proven: what the EUDI wallet means for the secure document printing industry appeared first on Indicio.
The shift to digital identity is accelerating and 2026 will be a critical year for the security printing and paper businesses. Now is the time to prepare.

By Helen Garneau

For decades, trust has been printed. Passports, ID cards, certificates, and other official, government-issued, securitized documents have been how people prove who they are. The European Digital Identity (EUDI) wallet signals the end of the era of exclusively paper- and plastic-based identity.

The regulation, set to be mandated within the next year as new technologies roll out, introduces a way for citizens, residents, and businesses to securely share digital identity data in the form of Verifiable Credentials across all EU member states; banking, travel, enterprise, and government services are already piloting credential implementations.

As with many transformative technologies, change happens slowly and then very fast.  

Companies that adapt quickly will stay relevant and leverage digital identity to deliver better products and services and innovate around seamless authentication and digital trust. Those that delay risk being left behind.

The question for companies in the secure document printing market is: how do you avoid becoming obsolete when cryptography can make digital credentials every bit as trustworthy as the most secure physical document?

Although the EUDI wallet framework architecture describes Verifiable Credentials, a digital identity technology that is interoperable, secure, and easy to use, the shift to digital identity doesn’t spell the end of physical documents.

Position for the great transition

The next few years will see a transition to verifiable digital identity and verifiable digital data, and identity documents are the on-ramp. A key example: the International Civil Aviation Organization (ICAO) specifications for Digital Travel Credentials start with self-derived credentials (DTC-1). A person extracts the data from the passport’s RFID chip, the image in the chip is compared with a real-time liveness check of the person scanning the passport, and a digital credential version of the passport is issued. The credential can then be validated to confirm the data came from an official government source. Travellers will still need their physical passport, but only as a backup.

The next step will be governments directly issuing digital passport credentials (following DTC-2 specifications) along with a person’s physical passport. The person will still need this physical passport when they travel.

In both cases, the digital passport credential will do all the heavy lifting in terms of identity authentication that enables the passenger to seamlessly check-in, access a lounge, cross a border, pick up a rental car, and check into their hotel. 

After these have been successfully implemented, we’ll move to a DTC-3 type credential — a fully digital passport where no physical back up is required. 

Where are we in the transition process? Well, with Indicio Proven, governments are able to issue DTC-2 type credentials. Expect to see them soon.

Driver’s licenses, diplomas

It’s not just passports that are being digitalized. The same liveness check and face-mapping used for DTCs can be applied to other government-issued documents, such as driver’s licenses, with Optical Character Recognition reading the data in the absence of an RFID chip. More US states are adopting Mobile Driver’s Licenses (mdoc/mDL), while the European Union expects this standard to be implemented in Europe by 2030.

One snag in this rollout is that many mDL implementations don’t include the verification software businesses need to validate digital versions, so these businesses still rely on physical driver’s licenses for customer identity authentication. If you want an mDL with simple, mobile, scalable verification, Indicio Proven has you covered.

Diplomas, degrees, course transcripts and certificates are also being rendered as tamper-proof digital credentials through the Open Badges 3.0 specification. While their physical counterparts are not secured in the same way as government-issued identity, the Open Badges 3.0 standard makes these documents impossible to fake, binds them to their rightful holders, and renders them instantly verifiable.

The key to managing the transition to digital identity documents is making that transition easy. And this is where Indicio Proven is unique in the marketplace.

Indicio Proven: your bridge from the physical to digital

Indicio Proven® gives printing companies a direct path into the digital era by transforming secure physical documents into Verifiable Credentials, the same technology outlined in the EUDI specification.

With Proven, your physical products become anchors, on-ramps, or companions to digital credentials. Passports can be turned into DTCs, and more than 15,000 types of identity documents from 250+ countries and territories can be credentialized. Driver’s licenses and other official documents can also be validated, bound with biometrics, and issued as tamper-proof digital Verifiable Credentials that are:

Fraud-resistant and cryptographically secure
Combined with biometrics and stored on the individual’s own device
Portable across borders
Instantly verifiable without complex checks

Proven is a fast, simple, and cost-effective way to extend your role in the EUDI realm today that helps your customers:

Save costs by reducing manual checks
Protect against fraud with secure digital credentials
Unlock new revenue by offering digital trust services alongside physical products

This technology also opens the door to offering new services in identity verification. When passports become Digital Passport Credentials and driver’s licenses become mobile driver’s licenses, organizations like financial institutions, airlines, and government agencies can verify and trust the information. Processes that were once inefficient and cumbersome—such as age verification, KYC, and cross-border travel—become seamless, premium services that create value and potential revenue streams every time they’re issued and verified.

The next chapter for printing and paper

Physical cards and certificates will not disappear overnight, but their primary value will shift. And that doesn’t mean paper-based industries are left out—your expertise in trust, security, and document integrity is more valuable than ever. 

Proven makes this transition easy, enabling your business to grow as identity goes digital. With Indicio, you can carry that expertise into the digital age and position your company at the center of the EUDI wallet revolution.

The world is moving from paper to Proven. The opportunity is here—are you ready to take it? 

Contact us today to get your complimentary EUDI digital identity strategy from one of our experts.

###

The post From paper to Proven: what the EUDI wallet means for the secure document printing industry appeared first on Indicio.


Ontology

How Smart Accounts Are Reinventing The Web3 Wallet


If you’ve ever used a crypto wallet like MetaMask, you’ve used an externally owned account (EOA). It’s a simple pair of keys: a public address that acts as your identity and a private key that proves you own it. This model is powerful but rigid, putting the entire burden of security and complexity on the user. Lose your seed phrase? Your funds are gone forever. Find transactions confusing? The ecosystem has little flexibility to help.

A new standard is emerging to solve these problems, moving us from rigid key-based wallets to programmable, user-friendly interfaces. The answer is smart accounts.

What is a smart account?

A smart account (or smart wallet) is not controlled by a single private key. Instead, it is a smart contract that acts as your wallet. This shift from a key-based account to a contract-based account is revolutionary because smart contracts are programmable. They can be designed to manage assets and execute transactions based on customizable logic, enabling features that were previously impossible.

This transition is powered by account abstraction (AA), a concept that “abstracts away” the rigid requirements of EOAs, allowing smart contracts to initiate transactions. While the idea isn’t new, it recently gained mainstream traction thanks to a pivotal Ethereum standard: EIP-4337.

EIP-4337 (the game changer)

EIP-4337: Account Abstraction via Entry Point Contract achieved something critical: it brought native smart account capabilities to Ethereum without requiring changes to the core protocol. Instead of a hard fork, it introduced a higher-layer system that operates alongside the main network.

Here’s how it works:

UserOperations: You don’t send a traditional transaction. Instead, your smart account creates a UserOperation — a structured message that expresses your intent.
Bundlers: These network participants (such as block builders or validators) collect UserOperation objects, verify their validity, and bundle them into a single transaction.
Entry Point Contract: A single, standardized smart contract acts as a gatekeeper. It validates and executes these bundled operations according to the rules defined in each user’s smart account.

This system is secure, decentralized, and incredibly flexible.
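To make the flow concrete, here is a minimal sketch of the fields an EIP-4337 (v0.6) UserOperation carries. The field names follow the specification; the addresses and values are hypothetical, and a real wallet would build and sign these with an account-abstraction SDK rather than by hand.

```python
from dataclasses import dataclass, asdict

@dataclass
class UserOperation:
    """The fields of an EIP-4337 UserOperation (v0.6 struct)."""
    sender: str                    # address of the smart account contract
    nonce: int                     # anti-replay value managed per account
    init_code: bytes               # deploys the account on first use, else empty
    call_data: bytes               # the call(s) the account should execute
    call_gas_limit: int            # gas for the execution step
    verification_gas_limit: int    # gas for the account's validation logic
    pre_verification_gas: int      # overhead the bundler is compensated for
    max_fee_per_gas: int           # EIP-1559-style fee caps
    max_priority_fee_per_gas: int
    paymaster_and_data: bytes      # non-empty when a paymaster sponsors gas
    signature: bytes               # checked by the account contract itself

# A hypothetical, unsigned operation for an already-deployed account.
op = UserOperation(
    sender="0xAb00000000000000000000000000000000000000",
    nonce=0,
    init_code=b"",
    call_data=b"\x12\x34",        # placeholder encoded call
    call_gas_limit=100_000,
    verification_gas_limit=150_000,
    pre_verification_gas=21_000,
    max_fee_per_gas=30_000_000_000,
    max_priority_fee_per_gas=1_000_000_000,
    paymaster_and_data=b"",        # empty: the account pays its own gas
    signature=b"",                 # filled in after signing
)

# A bundler would collect many such ops and submit them in one transaction
# to the Entry Point contract, which validates each against its account.
bundle = [asdict(op)]
```

Note how gas sponsorship is just data: a paymaster willing to cover fees puts its address and proof into `paymaster_and_data`, and the Entry Point charges it instead of the user.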

Other key proposals (EIP-3074 and EIP-7702)

The journey to account abstraction has involved other proposals, each with different approaches.

EIP-3074: This proposal aimed to allow existing EOAs to delegate control to smart contracts (called invokers). While simpler in some ways, it raised security concerns due to the power given to invoker contracts. It has since been paused.

EIP-7702: Proposed by Vitalik Buterin, this upgrade would allow an EOA to temporarily grant transaction permissions to a smart contract. It offers a more elegant and secure model than EIP-3074 and may complement — rather than replace — the infrastructure built around EIP-4337.

For now, EIP-4337 is the live standard that developers and wallets are adopting.

Why smart accounts matter

The real value of smart accounts lies in the user experience and security improvements they enable.

Gas abstraction: Apps can pay transaction fees for their users or allow payment via credit card, removing a major barrier to entry.
Social recovery: Lose your device? Instead of a single seed phrase, you can assign “guardians” — other devices or trusted contacts — to help you recover access.
Batch transactions: Perform multiple actions in one click. For example, approve a token and swap it in a single transaction instead of two.
Session keys: Grant limited permissions to dApps. A game could perform actions on your behalf without being able to withdraw your assets.
Multi-factor security: Require multiple confirmations for high-value transactions, just like in traditional banking.

The future is programmable
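As one illustration of that programmability, social recovery at its core is just a guardian-threshold rule. This is a simplified off-chain model, not a real wallet's implementation — production smart accounts enforce it on-chain with signatures and timelocks — but the logic is the same.

```python
# Simplified model of social recovery: a 2-of-3 guardian threshold.
# Real smart accounts verify guardian signatures on-chain and add
# timelocks; here we model only the core approval check.

def can_recover(approvals: set, guardians: set, threshold: int) -> bool:
    """Recovery succeeds once enough distinct guardians approve."""
    return len(approvals & guardians) >= threshold

guardians = {"phone", "hardware_key", "trusted_friend"}

# One approval is not enough for a 2-of-3 policy...
assert not can_recover({"phone"}, guardians, threshold=2)
# ...but two distinct guardians can restore access, no seed phrase needed.
assert can_recover({"phone", "trusted_friend"}, guardians, threshold=2)
# Approvals from unknown parties never count toward the threshold.
assert not can_recover({"attacker_a", "attacker_b"}, guardians, threshold=2)
```

Because the rule lives in a contract rather than a key, a wallet can ship any recovery policy it likes — thresholds, waiting periods, or a mix of devices and people.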

Smart accounts represent a fundamental shift in how we interact with blockchains. They replace the “all-or-nothing” key model with programmable, flexible, and user-focused design. Major wallets like Safe, Argent, and Braavos are already leading the way, and infrastructure from providers like Stackup and Biconomy is making it easier for developers to integrate these features.

We’re moving beyond the era of the seed phrase. The future of Web3 wallets is smart, secure, and designed for everyone.

How Smart Accounts Are Reinventing The Web3 Wallet was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


liminal (was OWI)

Turning Competitive Intelligence into Messaging That Wins (with examples)

Why competitive intelligence often fails in messaging

I’ve seen it firsthand: competitor battlecards stacking up in shared drives, analyst PDFs collecting dust, and persona research tucked into charts that never see daylight. It’s easy to feel overwhelmed by the noise and unsure where to even start. I’ve been there more times than I’d like to admit. The problem isn’t a lack of data; it’s the ability to digest it and translate it into messaging that actually differentiates. Without that step, teams fall back on the same empty claims: “innovative,” “customer-first,” “the most trusted.” Buyers tune it out. Competitive intelligence only works when it becomes narrative. The raw material exists: persona insights, competitor positioning, feature data – but without the right framework, it collapses into noise. In fact, 83% of B2B buyers now expect personalization on par with consumer experiences, which means vague promises no longer earn attention.

The three pillars of messaging that stand out

Buyer persona insights

Great messaging doesn’t start with features; it starts with people. Early on, I wrote messaging as if “the buyer” was a monolith. It fell flat. A CMO trying to differentiate a brand doesn’t think like a sales leader trying to speed up onboarding. Persona-based marketing insights can surface those distinctions, but the job of messaging is to speak to those specific goals and pain points, not to the broadest common denominator.

Competitor messaging & positioning

Copycat messaging is the silent killer of differentiation. Throw the first stone if you’ve never obsessed over a competitor’s launch while paying too little attention to how they positioned their value. Competitive benchmarking is useful, but not if it leads you to recycle the same message with a “we do it better” twist. The real win comes from understanding where you truly differentiate and telling the story of why that matters in the first place.

Feature differentiation that resonates

I used to think listing every capability would convince buyers, but it never did. Features only matter when they connect to buyer outcomes that feel tangible. In fraud prevention, that might mean reducing chargeback losses by 40%. In cybersecurity, it might mean cutting breach detection time in half. The point is not to list what your product does but to anchor why it matters in the buyer’s world, and only nerd out about the specifics once you have their undivided attention.

Generic vs persona-informed messaging

To show the difference, here’s a snapshot of how messaging shifts when intelligence is applied. Generic copy focuses on features and broad claims, while persona-informed messaging uses ICP data and persona pain points to connect with specific buyers.

Fraud Prevention
Generic message: “We help enterprises stop fraud before it happens by detecting suspicious activity, flagging risky transactions, and protecting customer accounts. Our platform is designed to keep your business safe and secure.”
Persona-informed message: “You’re responsible for revenue protection across global sales flows, which means chargebacks and payment fraud land on your desk. Teams like yours cut chargeback losses by 40% with real-time fraud alerts that protect revenue without slowing deals. Buyers expect both outcomes: silent protection and measurable margin impact.”
Persona example: VP of Sales, BDR Leader

Financial Crimes Compliance (AML/KYC)
Generic message: “We help compliance teams stay audit-ready with AML and KYC tools that reduce risk, cut down on false positives, and keep your business aligned with evolving regulations.”
Persona-informed message: “As Chief Compliance Officer, you know false positives are the hidden tax on your team. Cutting them by 50 percent means analysts focus on true risk while you stay audit-ready against FATF and DOJ scrutiny. Clients report faster SAR filing cycles and stronger exam outcomes that regulators can see.”
Persona example: Chief Compliance Officer

Cybersecurity / Threat Intelligence
Generic message: “We help enterprises stay ahead of account takeover, session hijacking, and phishing attacks with advanced detection and monitoring that safeguard sensitive data and protect customer accounts.”
Persona-informed message: “Your bottleneck probably isn’t a lack of MFA; it’s gaps in mobile session integrity and weak recovery bindings. Leading platforms now combine FIDO2 passkeys, device certificates, runtime attestation, and behavioral biometrics into a single API. Results often show 90–99% reductions in ATO flows and deployments measured in weeks, not quarters, while fitting directly into CI/CD pipelines.”
Persona example: CISO

Trust & Safety (Age Assurance, Platform Integrity)
Generic message: “We help platforms create safe online spaces by stopping fake accounts, preventing underage sign-ups, and protecting users from harmful activity. Our solution builds trust across your community.”
Persona-informed message: “You’ve grown marketplaces quickly, but fake accounts and underage signups erode trust as fast as growth builds it. Trust & Safety leaders block fraudulent accounts at scale, improving conversion while lifting NPS. Clients see measurable drops in fake account creation alongside sustained growth.”
Persona example: Head of Trust & Safety

Risk Management
Generic message: “We help companies manage third-party risk by identifying potential vulnerabilities, monitoring vendor compliance, and providing visibility across your supply chain.”
Persona-informed message: “Your mandate is to catch vendor risk before it turns into tomorrow’s crisis. Risk leaders using continuous monitoring spot supplier red flags weeks earlier. That foresight prevents compliance failures and costly breaches that would otherwise reach the boardroom.”
Persona example: CRO, Risk Manager

This table turns the theory into practice: with competitive intelligence in play, messaging shifts from broad and forgettable to precise and compelling.

The challenge, of course, is scale. Tailoring a handful of persona-informed messages is one thing. Refreshing them continuously across dozens of campaigns, competitors, and markets is another. That’s where AI-enhanced intelligence platforms become indispensable. By monitoring live market signals, competitor narratives, and persona insights, AI can help us surface fresh message updates, stress-test positioning, and keep playbooks aligned with the market, so teams never slip back into generic messaging.

A framework for refreshing messaging without reinventing the wheel

High-performing teams do not wait for annual off-sites to rethink their messaging. They run refreshes as an ongoing discipline. So, how do we actually keep messaging fresh without burning cycles? Here is a practical process that has worked for us:

Collect signals continuously – competitor launches, persona survey data, market shifts.
Map signals to differentiation – identify where buyer priorities intersect with unique strengths.
Stress-test narratives – run them through sales conversations, campaign pilots, and post-call analytics.
Refresh, don’t rewrite – evolve messaging every few weeks, not every few quarters.

The result is messaging that stays alive, tuned to the market, and sharper than the competition.

How leading teams operationalize competitive intelligence

It’s one thing to know the process, another to make it work at scale. The best GTM teams operationalize competitive intelligence through three capabilities:

1. Always-on market signals

Static PDFs cannot keep up with dynamic markets. Teams that win track real-time signals such as funding rounds, regulatory shifts, and competitor campaigns, and feed them straight into campaign planning.

2. Persona-level insights at scale

Instead of treating personas as theater, leading teams embed real-time buyer insights into campaigns and sales workflows. Every refresh reflects what buyers are actually thinking now, not last year.

3. Embedded intelligence in workflows

Intelligence only works if it lives where teams work: Slack alerts pushing industry shifts in real time, SEO content built on market truth, email campaigns aligned with buyer signals, and sales calls armed with live AI intelligence. Intelligence becomes actionable in the moment, not theoretical.

The challenge of messaging in niche markets

As adoption grows, so does the data: companies using competitive intelligence report a 15% boost in revenue growth. Platforms like Link are built to deliver these capabilities, from event monitoring and perpetual surveys to dynamic playbooks and post-call analytics. The real challenge is not more data, but the right data — intelligence that is specific enough to your market to make messaging credible and differentiated.

And this is where it gets tricky in niche markets. Sure, we can create a neat competitive battlecard, but what do we actually put on it if we don’t understand how the ICP is behaving in the real world? We can send a well-designed email, but if the target is a cybersecurity leader, they might care more about an upcoming TPRM webinar than a case study from the banking sector. The reality is that without specific, contextual intelligence and the right segmentation, even polished campaigns miss the mark.

At the end of the day, buyers don’t want platitudes; they want proof. In specialized markets, the cost of undifferentiated messaging isn’t just lost deals, it’s lost trust and stalled growth.

Key Takeaways

Competitive intelligence fails when it sits in decks and PDFs. It only creates value when it fuels differentiated narratives buyers actually hear.
Messaging that stands out comes from three things: persona insights, competitor positioning, and outcomes buyers can measure.
Refreshing messaging is not a one-off exercise. The teams that win treat it as an ongoing discipline.
Intelligence has to live where teams work: in Slack alerts, sales calls, campaigns, and content, so it becomes actionable in the moment.
In niche markets, buyers don’t want platitudes, they want proof. Miss that, and you lose both deals and trust.

The post Turning Competitive Intelligence into Messaging That Wins (with examples) appeared first on Liminal.co.


Spherical Cow Consulting

Who Really Pays When AI Agents Run Wild? Incentives, Identity, and the Hidden Bill


“Google recently gave us something we’ve been waiting on for years: hard numbers on how much energy an AI prompt uses.”

According to their report, the median Gemini prompt consumes just 0.24 watt-hours of electricity — roughly running a microwave for a second — along with a few drops of water for cooling.

On its face, that sounds almost negligible. But the real story isn’t the number itself. It’s about incentives: who benefits, who pays, and how those dynamics shape how we deploy AI.
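To see why scale, not the per-prompt figure, is the story, a quick back-of-the-envelope calculation helps. The daily prompt volume below is an illustrative assumption, not a reported figure; only the 0.24 Wh per median prompt comes from Google's report.

```python
# Back-of-the-envelope: Google's reported 0.24 Wh per median Gemini
# prompt, multiplied across an assumed enterprise-scale agent fleet.

WH_PER_PROMPT = 0.24          # reported median, watt-hours
prompts_per_day = 10_000_000  # assumed fleet-wide volume (illustrative)

kwh_per_day = WH_PER_PROMPT * prompts_per_day / 1_000
print(f"{kwh_per_day:,.0f} kWh/day")  # 2,400 kWh/day

# Per prompt it is a microwave-second; per fleet it is a line item
# a sustainability report has to account for.
```

Swap in your own volume estimate and the point stands: negligible units compound into reportable totals.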


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

A history lesson from the cloud

To understand how incentives can blindside us, let’s revisit the cloud computing boom. You remember that, right? “Save all the money! Get rid of your datacenter! Cloud computing ftw!”

In 2021, Sarah Wang and Martin Casado of Andreessen Horowitz published “The Cost of Cloud: A Trillion-Dollar Paradox.” They showed how cloud services, while indispensable for speed and agility, became a drag on profitability at scale. Dropbox famously repatriated workloads back from public cloud and saved $75 million over two years — a shift that doubled their gross margins from 33% to 67%. CrowdStrike and Zscaler adopted hybrid approaches for similar reasons.

The takeaway: Early incentives reward adoption. But when the bills grow large enough, cost discipline suddenly becomes a board-level issue. By then, inefficiency is already baked into operations.

AI energy use is following the same arc. Vendors and enterprises alike are celebrating adoption, but the hidden costs are waiting to surface.

The incentives for vendors

AI vendors want mass adoption, and their incentives reflect that. They’ll emphasize efficiency gains — like Gemini’s 33-fold reduction in energy per query from 2024 to 2025, according to their recent report — but those are selective disclosures.

As the MIT Tech Review story “In a first, Google has released data on how much energy an AI prompt uses” pointed out, disclosures become marketing tools without standardized metrics. Vendors reveal what flatters them, not necessarily what helps customers make better choices.

And the race to ship bigger, more capable models only deepens this misalignment. Scale brings revenue. The energy, water, and carbon costs? Those are someone else’s problem.

The incentives for enterprises

Enterprises often don’t see the full picture either. A cloud invoice hides the per-prompt costs. IAM and security teams grant permissions to agents, but they don’t own the sustainability budget. Sustainability teams, meanwhile, don’t have visibility into permissions and entitlements.

The result: over-provisioning goes unnoticed. AI agents are allowed to “just run,” and every permissioned action quietly consumes resources. Those costs add up, but they land in someone else’s ledger, often long after the decisions were made.

This is the same organizational mismatch cloud adoption created: IT ops pays the bill, developers get the flexibility, and the CFO finds out later. AI is just the next chapter.

Incentives and regulation

Here’s where things start to change. Environmental, Social, and Governance (ESG) reporting isn’t optional anymore; regulators are giving incentives real teeth.

United States: The SEC’s new climate disclosure rule requires large public companies to report greenhouse gas emissions. Failure to comply has already resulted in multimillion-dollar fines for ESG misstatements, like Deutsche Bank’s $19M settlement.
Europe: The EU’s Corporate Sustainability Reporting Directive (CSRD) sets steep penalties. In Germany, fines can reach €10 million or 5% of turnover. In France, executives risk prison time for obstructing disclosures.
Australia: Directors must certify sustainability data as part of financial filings. Failure to comply can trigger civil penalties in the hundreds of millions, with individuals personally liable for up to AUD 1.565 million.

None of this is about fearmongering. (OK, maybe it’s a little bit of fearmongering in the hope of catching your attention.) It’s also a reality. Boards are now directly accountable for climate and resource disclosures. AI usage may feel “small” at the per-prompt level, but at enterprise scale, it becomes part of that regulatory picture.

Where identity comes in

So where does identity fit?

Every AI-agent action isn’t just a governance event; it’s also a consumption event. Permissions are no longer just about who can do what. They’re also about what we’re willing to pay, financially and environmentally, for them to do it.

Standing access matters here, too. A human user with unused entitlements is a risk; an AI with broad entitlements is a resource leak. It will happily keep churning until someone tells it to stop — and by then the costs have already piled up.

Imagine if your audit logs evolved to show not just “who accessed what,” but “how much energy and water those actions consumed.” It sounds futuristic, but sustainability reporting is heading in that direction. IAM teams may find themselves pulled into ESG conversations whether they want to be or not.

Runtime governance as sustainability

Earlier, I argued that runtime governance is essential when AIs can act faster than human oversight cycles. Here’s the sustainability angle: runtime checks can throttle not just security risks, but waste.

Deny agents the ability to hammer a system with brute-force permutations.
Flag actions that consume far more resources than typical queries.
Revoke unnecessary entitlements before they become both a risk and an expense.

Governance is shifting from “is this allowed?” to “is this worth it?”
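That shift can be sketched as a runtime policy check that weighs an action's estimated resource cost against a per-agent budget. The names and the watt-hour accounting here are illustrative, not a real IAM API.

```python
from dataclasses import dataclass

@dataclass
class AgentBudget:
    remaining_wh: float  # watt-hours this agent may still consume

def authorize(permitted: bool, estimated_wh: float, budget: AgentBudget) -> bool:
    """Allow an action only if it is both permitted and worth its cost."""
    if not permitted:
        return False                      # classic "is this allowed?"
    if estimated_wh > budget.remaining_wh:
        return False                      # new check: "is this worth it?"
    budget.remaining_wh -= estimated_wh   # meter consumption as we go
    return True

budget = AgentBudget(remaining_wh=1.0)
assert authorize(True, 0.24, budget)       # a cheap query passes
assert not authorize(True, 5.0, budget)    # a wasteful one is throttled
assert not authorize(False, 0.01, budget)  # permissions still gate everything
```

The same pattern extends naturally to rate limits, per-action cost flags, and automatic entitlement expiry.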

Bridging past lessons with today’s challenges

The hidden costs of the cloud were supposed to teach us that efficiency ignored eventually becomes inefficiency entrenched. I’m not convinced people and organizations have learned that lesson, but regardless, AI is repeating that story, with energy, water, and carbon as the currencies.

Like cloud spend, AI resource usage may start small, but it scales faster than oversight cycles. And when regulations demand transparency, boards will want answers.

Identity leaders are uniquely positioned here. Permissions are the gate between an agent’s intent and its actions. Expanding the governance lens to include consumption could help organizations stay ahead of both the bills and the regulators.

Putting it together

So let’s put this together:

Vendors are incentivized by adoption and scale, not efficiency.
Enterprises have silos that hide true costs.
Regulators are introducing real penalties for climate and resource misstatements.
Identity teams are sitting at the chokepoint, granting permissions that double as consumption choices.

The shift isn’t about turning identity professionals into sustainability officers. It’s about recognizing that incentives travel with permissions. And when permissions scale through AI, the hidden costs travel with them.

So here’s my question for you: have you seen incentives around AI use in your organization, good or bad? And if so, how did those incentives shape the choices your teams made?

Because incentives aren’t just a policy issue or a compliance box. They’re the difference between governance you can explain to your board and governance you only notice when the bill or the fine arrives.

If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

[00:00:29] Hi everyone, and welcome back to A Digital Identity Digest. I’m Heather Flanagan, and today we’re going to talk about something that’s only just starting to make the headlines: what happens when AI agents run wild—and who actually ends up footing the bill.

Spoiler alert: it’s probably not the vendors themselves, and it’s probably not who you think inside your own organizations either.

[00:00:53] In this episode, we’ll explore:

The incentives driving AI adoption
The role of identity in hidden costs
The growing regulatory landscape around sustainability

Setting the Stage

[00:01:04] What inspired today’s conversation is a recent Google report that finally revealed some long-awaited data: how much energy a single AI prompt consumes.

[00:01:20] Their findings? The median Gemini prompt uses about 0.24 watt hours of electricity.

[00:01:28] To put it in perspective:

That’s like running your microwave for one second, plus a few drops of water for cooling. At first glance, it seems tiny. But at scale, millions of these “drops in the ocean” can eventually flood entire continents.

[00:01:46] The real story isn’t about that single number. Instead, it’s about the incentives behind those numbers—who benefits, who pays, and how those dynamics shape AI deployment.

Lessons from the Cloud

[00:01:57] To understand today’s AI landscape, let’s rewind to the early days of cloud computing. Remember the pitch? “Save money, get rid of your data center—cloud computing for the win.”

[00:02:20] But by 2021, Sarah Wang and Martin Casado at Andreessen Horowitz highlighted the Trillion Dollar Paradox:

Cloud was amazing for speed and agility. Yet at scale, it dragged on profitability.

[00:02:30] Dropbox learned this firsthand, repatriating workloads from the public cloud and saving $75 million over two years—doubling their margins in the process.

[00:02:51] The key lesson? Early incentives reward adoption. But once costs balloon, discipline becomes a board-level issue.

[00:03:10] AI is following the same arc. We’re in the “woohoo adoption” phase now, but hidden costs are waiting to catch up.

Vendor Incentives

[00:03:24] Let’s start with the incentives for LLM vendors. These are crystal clear: encourage mass adoption.

[00:03:33] Vendors emphasize efficiency gains. Google bragged about a 33-fold reduction in energy per query between 2024 and 2025.

[00:03:43] Sounds impressive. But disclosures are:

Not standardized
Highly selective
Designed to flatter the vendor, not inform customers

[00:03:53] Meanwhile, the race for bigger, flashier, more capable models continues. The revenue comes in, but the energy, water, and carbon costs are left as someone else’s problem.

Enterprise Incentives

[00:04:09] For enterprises, the picture is murkier. Why? Because:

Cloud invoices hide the per prompt cost. IAM and security teams grant permissions but don’t own the sustainability budget. Sustainability teams lack visibility into entitlements.

[00:04:34] The result?

Over-provisioning goes unnoticed. AI agents run unchecked. Bills land on someone’s desk long after the fact—often someone who had no say in granting permissions.

[00:04:58] This is déjà vu from the cloud era. Ops pays the bill, developers enjoy flexibility, and the CFO discovers the hit too late.

Regulators Enter the Chat

[00:05:03] Unlike the early cloud days, regulators are already watching. ESG (Environmental, Social, and Governance) reporting is now mandatory in many regions.

[00:05:15] Examples include:

United States: SEC Climate Disclosure Rule, with fines already issued (e.g., Deutsche Bank’s $19M settlement).
Europe: Corporate Sustainability Reporting Directive (CSRD), with penalties up to €10 million or 5% of turnover.
France: Executives can face prison time for obstructing disclosures.
Australia: Civil penalties can reach hundreds of millions, with directors personally liable.

[00:06:20] This isn’t fearmongering—it’s reality. Boards are accountable, and one AI prompt may seem trivial, but multiplied across millions of queries, it becomes a regulatory reporting item.

Where Identity Comes In

[00:06:38] Every AI agent action is more than a governance event—it’s also a consumption event.

Permissions = not just who can do what, but what we’re willing to pay financially and environmentally. An unused human entitlement is a risk. An AI with broad entitlements is a resource leak that runs until stopped.

[00:07:15] Imagine if audit logs didn’t just say who accessed what, but also recorded how much energy and water were consumed.

[00:07:24] That may sound futuristic, but sustainability reporting is moving that way. IAM teams could soon be pulled into ESG discussions—whether they feel it’s their role or not.

Governance Shifts

[00:07:37] Governance isn’t just about security anymore. With AI, it’s about balancing risk and resource consumption.

Runtime checks can throttle wasteful AI actions. Agents can be denied brute-force or high-cost queries. Entitlements can be revoked before they pile up into risks—or expenses.

[00:08:07] Governance now asks not only “Is this allowed?” but also “Is this worth it?”
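As a sketch of how those two questions could sit side by side in code: the agent names, entitlements, and budget figures below are entirely hypothetical, and a real deployment would pull them from a policy engine rather than in-memory dictionaries.

```python
# Illustrative policy check: an action must be both permitted and worth it.
ENTITLEMENTS = {"agent:report-summarizer": {"read:internal-docs"}}
BUDGET_USD = {"agent:report-summarizer": 5.00}   # assumed daily spend ceiling
SPEND_USD = {"agent:report-summarizer": 4.90}    # assumed spend so far today

def authorize(agent: str, permission: str, est_cost_usd: float) -> bool:
    """Grant only if the agent holds the permission AND stays within budget."""
    allowed = permission in ENTITLEMENTS.get(agent, set())        # "Is this allowed?"
    worth_it = (SPEND_USD.get(agent, 0.0) + est_cost_usd
                <= BUDGET_USD.get(agent, 0.0))                    # "Is this worth it?"
    return allowed and worth_it

print(authorize("agent:report-summarizer", "read:internal-docs", 0.05))  # within budget
print(authorize("agent:report-summarizer", "read:internal-docs", 0.50))  # over budget
```

The point is the shape of the decision, not the numbers: permission checks and consumption checks can share a single choke point.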

History Repeats Itself

[00:08:14] Cloud should have taught us that ignored inefficiency becomes entrenched inefficiency. Once it’s embedded in infrastructure, it’s painfully hard to extract.

[00:08:38] AI is repeating that story—with water, energy, and carbon as the new currencies.

[00:08:54] When regulators demand transparency, boards will expect clear, defensible answers. And that’s where identity leaders can step up.

[00:09:01] Permissions sit at the choke point between agent intent and agent action. Expanding governance to include consumption metrics gives organizations a head start on both the bills and regulatory scrutiny.

Bringing It All Together

[00:09:16] To recap:

Vendors chase adoption and scale, not efficiency.
Enterprises operate in silos that hide true costs.
Regulators are introducing significant penalties for ESG misstatements.
Identity teams control permissions, which now double as consumption risks.

[00:09:41] IAM professionals don’t need to become sustainability officers. But they must recognize that incentives travel with permissions—and when AI scales, costs scale too.

[00:09:57] So here’s the key question:
Have you seen incentives around AI use in your organization—good or bad? And how are those incentives shaping your team’s decisions?

Because incentives aren’t just about compliance checkboxes. They’re the difference between proactive governance you can explain to your board and reactive governance you only notice when the bill—or the fine—lands on your desk.

Closing Thoughts

[00:10:23] That’s it for this episode of A Digital Identity Digest. If you found it useful, subscribe to the podcast or visit the written blog at sphericalcowconsulting.com for reference links.

[00:10:45] If this episode brought clarity—or at least sparked curiosity—share it with a colleague and connect with me on LinkedIn at lflanagan. Don’t forget to subscribe and leave a review on Apple Podcasts or wherever you listen.

Stay curious, stay engaged, and let’s keep these conversations going.

The post Who Really Pays When AI Agents Run Wild? Incentives, Identity, and the Hidden Bill appeared first on Spherical Cow Consulting.


iComply Investor Services Inc.

AML in Real Estate: Source of Funds, Identity, and Global Risk Controls

From complex ownership to offshore funding, real estate is high-risk for money laundering. This guide shows how iComply helps brokers, lawyers, and lenders simplify AML compliance across jurisdictions.

Real estate professionals face rising AML scrutiny across markets. This article breaks down identity verification, source of funds, and beneficial ownership rules in the U.S., Canada, UK, EU, and Australia – and shows how iComply helps automate compliance across agents, lawyers, and lenders.

Real estate is a prime target for financial crime. High-value transactions, opaque ownership structures, and limited oversight have made the sector vulnerable to money laundering worldwide.

From regulators to investigative journalists, scrutiny is intensifying and compliance expectations are evolving. Brokers, lawyers, developers, mortgage professionals, and title companies all have a role to play.

Shifting AML Expectations in Real Estate

United States
Regulators: FinCEN, state real estate commissions
Requirements: Geographic targeting orders (GTOs), beneficial ownership reporting (CTA), SARs, and KYC for buyers and entities

Canada
Regulators: FINTRAC, provincial real estate councils
Requirements: KYC, source of funds verification, PEP/sanctions screening, STRs, and compliance program requirements (as reinforced by the Cullen Commission)

United Kingdom
Regulators: HMRC, FCA (for lenders), SRA (for law firms)
Requirements: Client due diligence, UBO checks, transaction monitoring, and compliance under MLR 2017

European Union
Regulators: National AML authorities under AMLD6
Requirements: Risk-based customer due diligence, UBO transparency, STRs, and GDPR-aligned reporting

Australia
Regulator: AUSTRAC (legislation pending for real estate-specific coverage)
Requirements: AML risk management for law firms, lenders, and trust accounts; expected expansion to include property professionals

Real Estate-Specific Risk Factors

1. Complex Ownership Structures
Use of shell companies, nominees, and trusts can obscure true buyers.

2. Source of Funds Obscurity
Large cash deposits or offshore funding require enhanced scrutiny.

3. Multi-Party Transactions
Buyers, sellers, agents, lawyers, lenders, and developers often use disconnected systems.

4. Regulatory Patchwork
Requirements vary by jurisdiction and professional role.

How iComply Helps Real Estate Professionals Stay Compliant

iComply enables unified compliance across real estate workflows—from individual onboarding to multi-party coordination.

1. Identity and Entity Verification
KYC/KYB onboarding via secure, white-labeled portals
Support for 14,000+ ID types in 195 countries
UBO discovery and documentation

2. Source of Funds Checks
Collect and validate financial statements, employment records, or declarations
Risk-based automation of EDD triggers
Document retention for regulator inspection

3. Sanctions and Risk Screening
Real-time screening of all participants (buyers, sellers, brokers, law firms)
Automated refresh cycles and trigger alerts

4. Cross-Party Case Collaboration
Connect agents, legal counsel, and lenders in a single audit-ready file
Assign roles, track tasks, and escalate within shared dashboards

5. Data Residency and Privacy Compliance
Edge computing ensures PII is encrypted before upload
Compliant with PIPEDA, GDPR, and U.S. state laws
On-premise or cloud deployment options

Case Insight: Vancouver Brokerage

A Canadian real estate firm used iComply to digitize ID checks and SoF verification for domestic and foreign buyers:

Reduced onboarding time by 65%
Flagged two nominee structures linked to offshore trusts
Passed a FINTRAC audit with zero deficiencies

Final Take

Real estate professionals can no longer afford fragmented compliance. With global pressure mounting, smart automation ensures faster onboarding, better oversight, and fewer audit risks.

Talk to iComply to learn how we help brokers, lawyers, and lenders unify AML workflows – without slowing down the deal.


PingTalk

Accelerating Financial Service Innovation With Identity-Powered Open Banking in the Americas

Explore how financial institutions across the Americas are using open banking and identity-powered APIs to drive innovation, enhance security, and deliver personalized customer experiences.

Open banking is rapidly becoming a critical plank of digital innovation in the financial services industry across both North and South America. Whether driven by regulation, market innovation, or consumer demand, the financial industry across both continents is increasingly embracing a standards-based, application programming interface (API)-first mindset in a bid to accelerate hyper-personalization, trust-based relationships, and value upsell.


While digital challengers continue to capture digitally savvy customers, incumbent providers are scrambling to meet the increasing demand for seamless, customer-centric experiences in a bid to maintain competitiveness. What might come as a surprise is that this paradigm shift is underpinned by technical standards that govern financial-grade APIs (FAPIs) interacting with enterprise-grade identity and access management (IAM).


The battle for market share in North and South American banking, and indeed the wider financial services industry, will hinge on the degree to which financial service providers embrace these technologies and industry standards and leverage underlying investments to deliver differentiated customer experiences.



FastID

Teach Your robots.txt a New Trick (for AI)

Control how AI bots like Google-Extended and Applebot-Extended use your website content for training. Update your robots.txt file with simple Disallow rules.
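The rules the post describes are short. Google-Extended and Applebot-Extended are the published user-agent tokens for opting content out of AI training; the blanket Disallow below excludes the entire site, so narrow the path if only part of your content should be withheld (a sketch, not the post’s exact example):

```
# Opt the whole site out of AI training crawls,
# without affecting normal search indexing.
User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```

Note that these tokens control use of content for AI training; regular Googlebot and Applebot crawling for search is governed by their own user-agent rules.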

Monday, 15. September 2025

Dark Matter Labs

What’s guiding our Regenerative Futures?

Expanding our view toward six guiding principles for regenerative practice. Image: Dark Matter Labs. Adapted from Jan Konietzko, ‘Carbon Tunnel Vision’. Possibilities for the Built Environment, part 1 of 3

This is the first in a series of three provocations, which mark the culmination of a collaborative effort between Dark Matter Labs and Bauhaus Earth to consider a regenerative future for the built environment as part of the ReBuilt project.

In this publication, we lay out the historical, professional and theoretical context for the contemporary push toward regenerative practice, and offer six guiding principles for a regenerative built environment, looking beyond profit tunnel-vision. In the second and third pieces, we propose pathways, configurations and indicators of the transformation our team envisions.

What isn’t regenerative? Debunking a misconception

When it was completed in 2014, Bosco Verticale, a pair of residential towers in Milan’s Porta Nuova district, was celebrated as an example of leading-edge regenerative building design for the 800 or so trees cascading from its balconies. In describing the project, its architect Stefano Boeri sketches the figure of the “biological architect”, who is driven by biophilia and prizes sustainability above other design concerns. Praise for Bosco Verticale, in the architectural press and beyond, implies that the development’s vegetal adornments represent a meaningful substitution of traditional building materials with bio-based ones, and further that measures supporting biodiversity constitute climate-positive architecture.

The list of green credentials associated with the project ignores other characteristics of Bosco Verticale that don’t align with this vision. The steel-reinforced concrete structure was designed with unusually substantial 28cm deep slabs to support the vegetation’s weight (which totals an estimated 675 metric tons) and associated dynamic loads. Considering that this slab depth is about twice that of comparable buildings without the green facade, the embodied carbon associated with the project’s 30,000m² floor slabs alone is approximately double that of a standard building.
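The doubling claim follows from simple geometry, since embodied carbon in the slabs scales with concrete volume. A rough back-of-envelope check: the 0.14 m comparison depth comes from the text’s “about twice” figure, and the carbon factor of roughly 300 kgCO2e per cubic metre of reinforced concrete is an assumed, typical value, not a figure from the source.

```python
floor_area_m2 = 30_000      # total slab area cited for Bosco Verticale
slab_depth_m = 0.28         # as-built depth to carry the planted facade
standard_depth_m = 0.14     # "about twice" a comparable building's slabs
carbon_factor = 300         # assumed kgCO2e per m3 of reinforced concrete

def slab_carbon_t(depth_m: float) -> float:
    """Embodied carbon of the floor slabs, in tonnes CO2e."""
    return floor_area_m2 * depth_m * carbon_factor / 1000

print(slab_carbon_t(slab_depth_m))      # as-built slabs
print(slab_carbon_t(standard_depth_m))  # standard-depth slabs
```

Whatever carbon factor one assumes, the ratio between the two results is fixed by the slab depths, which is why the "approximately double" conclusion is robust.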

In tandem, an existing workspace for local artists and artisans based in a former industrial building was demolished to make space for the premium residential units accessible only to the few. Although a replacement workspace was eventually built nearby, the structure’s regenerative aspirations are weighed down by profound contradictions beneath the leafy surface.

Certainly, Bosco Verticale is significant as an exceptional investment in urban greening on the part of the developer, and as a leading-edge demonstration of innovations that enhance the multiple benefits of green infrastructure. It also contributed to the viability of future developments that carry urban greening discourse into new geographies: copy-cat schemes have been built in East Asia and elsewhere. However, it’s clear that Bosco Verticale fails to stand up to a holistic consideration of what regenerative building looks like. Many voices overlooked the social and material impacts of the project, dazzled instead by the urban greening.

Puzzle pieces of the regenerative

In recent years, societies worldwide have become familiar with weather events and political shifts that were unprecedented or previously unthinkable. Six of the nine planetary boundaries that demarcate the safe operating space for humanity were crossed as of 2023. There is now a strong case for the idea that our entangled human and planetary systems exist in a state of polycrisis. Bearing this in mind, what do we mean when we refer to a built environment that is regenerative?

This piece aims to add nuance and system-scale perspective to our working definitions. As examples like Bosco Verticale show, it’s possible to be green in the public eye while counteracting what is regenerative. Perhaps we need new methods to help us understand:

How long a building will last,
How its materials will be stewarded,
Whether it is built in a context that enables low-carbon living,
And what its end of life might involve.

System-scale perspective is needed because the built environment cannot be disentangled from systemic needs like the demand for affordable housing and the reality of physical, material constraints. Although we do need initial demonstrations to spark change, a single, locally-sourced timber building constructed with ethical labour does not define wholly regenerative practice in itself.

What is regenerative?

Regenerative is the term of the moment, yet it remains loosely defined in public discourse: we rely on examples, implicit understandings, and theoretical frameworks to give it meaning. How, then, is it used in particular contexts?

Beyond ‘green’

Regeneration refers to approaches that seek to balance human and natural systems, allowing for coexistence, repair and self-regulation over time.

The regenerative paradigm seeks to look beyond what’s merely ‘green’, and to do net good. A broader lineage of thinking around the term spans agriculture, biology and ecology, medicine, urbanism and design: disciplines and industries that connect to the health and wellbeing of biomes, bodies and buildings. Variation in definition can be observed in different contexts, sectors and aims.

‘Regenerative’: a brief history of the term
The term regenerative began to gain traction from the 1980s in fields including agriculture and development, outlining a new paradigm. The Rodale Institute in the US popularised the term ‘regenerative agriculture’ to describe farming systems that go beyond sustainability by improving soil, biodiversity and ecosystem health. The practices invoked are ancient, with precedents across the globe, and rooted in Indigenous land management. However, this specific application of the term ‘regenerative’ articulated an emergent attitude in this period that focused on renewal and improvement of ecological and social systems. The Rodale Institute advanced this concept through research, advocacy, farmer training, publications and consumer education geared toward regenerative organic agriculture, laying the groundwork for its integration into mainstream agricultural discourse and into other disciplines.
From the early 2000s, the work of Bill Reed and the Regenesis Institute for Regenerative Practice has anchored the application of regeneration to design fields and the built environment in particular. With a focus on ecosystem renewal and coevolution of human and natural systems, Reed’s framework implies that regenerative design goes beyond sustainability by restoring and renewing ecosystems, integrating humans and nature in a symbiotic relationship. Expanding this idea beyond ecology, many architects and urbanists have adapted Reed’s model to their own corners of their fields, looking for design that doesn’t simply do less harm, but does more good. Bauhaus Earth maps Reed’s familiar bowtie-shaped diagram onto four basic categories for the built environment: from conventional, to green, to restorative and finally regenerative–that which has the greatest positive environmental and social impact.
Across applications, several elements of a core meaning of what is regenerative exist: a focus on supporting systems of different scales to recover from loss, to take on new life, to grow responsively. The evocative nature of this idea, easily applied across different disciplines, has inspired a range of permutations and schools of thought.
Other key references on the regenerative:
1 Regenerative Development, Regenesis Group, 2016.
2 Regenerative Development and Design, Bill Reed and Pamela Mang, 2012.
3 Shifting from ‘sustainability’ to regeneration, Bill Reed, 2007.
4 Towards a regenerative paradigm for the built environment, Chrisna du Plessis, 2011.
5 Doughnut for Urban Development, Home.Earth, 2023.
6 The Regenerative Design Reading List, Constructivist, 2024.
Image: Bauhaus Earth, adapted from Bill Reed’s ‘Trajectory of Ecological Design’

The term’s uses have gained traction and proliferated within the particular historical context of the last half-century, during which concepts like the Anthropocene and the full extent of human impact on the planet have been evidenced. As technology has deepened our understanding of the ways in which humanity has degraded our environments — at scales from the cellular to whole earth systems — so too has our desire grown for models that point to possible ways to repair this damage. Conceptualising the regenerative across scales and disciplines opens the door to alternative futures in which planetary demise at the hands of humans is not inevitable. The application of the core elements of regenerative theory to fields like architecture has spurred a range of generative and planet-benefitting practices. However, these individual actions, and even the rise of the sustainability paradigm across design fields, cannot override the prevailing limitations of capitalism that continue to increase rates of extraction, social inequality and environmental degradation. As it stands, regenerative approaches continue to be exceptions working against the odds.

The main limitation: political economy

These frameworks were written within academic and industrial contexts, largely from the perspective of wealthy Western nations. While regenerative thinking has inspired thinkers across the planet and across fields, attempts to translate these concepts to a global political-economic scale fail to account for deep-seated inequalities. We are limited by the systems and power imbalances in which we’re working. Capitalism, in particular, compounds these blindspots, limiting attempts to translate regenerative thinking into other spaces such as the built environment. As such, while trailblazing organisations, communities and individuals are offering proofs of possibility in regenerative infrastructure and urbanism, these remain exceptional cases. It is not yet evident how these ideas can be instantiated at scale to benefit all people and meaningfully address systemic inequalities.

The role for and responsibilities of professionals

The interconnected challenges of this moment invoke new layers of complexity. But if professionals can’t understand or deploy the idea of regeneration, then it won’t guide their decisions and actions.

Extractive activities led by the industrialised global North continue to irreversibly alter our planet at pace, while the transition to renewable energy will involve even higher rates of extraction of critical minerals than those of today. As such, the earth’s systems’ ability to regenerate is stressed more than ever. The built environment, with its outsized responsibility for global carbon emissions associated with construction, building operations and demolition, must admit these impacts and face up to its epoch-defining responsibility. So how do we get off the one-way road of identifying problems without solutions?

There is a separation between perceived responsibility and power in today’s professional landscape. This moment necessitates a shift from individual to collective agency in taking on advocacy for the regenerative potential of the built environment.

Imagine this: you are an architect today, trying to answer the client’s brief by maximising the use of responsibly-sourced bio-based materials, embedding social justice in your design processes and objectives, and considering carbon-storage potential and place stewardship for future generations, while accepting that your brief is to create market-rate apartments. This is nearly impossible in the context of today’s imperative to maximise profits and commodify housing. Architects in the current professional environment are profoundly limited in means to meaningfully address these intersecting priorities, whether one at a time or in concert. Our current economic system simply does not position architects to be the core innovators, as much as Stefano Boeri’s reflections on the Bosco Verticale boast otherwise.

These professional limitations are an indirect signal of the political economy of real estate development and the power relations underpinning the construction industry. Only a systemic shift can address the limitations facing individuals operating within a design scope. To genuinely take on the intersections of ecology, social justice and the built environment, architects need to see their work in all its entanglement with broader political, economic and social forces, using the tools of the profession, bolstered by connections with aligned collaborators, and their collective power to dismantle the systems of power that limit transformation across scales.

We’re orienting ourselves toward a future in which there is more latitude for these crucial priorities to be addressed. This future will hold an altered scope for decisions made by architects and other built environment professionals in the course of development processes, and a transition to a regenerative built environment driven by collective commitment.

A growing field: precedents and trailblazers

A range of contemporary initiatives, programmes and projects aim to establish frameworks that define the idea of a regenerative built environment. Drawing on advances in circular economic thinking, growing recognition of the significance of embodied carbon alongside operational carbon in buildings, and the industry’s deepening understanding of indicators like biodiversity and water use that are tied to planetary boundaries, these programmes help experts and the general public move beyond misconceptions.

Bauhaus Earth emerged in 2021 as an initiative around the use of timber and other bio-based materials for construction and their ability to store carbon. Today, Bauhaus Earth is a research and advocacy organization dedicated to transforming the built environment into a regenerative force for ecological restoration. It brings together experts from architecture, planning, arts, science, governance, and industry to promote systemic change in construction practices.

Index of aligned enquiries

A global range of community-led and grassroots organisations focusing on the work and needs of underserved groups receive grant funding from and can be discovered via the Re:arc Institute.
Non-Extractive Architecture(s)’ directory gathers a global index of projects that rethink the relationship between human and natural landscapes, alongside questions about the role of technology and politics in future material economies. The directory is an ongoing project itself.
A range of related organisations and initiatives in the working ecosystem of Europe can be found in the table below. The range in types of these enquiries represents the broad coalition of stakeholders and types of activity that will be required to activate transformation toward a regenerative built environment.
Index of related initiatives in Europe. For links, see the end of this post.
Bio-based building materials are an important nexus of social and material relations. These materials, which bridge human and earth-based capacities for creation, urge an expanded view of stewardship. Understanding this will enable us to move past the paradigmatic dichotomy between the human and the natural that licenses humans to exploit planetary resources. Bio-based building materials were humans’ first building materials, and over millennia the practices that created the materials we work with today, most notably agricultural and Indigenous ones, have developed in concert with human civilisations and material realities. Holding these strands together, it’s evident why a sustained focus on bioregionally-sourced and bio-based materiality is crucial for a regenerative future.
For a contemporary design and research practice that focuses on this intersection of agendas, see Material Cultures.
Regeneration across time horizons: shortsightedness and the Capitalocene

As Reed’s Trajectory of Ecological Design diagram and the examples above indicate, regeneration of ecosystems and societies are continuous, open-ended processes that occur over time, at scales from the cellular, to the neighbourhood, and to the planetary. As the repair and balancing of regenerative processes have occurred in many contexts across eons, we need to understand regeneration across multiple accordant time horizons. Within this complex and extensive landscape, time horizons can act as organising units that help make sense of interconnections and nested scales of action.

In construction, key processes take place across different timescales. These range from time needed for a regenerative resource such as a forest to grow, to the lifespan of a building, to the longer time periods associated with meaningful carbon sequestration. In each of these cases, regenerative interventions involving acts of maintenance and design directly modulate the temporal register of the built environment. For example, extending a building’s lifespan through processes of care and preventing demolition impacts the future form of its locale and pushes back against the conceptualisation of buildings strictly as sources of profit within capitalist logic–that is, viewing buildings primarily in terms of their capacity to generate immediate economic returns through cycles of development, exploitation and obsolescence. By this means, it is within the medium of time that a regenerative lens on the built environment can be most revealing.

Regeneration in deep time and at the timescale of ecosystems has been disrupted by human processes. We are accustomed to the idea of the Anthropocene, an epoch initiated by the industrial revolution in which human activity has become the dominant influence on climate and the environment. However, recent discussions by Jason W Moore, Andreas Malm and others critique this concept, making the case for the Capitalocene as a more precise term. Rather than treating humanity as a homogenous force, as Anthropocene theory does, the Capitalocene examines how differences in responsibility, power and agency within societies have been compounded under capitalism, and how that system has driven ecological crisis. Moore argues that the social, economic and political processes that have shaped recent centuries, reaching back to the early modern period, provide a better basis than humanity as a whole for understanding the relationship between human activity and planetary wellbeing, and how this dynamic produces ecological crises.

This focus on the un-natural, political origins of today’s crisis makes it possible to see how shifting senses of responsibility, agency and relationships, operating against capitalist logics, are essential to developing effective pathways toward planetary regeneration. The predominant logic of the Capitalocene demands short-term profits, rising productivity, and optimisation around flawed ideas of efficiency; within it, regeneration can be mistaken for a loss, an indicator of inefficacy, a concession to the ineffable — and as such, unwarranted. This is the systemic logic that must be resisted.

The prevalence of demolition today is one example of how this systemic short-sightedness is bad for people and the planet. The UK is now facing the consequences of the widespread use of reinforced autoclaved aerated concrete (RAAC) in municipal buildings nationwide during the 1980s. With a material lifespan of only around 30 years, many hospitals and schools built with RAAC are now being demolished. Indeed, the lifespans of many of the structures most viable in our current urban development models are steadily decreasing, in spite of increasing awareness of the embodied carbon impacts of demolition.

We would do well, in looking toward a regenerative future for the built environment, to retune our time horizons. This might involve syncing carbon sequestration time with lifecycles for construction that create value over time, taking into account things like municipal land leases and emerging whole life carbon regulations. What if we had a way to see the long-term impact of decisions made today?

In this effort to hold more timescales in mind when we consider processes of regeneration, we can learn a great deal from Indigenous cultures from across the world, many of which have developed, over the course of millennia, methods and ideologies supporting the human ability to connect with scales of time beyond our species-specific and news-cycle dependent parameters. Some of these examples are evidenced in the above Index of enquiries.

Theoretical underpinnings: what constitutes a regenerative built environment?

The built environment is both a physical and a social construct: it’s not fitting in this moment of polycrisis to continue to abstract the physical materials that shelter us from the labour that built them, the livelihoods that maintain them, the design processes that make them fit for purpose, and the policies or decisions that keep them standing.

To identify ways to directly address the injustices to people and planet engendered by the Capitalocene, we need to look to the historical and political decisions that have driven the crises in housing affordability and race-based inequality that are defining features of cities today. In recent years, there has been a greater focus on how the built environment can benefit from lenses that examine the distribution of power and agency within societies, including critical theory and urban political ecology. These approaches can help us articulate how the built environment and natural resources are caught up in human struggles to meet basic needs under today’s critical conditions.

David Harvey, most notably in Social Justice and the City, points to how a purely quantitative or spatial design-based approach to understanding urban space consistently fails to engage socioeconomic phenomena like inequality and urban poverty, while arguing for the necessity of approaches that integrate the spatial with the social. Harvey’s reading, grounded in radical geography, makes clear how spatial development processes are driven by financial capital, which keeps governments, civil society, communities and individuals in predetermined roles, ill-equipped to resist the calcification of capitalised space. Recently, climate justice movements like the Climate Justice Alliance (on the grassroots side) have formed alliances with decision-makers and activists in the built environment around causes like health and buildings, retrofit poverty and feminist approaches to building, under banners like a Global Green New Deal, in which a spatialised social justice lens can be directly applied.

Harvey’s work is a key influence on urban political ecology approaches, which help us understand how cities are hybrids of natural and social processes, rejecting any dichotomy between people and nature. Similarly, Marxist political economic thinkers like Raymond Williams have pointed to how capitalism organises space and produces environmental inequalities, as analysed through multiscalar analysis, among other techniques. Through a political ecology lens, we see that developers and investors, not communities or ecological needs, shape the built environment, often through speculative real estate practices that exploit labour and resources. These critiques emphasise that urban development is driven primarily by capitalist interests, prioritising profit over social and environmental well-being, leading to inequality, displacement, and environmental degradation. Theory can support an analysis of exclusion in planning, and advocacy for participatory processes that could support socially regenerative places.

In sum, focusing exclusively on buildings misses the point that cities are fluid, open, contested multivocal landscapes. At scales from the individual building, to the neighbourhood, including infrastructure like street systems, as well as cities and regions, the built environment is a negotiation between matter, human behaviour and social systems over time.

As we look to the future, how will our urban environments be produced? Who will benefit from them? And how can we challenge the environmental injustices inherent to the systems we live in?

Guiding principles for regenerative practice Six layered principles for a regenerative built environment

Expanding our definition of what’s regenerative in the built environment calls for clear ways to speak to the material, economic and social dimensions of cities. We need ways of accessing and assessing regeneration that cut across disciplinary boundaries, invite broader participation in these conversations, and account for future risks and technological developments.

What layers and principles might expand and deepen our understanding of systemic interactions as we work toward more holistic indicators? Below are six suggestions to focus our gaze.

Time horizons and generational preparedness

Future indicators of a regenerative built environment must take a long-term view. If the built environment is to form a matrix in support of human life for generations to come, it should fundamentally be building material preparedness for the future. This means the way we measure and quantify what the built environment does ought to speak to this extended time horizon, for example by considering how much carbon is stored for three generations to come, how much of our timber is sourced in a way that will allow for replanted trees that will mature over decades, or how much of a building’s material stock can be disassembled and reused within the same settlement.

Today we have standard metrics like Floor Area Ratio (FAR) that are aligned with present development models and profit-driven logics requiring maximum saleable use of space, fundamentally constraining possibilities for the built environment. Foregrounding time horizons for change enables retooling of these ways of measuring cities, focusing not on short-term, singular profits and benefits, but rather on the future generations and our planetary resources.

Geopolitical resilience and security

Future indicators for a regenerative built environment should address the geopolitical stakes of decisions. This is especially relevant now in Europe, with regard to geopolitical dynamics within and between the US, Russia and China, in light of multipolarity and the EU Strategic Autonomy conversation. Can we refashion the socioeconomic and material dependencies in cities so that they are resilient to the crises that may face future generations, while supporting enhanced responses to geopolitical dangers? We should look to modes of resilience that address the political and economic systems that exacerbate geopolitical precarity, such as the extractive nature of global trade and the ongoing influence of multinational corporations in shaping environments across scales. Status quo propositions toward resilience often fall short of addressing geopolitical power structures.

Place-based and planetary approaches

Future policies and indicators should adopt a multiscalar view that takes into account the unique local context to which they are applied, as well as the transformative potential and influence interventions may leverage across scales (e.g. throughout the value chain). Contextual specificity is associated with direct impact in regenerative efforts, but these efforts must be connected to transformative change that fundamentally alters the properties and functions of systems.

Living systems approach

Actions should help to shift thinking towards more holistic and ecocentric worldviews, in which non-capitalistic, nature-centred systems of values are given primacy. This layer considers interventions as part of dynamic social-ecological systems rather than isolated components. It is crucial to see these social-ecological systems for their complex adaptive qualities, in which people and nature are inextricably linked.

A living systems approach supports biogenerative thinking, in which processes, systems, or designs that actively promote, support, and regenerate life — both biological and ecological — create conditions for continuous growth, renewal, and self-sustaining ecosystems.

Co-evolutionary and community-led

Interventions should structurally empower communities to act and evolve in line with their ecosystems. Structural empowerment means building systems and resources to make communities stronger and self-sufficient and allowing nature to flourish in tandem. This approach foregrounds the utility of feedback mechanisms from nature, like soil health indicators, phenological changes, and biodiversity and species presence, to support the co-evolution and improvement of social-ecological systems.

Supporting holistic value creation

A regenerative built environment should operate on the basis of a broad definition of value, spanning the economic, the ecological and the social. As the theoretical approaches discussed previously indicate, the built environment is a hybrid of natural and social processes occurring within the constraints of systems that thrive on extraction and inequality. A holistic approach that combines material, interpersonal and spatial integrators to consider what is regenerative generates cascading value across multiple scales.

“Measuring the impact of regenerative practices on living systems must therefore recognise entangled systemic value flows. Current economic approaches fail to account for this complexity.”
— Dark Matter Labs, A New Economy for Europe’s Built Environment, white paper, 2024
Conclusion

In the context of the polycrisis, we need to move beyond notions of sustainability, toward, as Bill Reed’s diagram suggests, creating healthy, counter-extractive communities and bioregions that can scale from exceptions to define new norms.

Embracing a broadened definition of regenerative practice — one which is informed by the historical and contemporary context of such practices — will evidence the potential contradictions and tensions in the current system. Deploying multimodal metrics and indicators, of the type that the principles introduced in this piece imply, will enable new thinking for net-regenerative outcomes in our cities. Without redirecting our points of orientation toward these six principles, even motivated actors will be limited by today’s system, which allows only for shifting of blame and incremental, localised improvements in the status quo. We will never reach a regenerative built environment without transformational change.

Further pieces in this series will explore in more detail the systemic shifts we envision, pathways toward regenerative practice, and possible indicators for recognising progress.

This publication is part of the project ReBuilt “Transformation Pathways Toward a Regenerative Built Environment — Übergangspfade zu einer regenerativen gebauten Umwelt” and is funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) on the basis of a resolution of the German Bundestag.

This piece represents the views of its authors, including, from Dark Matter Labs, Emma Pfeiffer, Aleksander Nowak, and Ivana Stancic, and from Bauhaus Earth, Gediminas Lesutis and Georg Hubmann.

We extend our thanks to additional collaborators within and beyond our organisations who informed this discussion.

Additional links: Built By Nature Material Cultures Ecococon LUMA Arles / Le Magasin Électrique HouseEurope! Rotor Gleis 21 Home Silk Road Kalkbreite La Borda Living for Future Habitat for Humanity Poland

What’s guiding our Regenerative Futures? was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


uquodo

How Businesses Can Detect Crypto Fraud and Protect Digital Assets

The post How Businesses Can Detect Crypto Fraud and Protect Digital Assets appeared first on uqudo.

ComplyCube

Online Safety Act 2023 vs. EU DSA: What You Need to Know

Discover how the UK Online Safety Act 2023 and the EU Digital Services Act differ on age verification, compliance, and platform accountability to protect children online. The post Online Safety Act 2023 vs. EU DSA: What You Need to Know first appeared on ComplyCube.



IDnow

Why eID will be key in Germany’s digital future – Docusign’s Kai Stuebane on trust, timing and transformation.

We spoke with Kai Stuebane, Managing Director for DACH at Docusign, to explore how secure digital identity verification is transforming digital signing amid Germany’s evolving regulatory landscape.

From navigating increasing compliance demands to delivering seamless user experiences, we discussed why eID (Electronic Identification) is becoming a strategic priority for faster, more secure, and legally compliant digital signatures – and how Docusign’s partnership with IDnow is empowering enterprises to stay ahead with secure, scalable and user-centric digital workflows.

Why now: Perfect conditions for eID to scale

In today’s rapidly evolving regulatory landscape, particularly in Germany but also across Europe, digital identity is becoming increasingly significant. From Docusign’s perspective, what factors are driving the growing importance of secure digital identity solutions in the enterprise environment?

First, regulatory compliance is a major driver. Regional laws such as eIDAS, and the impending eIDAS 2.0 in the EU, are increasing the need for digital authentication solutions across the region by introducing initiatives such as the European Digital Identity Wallet (EUDI). In Germany, the focus on digital trust services, enforced by institutions such as BaFin and regulations like the GwG, demands robust, verifiable digital identity solutions. Enterprises must meet strict requirements for customer identification and authentication when signing or executing agreements electronically.

Second, security concerns and fraud prevention are top priorities. According to a recent Docusign global survey into the identity verification landscape, 70% of organisations agree that identity fraud attempts are on the rise, as remote and hybrid work models become the norm and businesses continue digitising their operations. As a result, companies require robust authentication solutions that ensure document integrity and signer identity across borders and devices.

A third major driver is that user expectations have shifted. Both customers and employees now expect seamless, secure digital experiences, with 50% of organisations actually prioritising customer experience over fraud prevention, given its perceived importance. Organisations like Docusign enable enterprises to deliver this through a frictionless signing experience while maintaining high standards of security and trust. Grenke, for example, in addition to offering IDnow’s VideoIdent process through Docusign, decided to add the new eID capability in order to offer more convenience to its customers.

Finally, digital transformation continues to accelerate. Enterprises are modernising legacy workflows at an exponential rate, and secure digital identity is foundational to automating agreement processes end-to-end. Digital-first solutions empower businesses to operate faster, more efficiently, and with greater legal certainty – particularly in highly regulated markets like Germany.

As Germany advances its digital transformation initiatives, how do you anticipate electronic identification (eID) solutions will reshape document signing processes for both enterprises and consumers in the German market?

There is an overall shift within the identity verification and authentication landscape, where organisations are actively seeking out solutions that enable them to maintain security and compliance without impacting the user experience.

For enterprises, eID solutions will help streamline identity verification, enabling faster onboarding, contract execution, and compliance with stringent regulatory requirements such as eIDAS and Germany’s Trust Services Act. Again, take Grenke as an example: the ability to integrate German eID schemes into their existing signing workflow – especially for digital signatures – means they can ensure the highest level of legal validity while reducing manual processes and streamlining the customer experience.

For consumers, eID will offer a more seamless and familiar experience whilst maintaining security – something we pride ourselves on delivering here at Docusign. With familiar national identity methods integrated into digital transactions, users will be able to verify their identity and complete agreements with confidence and ease. This not only enhances trust but also accelerates adoption in regulated sectors like finance, insurance, and real estate.

Through our partnership with IDnow, Docusign is committed to supporting the German market by leaning into evolving regulations and integrating eID solutions into its portfolio, meeting local regulatory needs while delivering the trusted experience that users expect.

The eID advantage: Seamless UX meets compliance

How can Germany unlock and accelerate the full potential of eID?

Based on our experience, accelerating eID adoption in Germany hinges on three key factors: user experience, awareness, and interoperability. 

First, simplifying the user experience is critical. For individuals to embrace eID for digital agreement completion, the process must be intuitive, fast, and secure. Reducing friction, such as removing lengthy registration steps or complex verification methods, can significantly increase user adoption. Leveraging familiar eID methods will streamline this experience while maintaining high levels of identity assurance.

Second, education and awareness are essential. Many individuals are unaware that their national eID can be used as part of the digital agreement process. Promoting the benefits (legal validity, security, and convenience, etc.) will help build trust and drive usage across different age and user groups.

Third, ensuring broad interoperability with public and private identity schemes is key. Businesses need confidence that the eID solutions they implement will work across sectors and meet local (GwG) and regional (eIDAS) regulatory standards.

In what ways has Docusign enhanced its signing workflows by incorporating eID with other IDnow-powered verification solutions?

Docusign has a long-standing partnership with IDnow. The evolution of this partnership to now include IDnow’s eID capabilities enhances the security and user experience of the joint offering in the following ways:

Automation: Customers can make the most of an identification method that simply relies on the electronic identification (eID) function of the German national identity card.
Security: Two factors of authentication for additional security: PIN entry, and scanning of the near field communication (NFC) chip contained within German eIDs.
Familiarity and ease of use: Not only are eIDs increasingly adopted across Germany, but leveraging technology such as NFC provides an additional element of ease of use.

Real-world application: GRENKE’s eID-first transformation

For businesses that already use Docusign but haven’t yet implemented eID-based signing, what are the key benefits they might be missing out on?

Ultimately, we can distill the key benefits to: 

Increased completion rates, driven through familiarity: enable customers to use their German eID for straightforward, intuitive identity verification that supports compliance obligations.
Secure, simplified signing: built-in security enhancements (i.e. use of PIN, scanning of NFC, etc.) mean that SMS re-authentication and live video interactions are no longer required, resulting in an even faster identification process for signers.
Storage and centralisation of key identity information: continue to download or easily access required signer identity information through Docusign and IDnow, to demonstrate compliance with BaFin GwG requirements.

Can you share a real-world example of how a Docusign customer is using eID to improve efficiency and achieve measurable business outcomes?

A strong example is our long-standing collaboration with Grenke, a leading provider of leasing and financing services. For several years, Grenke has enabled customers and dealers to digitally sign contracts using Docusign eSignature, with IDnow’s VideoIdent solution supporting identity verification.

Recently, Grenke enhanced this process by integrating IDnow’s eID solution as an alternative verification method. The impact has been clear: the introduction of eID has helped Grenke accelerate contract turnaround times, reduce reliance on physical materials, and improve the overall user experience. This has translated into greater operational efficiency, enhanced customer satisfaction, and measurable progress toward the company’s digital and sustainability goals.

What’s next: Looking beyond legal requirements

As we anticipate the implementation of eIDAS 2.0 and the European Digital Identity framework in the coming months, how do you envision these regulatory advancements shaping the evolution of electronic identification and digital signature solutions across Germany and the broader European market?

These regulatory advancements will establish a unified, interoperable framework for digital identity across EU member states, enabling individuals and businesses to authenticate and complete digital agreements securely and seamlessly across borders. For Germany, this means greater alignment with a pan-European standard that facilitates trust, legal certainty, and smoother cross-border transactions.

eIDAS 2.0 introduces the concept of the European Digital Identity Wallet (EUDI), which empowers citizens to manage, store and share verified identity attributes as they wish. This will significantly enhance user control, reduce onboarding friction, and boost adoption of high-assurance digital signatures, particularly Qualified Electronic Signatures (QES). At Docusign, our stated ambition is to become a federator of identities, where all EUDI wallets are available through our platform.

For businesses, these changes will reduce complexity in managing multiple identity systems while improving compliance and scalability. 

We’re excited for what’s to come. 

Interested in more from our customer conversations? Check out: Holvi’s Chief Risk Officer, René Hofer, sat down with us to discuss fraud, compliance, and the strategies needed to stay ahead in an evolving financial landscape.

By

Nikita Rybová
Customer and Product Marketing Manager at IDnow
Connect with Nikita on LinkedIn

Sunday, 14. September 2025

Innopay

Mariane ter Veen to speak on responsible AI adoption at MyData 2025

24–26 September 2025, Helsinki, Finland (posted by Trudy Zomer, 14 September 2025, 16:36)

We’re excited to announce that Mariane ter Veen, INNOPAY’s Director Data Sharing, will speak at the MyData 2025 conference, taking place in Helsinki from 24–26 September 2025.

MyData 2025 is one of the world’s leading conferences on human-centric data sharing, bringing together innovators, policymakers, and experts from across the globe. This year’s programme highlights the growing importance of digital sustainability, with a dedicated track exploring how organizations can innovate responsibly in the age of AI.

In her session, Mariane will introduce INNOPAY’s Triple AI framework (Access, Integrity & Intelligence): a practical approach to adopting artificial intelligence effectively, responsibly, and sustainably. She’ll share insights on how organizations can:

Align digital innovation with societal values while safeguarding trust and inclusivity
Gain control over AI strategies to unlock responsible innovation at scale
Create long-term value by linking environmental, social, and economic sustainability goals

Drawing on INNOPAY’s expertise in creating trusted digital ecosystems, Mariane will explore how AI, data, and governance can work together to deliver innovation with purpose.

Event details
 

Date: 24–26 September 2025
Location: Helsinki, Finland
More information — MyData 2025 programme


Mariane ter Veen to speak at Andersen Lab conference on digital sovereignty

23 October 2025, NEMOS Suite, Frankfurt, Germany (posted by Trudy Zomer, 14 September 2025, 16:25)

On 23 October, Mariane ter Veen, Director Data Sharing at INNOPAY, will deliver a keynote at an exclusive Andersen Lab conference in the NEMOS Suite in Frankfurt.

In her session, "The next competitive edge: building a sovereign and sustainable digital future," Mariane will highlight how organisations can leverage digital sovereignty and sustainable data ecosystems to gain a competitive advantage.

Andersen Lab hosts high-level conferences for executives, innovators, and strategic decision-makers driving digital transformation. These events combine thought leadership and in-depth knowledge sharing in an exclusive, focused setting.

Date and location
23 October 2025
NEMOS Suite, Frankfurt, Germany

For more details and registration go to the event website.


Mariane ter Veen to speak at Andersen Lab conference on digital sovereignty

4 November 2025, Hotel Jakarta, Amsterdam (posted by Trudy Zomer, 14 September 2025, 16:22)

On 4 November, Mariane ter Veen, Director Data Sharing at INNOPAY, will speak at an exclusive Andersen Lab conference at Hotel Jakarta in Amsterdam.

In her keynote, "The next competitive edge: building a sovereign and sustainable digital future," Mariane will explore the strategic importance of digital sovereignty and how organisations can use it to create sustainable competitive advantage.

Andersen Lab organises exclusive, small-scale conferences for C-level executives and decision-makers in the financial and technology sectors. The events bring together thought leaders to share insights, explore visions, and shape the digital future.

Date and location

4 November 2025
Hotel Jakarta, Amsterdam, the Netherlands

For more details and registration go to the event website.

Saturday, 13. September 2025

Recognito Vision

The Future of Face ID Search in Smartphones 2025

Face ID search technology has rapidly evolved, becoming a standard feature in smartphones. In 2025, its capabilities are expected to expand even further, offering a seamless, secure, and personalized experience for users. This blog explores the future of Face ID search in smartphones, how it integrates with existing technology, and the potential benefits and challenges of this advancement.

 

What is Face ID Search?

It is a technology that uses facial recognition to unlock smartphones and enable various features such as security, payments, and app access. Unlike traditional password systems, it allows users to unlock their devices by simply looking at them, using the unique features of their faces as identification.

This technology has come a long way since its inception and continues to evolve with advancements in 3D facial recognition and other biometrics. By 2025, Face ID will likely be even more accurate, efficient, and secure.

 

How Facial ID Recognition Works

It relies on advanced facial recognition algorithms and hardware, such as depth sensors, infrared cameras, and AI-powered software. It captures the unique features of a person’s face, including the distance between their eyes, nose, mouth, and other defining characteristics.

In the case of smartphones, Face ID works by:

Scanning your face using a 3D depth sensor to create a detailed map of your features.
Comparing the scanned data to the stored template to confirm your identity.
Unlocking the device or allowing access to apps, payment systems, and more once the match is confirmed.

This process is both fast and secure, offering a more convenient method of authentication compared to traditional PIN codes or passwords.
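The scan-compare-unlock loop described above can be sketched as a simple similarity check. This is an illustrative sketch only: real Face ID systems compare proprietary 3D depth data on dedicated secure hardware, and the feature vectors, function names, and threshold below are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Compare two face-feature vectors (e.g. distances between landmarks)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_template(scan, stored_template, threshold=0.95):
    """Unlock only if the fresh scan is close enough to the enrolled template."""
    return cosine_similarity(scan, stored_template) >= threshold

# Hypothetical feature vectors: one captured at enrollment, one at unlock time.
enrolled = [0.42, 0.31, 0.77, 0.55]
fresh = [0.41, 0.33, 0.76, 0.56]
print(matches_template(fresh, enrolled))
```

The threshold trades off convenience against security: a lower value tolerates lighting or appearance changes but raises the risk of false accepts.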

 

The Future of Face ID Search in Smartphones (2025)

As we look toward the future of smartphones, Face recognition is set to play an even more central role. Here are some expected advancements:

 

1. Improved Accuracy with 3D Facial Recognition

Currently, Face ID systems rely on 2D mapping and some 3D depth sensors for better security. However, by 2025, 3D facial recognition will likely become the standard for even more accurate and precise identification. With the integration of advanced 3D facial recognition, your smartphone will be able to detect your face from multiple angles, providing enhanced security and reducing the risk of errors in recognition.

 

2. More Personalized User Experience

It will move beyond just unlocking your phone. By 2025, smartphones will likely offer a personalized user experience based on facial recognition. For instance, Face ID search could automatically:

Adjust screen brightness or display settings based on your face.
Personalize app suggestions or content based on your past preferences.
Unlock specific apps and features automatically when the phone detects that you are looking at it.

This level of personalization can enhance user engagement and make smartphone interactions more intuitive.

 

3. Facial Recognition for Payments and Secure Transactions

Already, smartphones with Face ID capabilities allow users to make payments through mobile wallets like Apple Pay or Google Pay. By 2025, face unlock for payments will become even more common and secure. We may see Face ID search systems that can perform secure transactions, even without the need for an additional password or PIN. This will make financial transactions quicker and more secure for users.

 

4. Integration with Augmented Reality (AR)

Augmented reality is quickly gaining popularity, and Face ID search will likely integrate seamlessly with AR experiences. Imagine using your smartphone’s facial recognition to control AR experiences: unlocking virtual environments, personalizing characters, and interacting with digital content. 3D facial recognition will provide accurate data to ensure a more immersive experience, enabling personalized AR interactions based on your facial features.

 

5. Enhanced Privacy and Security Features

With the growing concern over digital privacy, the future of Face ID search will focus on enhancing security measures. Face unlock technology will be enhanced to ensure that it is more difficult for people to bypass the system. Expect additional layers of security such as liveness detection, where the phone can determine if it’s looking at a real face (not a photo or video), or multi-factor authentication (combining face recognition with voice or fingerprint authentication).
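The layered checks described here (a template match combined with liveness detection and an optional second factor) can be sketched as a simple unlock policy. The field names and signals below are illustrative assumptions, not any vendor's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    face_match: bool       # template comparison passed
    depth_detected: bool   # 3D sensor saw a real contour, not a flat photo
    blink_detected: bool   # micro-movement observed during capture

def is_live(scan: ScanResult) -> bool:
    """A photo or replayed video should fail at least one liveness signal."""
    return scan.depth_detected and scan.blink_detected

def authorize_unlock(scan: ScanResult, second_factor_ok: bool = True) -> bool:
    """Require the face match, liveness, and any configured second factor."""
    return scan.face_match and is_live(scan) and second_factor_ok

# A printed photo matches the template but fails the liveness signals.
photo_attack = ScanResult(face_match=True, depth_detected=False, blink_detected=False)
real_user = ScanResult(face_match=True, depth_detected=True, blink_detected=True)
print(authorize_unlock(photo_attack), authorize_unlock(real_user))
```

The key design point is that each signal is necessary but not sufficient: spoofing the system requires defeating every layer at once.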

 

Its Impact on the Smartphone Industry

The introduction of Face ID is already changing the way we interact with our smartphones. By 2025, it will likely have a profound impact on various industries:

 

1. Mobile Payments and E-Commerce

As smartphones adopt Face ID search technology, mobile payment and e-commerce platforms will see an uptick in secure transactions. Users will no longer need to fumble with passwords, credit cards, or PINs. Just a glance at their phone will be enough to authorize payments, making online shopping and in-store purchases more efficient.

 

2. Smartphone Security

Smartphone security will continue to evolve. With improved facial recognition technology, phone manufacturers will likely be able to deliver a much higher level of security. This could reduce the likelihood of data theft and unauthorized access, making smartphones much more secure.

 

3. Privacy Concerns

As Face ID search becomes more widespread, privacy concerns are likely to rise. Many people worry about the potential for their facial data to be stored and misused. The smartphone industry will need to address these concerns by implementing stronger encryption and giving users control over their data.

 

Challenges and Concerns in the Future of Face ID Search

While Face ID search has many advantages, it does come with its challenges:

 

1. Privacy and Security Risks

Storing and using facial data raises privacy concerns. If this data is hacked or stolen, it could lead to identity theft. To combat these risks, manufacturers will need to adopt robust encryption and make sure that personal data is stored securely.

 

2. Facial Recognition Accuracy

While facial recognition technology has improved, it’s still not flawless. Factors such as lighting, aging, or facial hair changes can affect recognition. As we move toward 2025, more accurate 3D facial recognition systems will likely emerge to minimize these issues.

 

3. Increased Dependency on Facial Recognition

As more tasks are tied to Face ID search, users may become overly reliant on facial recognition for security. This could present issues if the system fails or the user’s facial features change significantly due to injury or surgery.

 

Conclusion

The future of Face ID search in smartphones looks promising. By 2025, it will be more accurate, secure, and integrated with other technologies, enhancing the user experience and providing improved functionality. Whether for security, payments, or personalization, Face ID search will be a key player in how we interact with our smartphones.

If you’re a business or developer interested in incorporating facial recognition technology into your app, tools like Recognito’s Face ID SDK can help. Tested under the NIST FRVT 1:1 case study, it delivers reliable performance while prioritizing both security and privacy. Recognito offers robust, easy-to-integrate solutions for adding face unlock features into your products. To learn more and explore the implementation, you can also visit Recognito’s GitHub repository.

The future is looking brighter with Face ID search, but it’s essential to address privacy, accuracy, and security concerns as the technology continues to evolve.

 

Frequently Asked Questions

1) What is Face ID search, and how does it work?

Face ID search uses facial recognition technology to unlock your smartphone by scanning unique facial features and matching them to a stored template.
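Conceptually, the matching step can be sketched as comparing a live face embedding against the enrolled template and unlocking only when similarity clears a tuned threshold. The vector values, threshold, and function names below are illustrative assumptions, not any vendor's actual implementation; real systems use embeddings with hundreds of dimensions produced by a neural network.

```python
import math

SIMILARITY_THRESHOLD = 0.85  # illustrative: tuned per device to balance false accepts and rejects

def cosine_similarity(a, b):
    """Compare two face-embedding vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_template(probe_embedding, stored_template):
    """Unlock only if the live scan is close enough to the enrolled template."""
    return cosine_similarity(probe_embedding, stored_template) >= SIMILARITY_THRESHOLD

# Hypothetical 3-dimensional embeddings for illustration
enrolled = [0.60, 0.64, 0.48]
live_scan = [0.59, 0.65, 0.47]
print(matches_template(live_scan, enrolled))
```

The threshold is why minor day-to-day changes still match (the FAQ below) while a different face, falling well under the threshold, is rejected.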

 

2) Is Face ID search more secure than traditional passwords?

Yes, Face ID search is more secure as it uses biometric data, which is harder to guess or steal compared to traditional passwords.

 

3) Can Face ID search be fooled by photos or videos?

Modern Face ID systems use liveness detection, making it difficult for photos or videos to fool the system.

 

4) What happens if Face ID search doesn’t recognize my face?

If Face ID fails, you can unlock your device with an alternative method like a password, PIN, or fingerprint.

 

5) Will Face ID search work if my face changes significantly (e.g., due to aging, makeup, or injury)?

Face ID can adapt to minor changes but might struggle with significant changes like severe injuries or drastic aging.

Tuesday, 26. August 2025

Radiant Logic

Rethinking Enterprise IAM Deployments with Radiant Logic’s Cloud-Native SaaS Innovation

Learn how Radiant Logic’s cloud-native SaaS redefines IAM operations with agility, resilience, and real-time observability, empowering enterprises to thrive in the cloud era. The post Rethinking Enterprise IAM Deployments with Radiant Logic’s Cloud-Native SaaS Innovation appeared first on Radiant Logic.

iComply Investor Services Inc.

Nonprofit Due Diligence: How to Manage Global Compliance Without Mission Drift

Nonprofits face growing AML obligations. This guide explains how to verify donors, partners, and grantees while maintaining trust and operational focus using iComply.

Nonprofits are under growing pressure to vet grantees, partners, and donors to meet global AML standards. This article outlines key KYC and KYB expectations in the U.S., UK, EU, Canada, and Australia – and shows how iComply enables automated risk screening without disrupting trust or operations.

Nonprofits and non-governmental organizations (NGOs) are mission-driven – but increasingly, they’re also AML-obligated. Regulators, donors, and banking partners now expect them to verify counterparties, conduct due diligence on sub-recipients, and track risk exposure across jurisdictions.

Global AML rules are expanding—and nonprofits must ensure their programs and funds are not diverted for criminal or terrorist use.

Emerging AML Obligations for Nonprofits

United States
Regulators: FinCEN, IRS, Department of State
Requirements: Due diligence on foreign grantees, donor vetting, sanctions screening, and enhanced scrutiny of transactions involving high-risk countries

United Kingdom
Regulators: Charity Commission, HMRC
Requirements: Financial controls, PEP and sanctions screening, and governance reviews for organizations handling overseas grants

European Union
Regulators: National charity bodies, AML authorities
Requirements: UBO transparency, transaction monitoring, GDPR-compliant due diligence, and STR obligations

Canada
Regulators: CRA, FINTRAC
Requirements: Anti-terrorist financing controls, donor due diligence, reporting obligations, and foreign activity reviews

Australia
Regulators: ACNC, AUSTRAC
Requirements: AML/CTF compliance for overseas programs, sanctions compliance, and source-of-funds transparency

Challenges Nonprofits Face

1. Resource Constraints
Small compliance teams, tight budgets, and limited infrastructure

2. Complex Grant Networks
Sub-grantees, international affiliates, and in-country partners with limited transparency

3. Donor Sensitivity
Trust and confidentiality must be preserved during verification

4. High-Risk Regions
Operations often focus on areas with elevated AML or sanctions risk

iComply: Mission-Aligned AML Tools for Nonprofits

iComply offers a lightweight, privacy-respecting AML platform that supports risk screening and verification across the nonprofit ecosystem.
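To make the screening step concrete, a minimal sketch of fuzzy name matching against a consolidated sanctions list is shown below. The list entries, threshold value, and function names are invented for illustration and do not reflect iComply's actual engine; production screening uses official OFAC, EU, and UN data feeds and far more sophisticated matching.

```python
from difflib import SequenceMatcher

MATCH_THRESHOLD = 0.85  # illustrative: stricter thresholds reduce false positives

# Hypothetical excerpt of a consolidated list (real data comes from OFAC, EU, UN feeds)
SANCTIONS_LIST = ["Acme Shell Trading Ltd", "Global Front Logistics LLC"]

def normalize(name):
    """Lowercase and collapse whitespace so cosmetic differences don't block matches."""
    return " ".join(name.lower().split())

def screen_name(candidate):
    """Return (listed_name, score) pairs whose similarity meets the threshold."""
    hits = []
    for listed in SANCTIONS_LIST:
        score = SequenceMatcher(None, normalize(candidate), normalize(listed)).ratio()
        if score >= MATCH_THRESHOLD:
            hits.append((listed, round(score, 2)))
    return hits

print(screen_name("ACME Shell Trading Limited"))  # near-match should be flagged
print(screen_name("Sunrise Community Fund"))      # clean name returns no hits
```

The configurable threshold mirrors the "configurable thresholds and refresh cycles" the platform describes: raising it trades recall for fewer false positives, and periodic refreshes simply re-run screening against updated list data.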

1. KYC + KYB for Partners and Grantees
Verify local nonprofits, vendors, and individuals with document and registry checks
Onboard stakeholders using multilingual, mobile-ready portals
Collect declarations, signatures, and supporting documentation securely

2. Sanctions and Risk Screening
Screen partners and donors against OFAC, EU, UN, and national sanctions lists
Apply configurable thresholds and refresh cycles
Automate PEP/adverse media checks without storing unnecessary PII

3. Privacy-First Infrastructure
Data processed on-device before transmission
Full compliance with PIPEDA, GDPR, and local privacy laws
Configurable consent workflows and retention schedules

4. Case Management and Reporting
Assign compliance reviews and track escalations
Export audit logs for internal governance or third-party funders
Maintain a defensible trail of due diligence

Case Insight: Charitable Gifting Platform

A Canadian-registered charitable gifting platform operating across North America adopted iComply to manage grantee and partner due diligence. Results:

Screened 60+ partners in under 4 weeks
Flagged one entity with prior sanction exposure
Increased trust with a major foundation through automated compliance

The Bottom Line

Doing good doesn’t exempt you from doing due diligence. Nonprofits that integrate smart, mission-aligned compliance tools can:

Meet funder and regulatory expectations
Maintain operational focus
Build donor and partner trust

Talk to iComply to learn how we help nonprofits automate global AML screening – without sacrificing impact or transparency.


Aergo

BC 101 #6: Why Exchanges Are Building Their Own Blockchains

Crypto exchanges are no longer content with just being marketplaces. Increasingly, they are launching their own networks. On the surface, this appears to be a bid to reduce costs or capture transaction fees. But the real agenda is bigger: to become the gateway.

Crypto exchanges are no longer content with just being marketplaces. Increasingly, they are launching their own networks. On the surface, this appears to be a bid to reduce costs or capture transaction fees. But the real agenda is bigger: to become the gateway.

The Strategic Position of Exchanges

Exchanges already sit at the most valuable chokepoints in crypto:

They own the user funnels.
They aggregate liquidity.
They provide fiat on/off ramps.
They hold the keys to KYC and AML compliance, giving them regulatory leverage and privileged access to the intersection of traditional finance and crypto.

By creating their own blockchains, exchanges extend this power. They no longer just host trading. They design the rails on which trading, applications, and interactions take place. In doing so, they secure the single sign-on (SSO) layer for Web3 and dApps.

A Familiar Playbook: Enterprises and Stablecoins

This strategy mirrors what is happening in traditional finance. Top enterprises and financial institutions are increasingly launching their own stablecoins, not because they want to compete with Bitcoin or Ethereum directly, but because they see stablecoins as the gateway to the digital financial system. Whoever owns the stablecoin rails owns the access point to payments, settlements, and capital flows.

In both cases — exchanges with blockchains and enterprises with stablecoins — the logic is the same: secure the gateway, and you secure the market.

Lessons from the Internet

We’ve seen this dynamic before. In the early days of the web, Facebook dominated single sign-on (SSO) by making “Login with Facebook” the default across apps and websites. Today, that role has largely shifted to Google, which owns identity and access at internet scale.

Exchanges are now attempting to replicate this playbook for Web3. By pushing users and developers onto their own chains, they position themselves as the default login layer of the crypto economy. Meanwhile, enterprises aim to achieve the same goal in finance through stablecoins, thereby creating a default settlement layer for the digital economy.

The Bigger Picture

What looks like fragmented innovation is in fact the same strategic move: to own the gateway layer of the future.

Exchanges are building a crypto SSO for decentralized apps.
Enterprises are building a financial SSO for digital payments.

Both are racing to become the indispensable entry point to their respective domains.

And yet, there is a third frontier emerging: the gateway for AI-native infrastructure. That story belongs to HPP, and it’s one we’ll explore in the next article.

BC 101 #6: Why Exchanges Are Building Their Own Blockchains was originally published in Aergo (HPP) on Medium, where people are continuing the conversation by highlighting and responding to this story.


FastID

Fastly + Scalepost: Extending the Fastly platform to manage AI Crawlers

See when and how AI chatbots use your content. With Fastly and ScalePost, publishers finally gain visibility into how their work shows up in AI-generated answers.