Last Update 12:56 PM May 04, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Friday, 03. May 2024

FindBiometrics

Identity News Digest – May 3, 2024

Welcome to FindBiometrics’ digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Proposed Bill Would Extend Colorado’s Facial Recognition […]

CIPL Report Calls for Risk-Based Approach to Biometrics Regulation

The Centre for Information Policy Leadership (CIPL) has released a report addressing the complex landscape of biometric technology regulation. The report acknowledges the expanding use cases for biometrics across industries, […]

Suspect Charged After ClubsNSW Data Breach

A 46-year-old man has been arrested and charged with blackmail in connection with the extensive ClubsNSW data breach. The breach exposed the personal information of over a million individuals who […]

Microsoft Strengthens Ban on Police Use of Azure AI for Facial Recognition

Microsoft has strengthened its restrictions on the use of its Azure OpenAI Service for facial recognition purposes by law enforcement, with the updated terms of service explicitly banning US police […]

IBM Blockchain

What you need to know about the CCPA rules on AI and automated decision-making technology

The California Privacy Protection Agency (CPPA) released a set of draft regulations on the use of AI and automated decision-making technology. Here's what you need to know. The post What you need to know about the CCPA rules on AI and automated decision-making technology appeared first on IBM Blog.

In November 2023, the California Privacy Protection Agency (CPPA) released a set of draft regulations on the use of artificial intelligence (AI) and automated decision-making technology (ADMT). 

The proposed rules are still in development, but organizations may want to pay close attention to their evolution. Because the state is home to many of the world’s biggest technology companies, any AI regulations that California adopts could have an impact far beyond its borders. 

Furthermore, a California appeals court recently ruled that the CPPA can immediately enforce rules as soon as they are finalized. By following how the ADMT rules progress, organizations can better position themselves to comply as soon as the regulations take effect.

The CPPA is still accepting public comments and reviewing the rules, so the regulations are liable to change before they are officially adopted. This post is based on the most current draft as of 9 April 2024.

Why is California developing new rules for ADMT and AI?

The California Consumer Privacy Act (CCPA), California’s landmark data privacy law, did not originally address the use of ADMT directly. That changed with the passage of the California Privacy Rights Act (CPRA) in 2020, which amended the CCPA in several important ways.

The CPRA created the CPPA, a regulatory agency that implements and enforces CCPA rules. The CPRA also granted California consumers new rights to access information about, and opt out of, automated decisions. The CPPA is working on ADMT rules to start enforcing those rights.

Who must comply with California’s ADMT and AI rules?

As with the rest of the CCPA, the draft rules would apply to for-profit organizations that do business in California and meet at least one of the following criteria:

- The business has a total annual revenue of more than USD 25 million.
- The business buys, sells, or shares the personal data of 100,000+ California residents.
- The business makes at least half of its total annual revenue from selling the data of California residents.

Furthermore, the proposed regulations would only apply to certain uses of AI and ADMT: making significant decisions, extensively profiling consumers, and training ADMT tools. 

How does the CPPA define ADMT?

The current draft (PDF, 827 KB) defines automated decision-making technology as any software or program that processes personal data through machine learning, AI, or other data-processing means and uses computation to execute a decision, replace human decision-making, or substantially facilitate human decision-making.

The draft rules explicitly name some tools that do not count as ADMT, including spam filters, spreadsheets, and firewalls. However, if an organization attempts to use these exempt tools to make automated decisions in a way that circumvents regulations, the rules will apply to that use.

Covered uses of ADMT

Making significant decisions

The draft rules would apply to any use of ADMT to make decisions that have significant effects on consumers. Generally speaking, a significant decision is one that affects a person’s rights or access to critical goods, services, and opportunities.

For example, the draft rules would cover automated decisions that impact a person’s ability to get a job, go to school, receive healthcare, or obtain a loan.

Extensive profiling

Profiling is the act of automatically processing someone’s personal information to evaluate, analyze, or predict their traits and characteristics, such as job performance, product interests, or behavior. 

“Extensive profiling” refers to particular kinds of profiling:

- Systematically profiling consumers in the context of work or school, such as by using a keystroke logger to track employee performance.
- Systematically profiling consumers in publicly accessible places, such as using facial recognition to analyze shoppers’ emotions in a store.
- Profiling consumers for behavioral advertising. Behavioral advertising is the act of using someone’s personal data to display targeted ads to them.

Training ADMT

The draft rules would apply to businesses’ use of consumer personal data to train certain ADMT tools. Specifically, the rules would cover training an ADMT that can be used to make significant decisions, identify people, generate deepfakes, or perform physical or biological identification and profiling.

Who would be protected under the AI and ADMT rules?

As a California law, the CCPA’s consumer protections extend only to consumers who reside in California. The same holds true for the protections that the draft ADMT rules grant.

That said, these rules define “consumer” more broadly than many other data privacy regulations. In addition to people who interact with a business, the rules cover employees, students, independent contractors, and school and job applicants.

What are the CCPA rules on AI and automated decision-making technology?

The draft CCPA AI regulations have three key requirements. Organizations that use covered ADMT must issue pre-use notices to consumers, offer ways to opt out of ADMT, and explain how the business’s use of ADMT affects the consumer.

While the CPPA has revised the regulations once and is likely to do so again before the rules are formally adopted, these core requirements appear in each draft so far. The fact that these requirements persist suggests they will remain in the final rules, even if the details of their implementation change.

Learn how IBM Security® Guardium® Insights helps organizations meet their cybersecurity and data compliance requirements.

Pre-use notices

Before using ADMT for one of the covered purposes, organizations must clearly and conspicuously serve consumers a pre-use notice. The notice must detail in plain language how the company uses ADMT and explain consumers’ rights to access more information about ADMT and opt out of the process.

The company cannot fall back on generic language to describe how it uses ADMT, like “We use automated tools to improve our services.” Instead, the organization must describe the specific use. For example: “We use automated tools to assess your preferences and deliver targeted ads.”

The notice must direct consumers to additional information about how the ADMT works, including the tool’s logic and how the business uses its outputs. This information does not have to be in the body of the notice. The organization can give consumers a hyperlink or other way to access it.

If the business allows consumers to appeal automated decisions, the pre-use notice must explain the appeals process.

Opt-out rights

Consumers have a right to opt out of most covered uses of ADMT. Businesses must facilitate this right by giving consumers at least two ways to submit opt-out requests. 

At least one of the opt-out methods must use the same channel through which the business primarily interacts with consumers. For example, a digital retailer can have a web form for users to complete.

Opt-out methods must be simple and cannot have extraneous steps, like requiring users to create accounts.

Upon receiving an opt-out request, a business must stop processing a consumer’s personal information within 15 days. The business can no longer use any of the consumer’s data that it previously processed. The business must also notify any service providers or third parties with whom it shared the user’s data.
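
As a rough illustration of tracking that 15-day clock and the downstream notifications, here is a minimal, hypothetical sketch. The draft rules do not prescribe any implementation; every name here is illustrative:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical sketch of the opt-out obligations described above: stop
# processing within 15 days of the request and notify downstream recipients.
OPT_OUT_WINDOW_DAYS = 15

@dataclass
class OptOutRequest:
    consumer_id: str
    received: date
    third_parties: list[str] = field(default_factory=list)

    @property
    def processing_deadline(self) -> date:
        """Last day to stop processing this consumer's personal information."""
        return self.received + timedelta(days=OPT_OUT_WINDOW_DAYS)

req = OptOutRequest("c-123", date(2024, 5, 3), ["ad-network", "analytics-vendor"])
print(req.processing_deadline)  # 2024-05-18: processing must stop by this date
print(req.third_parties)        # recipients that must be notified of the opt-out
```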

Exemptions

Organizations do not need to let consumers opt out of ADMT used for safety, security, and fraud prevention. The draft rules specifically mention using ADMT to detect and respond to data security incidents, prevent and prosecute fraudulent and illegal acts, and ensure the physical safety of a natural person.

Under the human appeal exception, an organization need not enable opt-outs if it allows people to appeal automated decisions to a qualified human reviewer with the authority to overturn those decisions. 

Organizations can also forgo opt-outs for certain narrow uses of ADMT in work and school contexts. These uses include:

- Evaluating a person’s performance to make admission, acceptance, and hiring decisions.
- Allocating tasks and determining compensation at work.
- Profiling used solely to assess a person’s performance as a student or employee.

However, these work and school uses are only exempt from opt-outs if they meet the following criteria: 

- The ADMT in question must be necessary to achieve the business’s specific purpose and used only for that purpose.
- The business must formally evaluate the ADMT to ensure that it is accurate and does not discriminate.
- The business must put safeguards in place to ensure that the ADMT remains accurate and unbiased.

None of these exemptions apply to behavioral advertising or training ADMT. Consumers can always opt out of these uses.

Learn how IBM data security solutions protect data across hybrid clouds and help simplify compliance requirements.

The right to access information about ADMT use 

Consumers have a right to access information about how a business uses ADMT on them. Organizations must give consumers an easy way to request this information. 

When responding to access requests, organizations must provide details like the reason for using ADMT, the output of the ADMT regarding the consumer, and a description of how the business used the output to make a decision.

Access request responses should also include information on how the consumer can exercise their CCPA rights, such as filing complaints or requesting the deletion of their data.

Notification of adverse significant decisions

If a business uses ADMT to make a significant decision that negatively affects a consumer—for example, by leading to job termination—the business must send a special notice to the consumer about their access rights regarding this decision.

The notice must include:

- An explanation that the business used ADMT to make an adverse decision.
- Notification that the business cannot retaliate against the consumer for exercising their CCPA rights.
- A description of how the consumer can access additional information about how ADMT was used.
- Information on how to appeal the decision, if applicable.

Risk assessments for AI and ADMT

The CPPA is developing draft regulations on risk assessments alongside the proposed rules on AI and ADMT. While these are technically two separate sets of rules, the risk assessment regulations would affect how organizations use AI and ADMT.

The risk assessment rules would require organizations to conduct assessments before they use ADMT to make significant decisions or carry out extensive profiling. Organizations would also need to conduct risk assessments before they use personal information to train certain ADMT or AI models.

Risk assessments must identify the risks that the ADMT poses to consumers, the potential benefits to the organization or other stakeholders, and safeguards to mitigate or remove the risk. Organizations must refrain from using AI and ADMT where the risk outweighs the benefits. 

How do the CCPA regulations relate to other AI laws?

California’s draft rules on ADMT are far from the first attempt at regulating the use of AI and automated decisions.

The European Union’s AI Act imposes strict requirements on the development and use of AI in Europe. 

In the US, the Colorado Privacy Act and the Virginia Consumer Data Protection Act both give consumers the right to opt out of having their personal information processed to make significant decisions.

At the national level, President Biden signed an executive order in October 2023 directing federal agencies and departments to create standards for developing, using, and overseeing AI in their respective jurisdictions. 

But California’s proposed ADMT regulations attract more attention than other state laws because they can potentially affect how companies behave beyond the state’s borders.

Much of the global technology industry is headquartered in California, so many of the organizations that make the most advanced automated decision-making tools will have to comply with these rules. The consumer protections extend only to California residents, but organizations might give consumers outside of California the same options for simplicity’s sake.

The original CCPA is often considered the US version of the General Data Protection Regulation (GDPR) because it raised the bar for data privacy practices nationwide. These new AI and ADMT rules might produce similar results.

When do the CCPA AI and ADMT regulations take effect?

The rules are not finalized yet, so it’s impossible to say with certainty. That said, many observers estimate that the rules won’t take effect until mid-2025 at the earliest.

The CPPA is expected to hold another board meeting in July 2024 to discuss the rules further. Many believe that the CPPA Board is likely to begin the formal rulemaking process at this meeting. If so, the agency would have a year to finalize the rules, hence the estimated effective date of mid-2025.

How will the rules be enforced?

As with other parts of the CCPA, the CPPA will be empowered to investigate violations and fine organizations. The California attorney general can also levy civil penalties for noncompliance.

Organizations can be fined USD 2,500 for unintentional violations and USD 7,500 for intentional ones. These amounts are per violation, and each affected consumer counts as one violation. Penalties can quickly escalate when violations involve multiple consumers, as they often do.
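
A minimal sketch of that arithmetic, using the fine amounts above (illustrative only, not legal guidance):

```python
# Penalty arithmetic from the CCPA figures above: each affected consumer
# counts as one violation.
UNINTENTIONAL_FINE_USD = 2_500
INTENTIONAL_FINE_USD = 7_500

def total_penalty_usd(affected_consumers: int, intentional: bool) -> int:
    """Total exposure when each affected consumer is a separate violation."""
    per_violation = INTENTIONAL_FINE_USD if intentional else UNINTENTIONAL_FINE_USD
    return affected_consumers * per_violation

# An unintentional violation affecting 10,000 consumers:
print(total_penalty_usd(10_000, intentional=False))  # 25000000 (USD 25 million)
```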

What is the status of the CCPA AI and ADMT regulations?

The draft rules are still in flux. The CPPA continues to solicit public comments and hold board discussions, and the rules are likely to change further before they are adopted.

The CPPA has already made significant revisions to the rules based on prior feedback. For example, following the December 2023 board meeting, the agency added new exemptions from the right to opt out and placed restrictions on physical and biological profiling.

The agency also adjusted the definition of ADMT to limit the number of tools the rules would apply to. While the original draft included any technology that facilitated human decision-making, the most current draft applies only to ADMT that substantially facilitates human decision-making. 

Many industry groups feel the updated definition better reflects the practical realities of ADMT use, while privacy advocates worry it creates exploitable loopholes.

Even the CPPA Board itself is split on how the final rules should look. At a March 2024 meeting, two board members expressed concerns that the current draft exceeds the board’s authority.  

Given how the rules have evolved so far, the core requirements for pre-use notices, opt-out rights, and access rights have a strong chance to remain intact. However, organizations may have lingering questions like:

- What kinds of AI and automated decision-making technology will the final rules cover?
- How will consumer protections be implemented on a practical level?
- What kind of exemptions, if any, will organizations be granted?

Whatever the outcome, these rules will have significant implications for how AI and automation are regulated nationwide—and how consumers are protected in the wake of this booming technology.

Explore data compliance solutions

Disclaimer: The client is responsible for ensuring compliance with all applicable laws and regulations. IBM does not provide legal advice nor represent or warrant that its services or products will ensure that the client is compliant with any law or regulation.

The post What you need to know about the CCPA rules on AI and automated decision-making technology appeared first on IBM Blog.


FindBiometrics

Two More Businesses Face Time Clock BIPA Lawsuits

Two additional companies are facing BIPA lawsuits over biometric time and attendance tracking technology. One of them, Anviz Global Inc., is a supplier of workforce management solutions, including a face-based […]

IBM Blockchain

Commerce strategy: Ecommerce is dead, long live ecommerce

Commerce strategy—what we might formerly have referred to as ecommerce strategy—is so much more than it once was. Discover what's changed. The post Commerce strategy: Ecommerce is dead, long live ecommerce appeared first on IBM Blog.

In today’s dynamic and uncertain landscape, commerce strategy—what we might formerly have referred to as ecommerce strategy—is so much more than it once was. Commerce is a complex journey in which the moment of truth—conversion—takes place. This reality means that every brand in every industry with every business model needs to optimize the commerce experience, and thus the customer experience, to drive conversion rates and revenues. Done correctly, this process also contains critical activities that can significantly reduce costs and satisfy a business’ key metrics for success.

The first step is to build a strategy that’s focused on commerce, a channel-less experience, rather than ecommerce, a rigid, outdated notion that doesn’t meet the needs of the modern consumer.

“It’s about experiential buying in a seamless omnichannel journey, which is so rich that it essentially becomes channel-less.”
Rich Berkman, VP and Senior Partner for Digital Commerce at IBM iX

A successful commerce strategy then is a holistic endeavor across an organization, focused on personalization and fostering customer loyalty even in deeply uncertain times.

Ecommerce is dead

The idea of an “ecommerce business” is an anachronism, a holdover from when breaking into the digital realm involved replicating product descriptions on a web page and calling it an ecommerce store. In the early days of online shopping, ecommerce brands were categorized as online stores or “multichannel” businesses operating both ecommerce sites and brick-and-mortar locations. This era was defined by massive online marketplaces like Amazon, ecommerce platforms such as eBay, and consumer-to-consumer transactions conducted on social media platforms like Facebook marketplace.

Early on, ecommerce marketing strategies touted the novelty of tax-free, online-only retailing that incentivized consumers to select an online channel both for convenience and better pricing options. Those marketing campaigns focused on search engine optimization (SEO) and similar search-related tactics to drive attention and sales. Personalization on an ecommerce website might have involved a retailer remembering your previous orders or your name.

In the world dictated by these kinds of ecommerce sales and touch points, an effective ecommerce strategy might prioritize releasing new products on early iterations of social media, or retargeting consumers across marketing channels with an email marketing campaign. Later in the journey, tactics like influencer marketing and social media marketing encouraged channel-specific messaging that still separated a retailer’s digital operations from its in-person activities.

But the paradigm has shifted. Fatigued by endless options and plagued by the perception of bad actors, today consumers expect more. The modern shopper expects a unified and seamless buying journey with multiple channels involved. The idea of discrete sales channels has collapsed into an imperative to create fluid, dynamic experiences that meet customers exactly where they are.

That means every business, no matter the industry or organizational plan, needs to prioritize the three pillars of an excellent commerce experience strategy: Trust, relevance and convenience. Experience is the North Star of conversion. By cultivating those pillars, any retailer, from a small business to a multinational corporation, can elevate its experience to increase its relevance and remain competitive.

Building trust in an uncertain world

Research shows that today’s customer is anxious and uncertain. Most consumers believe that the world is changing too quickly; over half think business leaders are lying to them, purposely trying to mislead people by grossly exaggerating or providing information they know is false. And in 2024, brand awareness means little without trust. The integrity of a business’ reputation remains among the top criteria for consumers when they consider where their dollars go.

Customer acquisition and customer retention depend on consistently excellent experiences that reward consumer trust. Making trust a priority requires building relationships through transparent commerce experiences. It means implementing systems that treat potential customers as valued partners rather than a series of data points and target markets to exploit. The necessity of trust in a relationship-focused commerce strategy is perhaps most obvious in terms of how a business treats the data it acquires from its customer base.

But trust is earned—or lost—at every interaction in the customer journey.

Prepurchase
- Can the customer trust a business to maintain competitive pricing, and generate digital marketing campaigns that are more useful than invasive?
- Can the customer trust a business to make it easy to control their own data?
- Is the user experience intuitive and cohesive regardless of whether a customer is shopping at an online sale or in a store?

Purchase
- When new customers view their shopping carts and prepare to complete checkout, does the business automatically sign them up for services they do not want?
- Does the payment process frustrate a customer to the point of cart abandonment?

Post purchase
- If a package is set to deliver during a specific window, can the customer trust it arrives during that time?
- Does the brand make it convenient to do business with them post purchase?

By addressing the issue of consumer trust at every stage, an organization can eliminate friction and consumer pain points to build long-lasting relationships.

Navigating ethical personalization

Personalization in commerce is no longer optional. Just as search engine optimization is essential common practice for getting a business’s webpages in front of people online, personalization is essential for meeting consumer expectations. Today’s consumer expects a highly customized channel-less experience that anticipates their needs.

But those same consumers are also wary of the potential costs of personalization. According to a recent article in Forbes, data security is a “nonnegotiable” factor for boomers, 90% of whom said that personal data protection is their first consideration when choosing a brand. And for gen X, data protection is of the utmost priority; 87% say it’s the primary factor influencing their purchasing behavior. This puts brands in a delicate position.

“You cannot create an experience that resonates with consumers—one that is trusted, relevant and convenient—without understanding the emotions and motivations of those populations being served.”
Shantha Farris, Global Digital Commerce Strategy and Offering Leader at IBM iX

The vast amounts of data businesses collect, combined with external data sources, can be used to present cross-selling and upselling opportunities that genuinely appeal to customers. Using automation, businesses can create buyer personas at a rapid pace and use them to improve the customer journey and craft engaging content across channels. But in a channel-less world, data should be used to inform more than FAQ pages, content marketing tactics and email campaigns.

To create precise and positive experiences, brands should synthesize their proprietary customer data—like purchase history and preferences—with third-party sources such as data gleaned from social media scraping, user-generated content and demographic market research. By using these sources, businesses can obtain both real-time insights into target customers’ sentiment and broader macro-level perspectives on their industry at large. Using advanced analytics and machine learning algorithms, such data streams can be transformed into deep insights that predict a target audience’s needs.

To ensure the success of this approach, it is crucial to maintain a strong focus on data quality, security and ethical considerations. Brands must ensure that they are collecting and using data in a way that is transparent, compliant with regulations and respectful of customer privacy. By doing so, they can build trust with their customers and create a positive, personalized experience that drives long-term growth and loyalty across the commerce journey.

Creating delightful, convenient experiences

As mentioned earlier, experience is the North Star of conversion, and building convenient experiences with consistent functions remains a key driver for a business’ sustainable growth. In a channel-less world, successful brands deliver holistic customer journeys that meet customers exactly where they are, whether the touch point is a product page, an SMS message, a social platform like TikTok, or an in-person visit to a store.

The future of commerce, augmented by automation and AI, will increasingly provide packaged customer experiences. This might include personalized subscriptions or a series of products, like travel arrangements, purchased together by using natural language and taking a specific customer’s preferences into account.

“Once you have the foundation of a trusted, relevant and convenient experience, building on that foundation with the power of generative AI will allow businesses to deepen their customer relationships, ultimately driving more profitable brand growth.”
Rich Berkman, VP and Senior Partner for Digital Commerce at IBM iX

The moment of conversion can take many forms. With careful planning, the modern retailer has the potential to create a powerful buying experience—one that wins customer loyalty and cultivates meaningful brand relationships. And new technologies like generative AI, when used correctly, provide an opportunity for sustainable and strategic growth.

Explore digital commerce consulting services

Sign up for customer experience topic updates

The post Commerce strategy: Ecommerce is dead, long live ecommerce appeared first on IBM Blog.


FindBiometrics

Liberia Embarks On National Biometric ID Project

The Government of Liberia, via the National Identification Registry (NIR), is preparing to initiate a comprehensive mass-enrollment program for both citizens and foreign residents into its national biometric identification system. […]

auth0

Using a Refresh Token in an iOS Swift App

A step-by-step guide to leveraging OAuth 2.0 Refresh Tokens in an iOS app built with Swift and integrated with Auth0.

FindBiometrics

Proposed Bill Would Extend Colorado’s Facial Recognition Task Force

House Bill 1468, which aims to broaden the responsibilities and membership of the Colorado General Assembly’s task force on facial recognition and biometric technologies, has advanced through a committee with […]

Microsoft Entra (Azure AD) Blog

Microsoft Entra announcements and demos at RSAC 2024


The Microsoft Entra team is looking forward to connecting with you next week at RSA Conference 2024 (RSAC) from May 6 to 9, 2024, in San Francisco! As we enter the age of AI and there are more identities and access points to protect, identity security has never been more paramount. From protecting workforce and external identities to non-human identities—that outnumber human identities 10 to 1—the task of securing access and the interactions between them requires taking a more comprehensive approach.  


To help customers protect every identity and every access point, I’d like to highlight recent innovations that we’ll be showcasing at this upcoming event: 

- Expanded passkey support for Microsoft Entra ID
- Microsoft Entra ID external authentication methods
- Microsoft Entra External ID general availability
- Microsoft Entra Permissions Management and Microsoft Defender for Cloud integration general availability
- Our vision for cloud access management to strengthen multicloud security

We will be demonstrating these new innovations and sharing more about how to take a holistic approach to identity and access at RSA Conference 2024 (see the table at the end of this blog for more information). Now, let’s take a closer look at Microsoft Entra innovations that we’ll be showcasing at RSAC. 


Expanded passkey support for Microsoft Entra ID     


In addition to supporting sign-ins via a passkey hosted on a hardware security key, Microsoft Entra ID now includes additional support for device-bound passkeys in the Microsoft Authenticator app on iOS and Android. This will bring strong and convenient authentication to mobile devices for customers with the strictest security requirements. 


A passkey is a strong, phishing-resistant authentication method you can use to sign in to any internet resource that supports the W3C WebAuthn standard. Passkeys represent the continuing evolution of the FIDO2 standard, aimed at creating a secure and user-friendly passwordless experience for everyone.


To learn more about using passkeys in the Microsoft Authenticator app, check out this blog.   


Microsoft Entra ID external authentication methods 


While organizations increasingly choose to unify their multifactor authentication (MFA) and access management solutions to simplify their identity architectures, some organizations have already deployed MFA and want to use their pre-existing MFA provider with Microsoft Entra ID. External authentication methods allow organizations to use any MFA solution to meet the MFA requirement in Entra ID.


At launch, external authentication methods integrations will be available with the following identity providers: Cisco, ENTRUST, HYPR, Ping, RSA, SILVERFORT, Symantec, THALES, and TrustBuilder.  


Read our documentation to learn more.  


Microsoft Entra External ID general availability 


Our next-generation, developer-friendly customer identity and access management (CIAM) solution, Microsoft Entra External ID, will become generally available on May 15, 2024. Whether you're building applications for partners, business customers, or consumers, External ID makes secure and customizable CIAM simple. External ID enables you to:

- Secure all identities with a single platform
- Streamline secure collaboration
- Create frictionless end user experiences
- Accelerate the development of secure applications

Learn more about External ID by reading our announcement blog!  


Microsoft Entra Permissions Management and Microsoft Defender for Cloud integration general availability 


Deploying applications and infrastructure across multiple clouds has become the norm. Ensuring the security of cloud applications and infrastructure requires integrating identity and permission insights into the overall security strategy. This objective is achieved through the integration of Microsoft Entra Permissions Management with Microsoft Defender for Cloud (MDC), which will become generally available in May.


The integration streamlines access and permission insights into other cloud postures through a unified interface. Customers benefit from recommendations on mitigating risks within the MDC dashboard, including unused identities, overprivileged permissions, and unused super identities. This facilitates the enforcement of least privilege access for cloud resources across Azure, Amazon Web Services, and Google Cloud Platform. 


Our vision for cloud access management to strengthen multicloud security 


Deploying applications and infrastructure across multiple clouds has become common in today’s business landscape. At Microsoft, we have long prioritized the protection of customers’ environments, regardless of the number of clouds they use or the providers they choose.  


Our recent 2024 State of Multicloud Security Risk Report reconfirms the importance of securing access in multicloud and presents valuable findings based on one year of actual usage data to enhance organizations’ understanding of their risks and facilitate the development of effective mitigation strategies. Key findings related to access and permissions include: 

- Only 2% of the 51,000 permissions granted to human and workload identities in 2023 were utilized, with 50% of these permissions classified as high-risk.
- More than 50% of identities are identified as super identities, indicating they have access to all permissions and resources within the multicloud environment.

Above all, this report confirms that the complexity of multicloud risk continues to grow. Coupled with the increase in cyberattacks targeting identities, especially those assigned to non-human entities, security teams are overwhelmed. Consequently, organizations are shifting priorities from infrastructure protection to actively monitoring and securing interactions between human and workload identities accessing corporate cloud resources. 


We believe Microsoft can help address these challenges with our new vision for cloud access management, offering visibility into all identities and permissions in use, along with proactive risk detection to enhance protection and management of your environment. We will continue our journey to secure access to resources anywhere by developing a new converged platform that encompasses four key solution areas critical for organizations, based on our continuous engagements with customers: 

- Cloud Infrastructure Entitlement Management (CIEM)
- Privileged Access Management (PAM)
- Identity Governance and Administration (IGA)
- Workload Identity and Access Management (IAM)

Stay tuned to learn more about our vision in the coming weeks.  


Where to find Microsoft Entra at RSAC 2024  


We’re excited to connect with you at RSAC 2024 and discuss the latest innovations to Microsoft Entra. Please join us at the following identity sessions: 


Session Title: Lesson Learned - General Motors Road to Modern Consumer Identity
Session Description: This demo-heavy session will provide key insights into the architectural decisions made by General Motors and the lessons learned establishing a secure and resilient customer identity platform powered by Microsoft Cloud for a consistent set of user experiences across all its global customer touchpoints, including web, mobile apps, in-vehicle applications, and backend services.
Date and time: Tuesday, May 7, 2024, 1:15 PM - 2:05 PM PT

Session Title: The Storm-0558 Attack - Inside Microsoft Identity Security's Response
Session Description: In June 2023, China-based actor Storm-0558 successfully forged tokens to access customer email in 22 agencies using an acquired signing key. This session will walk you through the insider's view of the attack, investigation, mitigation, and repairs resulting from this attack with a focus on what worked and what didn't when defending against this APT actor.
Date and time: Thursday, May 9, 2024, 12:20 PM - 1:10 PM PT

Stop by our booth #6044N to check out our theater sessions! 


Start your CIAM Journey: Secure external identities, streamline collaboration and accelerate your business! 

As you expand your business, protecting all external identities, such as customers, business guests and partners, is essential. In this session, we will demonstrate how Microsoft Entra External ID is a single solution that helps you integrate security into your apps, safeguarding external identities with adaptive access policies, verifiable credentials, built-in identity governance, and more. We will also showcase how to streamline collaboration by inviting business guests and defining what internal resources they can access across Teams, SharePoint and OneDrive.  

Tuesday May 7, 2024, 3:00-3:20PM 

Microsoft Entra and Copilot: Skills you can use for protecting identities and access 

Get an overview of the latest Microsoft Entra skills available via Copilot for Security to help your organization protect against identity threats and increase efficiency in managing and governing access. 

Tuesday May 7, 2024, 3:30-3:50PM 

Modernize your network access with Microsoft’s Security Service Edge Solution 

In today’s dynamic landscape, securing access to critical applications and resources is more crucial than ever. The identity-centric Security Service Edge (SSE) solution in Microsoft Entra takes Conditional Access to a new level, protecting any network destination with granular access controls that consider identity, device, and network. Join us to learn how you can secure access for anyone to anything from anywhere with unified identity and network access. 

Wednesday May 8, 2024, 2:30-2:50PM 

Bringing Passkey into your Passwordless Journey 

Most of our customers are either deploying some form of passwordless credential or are planning to in the next few years. However, the industry is all abuzz with excitement about passkeys. What are passkeys, and what do they mean for your organization's passwordless journey? Join the Microsoft Entra product team as we walk you through the background of where passkeys came from, their impact on the passwordless ecosystem, and the product features and roadmap bringing passkeys into the Microsoft Entra passwordless portfolio and phishing-resistant strategy.

Thursday May 9, 2024, 12:00-12:20PM 


We can’t wait to see you in San Francisco for RSA Conference 2024! 


Irina Nechaeva, 

General Manager of Identity & Network Access 


FindBiometrics

AI Update: ‘The Dumbest Model Any of You Will Have to Use Again’

Welcome to the newest edition of FindBiometrics’ AI update. Here’s the latest big news on the shifting landscape of AI and identity technology: OpenAI CEO Sam Altman says that GPT-5, […]

SC Media - Identity and Access

Critical GitLab account takeover flaw added to CISA’s KEV Catalog

More than 2,100 servers may still be vulnerable to GitLab password reset exploits.



This week in identity

E51 - Microsoft Entra External IDs / Cisco and StrongDM / CEO view on Cyber


This week Simon and David return with a weekly dose of industry analysis on the global identity and access management space. First up, a discussion of Microsoft announcing the GA of its Entra External ID: who is it aimed at? Is it groundbreaking? Next up is Cisco, which announced an investment round into next-gen PAM provider StrongDM. Finally, they discuss a great interview with Standard Chartered CEO Bill Winters and his view of cyber in the boardroom and its strategic value.


Northern Block

Mobile Driving Licenses (mDL) in 2024 (with Sylvia Arndt)

Discover the future of identity verification with mobile driver's licenses. Join Sylvia Arndt and Mathieu Glaude on The SSI Orbit Podcast for insights. The post Mobile Driving Licenses (mDL) in 2024 (with Sylvia Arndt) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

🎥 Watch this Episode on YouTube 🎥
🎧   Listen to this Episode On Spotify   🎧
🎧   Listen to this Episode On Apple Podcasts   🎧

About Podcast Episode

Could digital credentials like mobile driver’s licenses be the game-changer for secure and convenient identity verification?

In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Sylvia Arndt, Vice President of Business Development, Digital Identity at ⁠Thales⁠, to explore the rapidly evolving landscape of mobile driver’s licenses (mDLs) and their potential to transform how we prove who we are.

In this conversation, you’ll learn:

- The driving forces behind governments adopting mobile driver’s licenses (mDLs), including improving service accessibility for citizens and combating fraud
- The role of organizations like AAMVA and NIST in setting standards and governance for mDL implementation
- Business opportunities unlocked by mDLs, such as enabling seamless online identity verification for industries like banking, notaries, and access management
- Potential monetization models for issuers and verifiers in the mDL ecosystem
- The rising prominence of biometric verification like facial recognition in conjunction with mDL usage

Don’t miss out on this opportunity to gain valuable insights and expand your knowledge. Tune in now and start exploring the possibilities!


Key Insights:
- Mobile driver’s licenses (mDLs) are gaining momentum as governments seek to improve service quality, combat fraud, and streamline identity verification processes.
- Organizations like AAMVA and NIST play crucial roles in setting standards and governance for mDL implementation.
- Interoperability is a key challenge, with the ISO standard for mDLs emerging as a widely adopted solution.
- Governments must decide between issuing mDLs through their own wallets or leveraging third-party wallets like those from Apple, Google, and Samsung.
- mDLs could enable seamless online identity verification for industries like banking, notary services, and access management, reducing transaction abandonment rates.
- Potential monetization models for issuers and verifiers are being explored, as the value of mDLs lies primarily in the verification side.

Strategies:
- Governments are implementing legislation to allow for the acceptance of digital forms of state-issued identities, including mDLs.
- AAMVA’s Digital Trust Service aims to facilitate cross-state verifications by providing the necessary public keys to read mDLs.
- Facial biometrics and liveness detection are expected to become more prevalent in conjunction with mDL verification for enhanced security.
- Governments and industry stakeholders are exploring ways to vet and register verifiers to ensure responsible use of mDL data.

Chapters:
00:00 – Status of Mobile Driving Licenses (mDL) in the US
7:00 – Why Government DMVs like the ISO standard for mobile driving licenses
9:25 – About AAMVA (the American Association of Motor Vehicle Administrators)
14:25 – How do governments perceive the value proposition of issuing mDLs
20:10 – General wallet strategy for DMVs in 2024
27:25 – Where are the opportunities in the mDL verification market?
41:41 – Requiring a registration process for mDL verifiers?
45:17 – Exploring possible new risk vectors that mDL introduces
50:17 – Business model for mDL issuers, and possible disruption to IDV market

Additional resources:
- Episode Transcript
- American Association of Motor Vehicle Administrators – AAMVA
- NIST SP 800-63 Digital Identity Guidelines
- ISO-compliant driving licence
- W3C Verifiable Credentials Data Model
- TSA Facial Recognition and Digital Identity Solutions

About Guest

Sylvia Arndt is a seasoned leader and Vice President of Business Development, Digital Identity at Thales, with over 20 years of experience driving organic growth through innovative software and service solutions. Sylvia excels in identifying strategic opportunities that advance markets and transform business models, with a strong focus on customer advocacy, operational excellence, and cross-functional collaboration. Her expertise spans various industries, including Computer Software, Digital Identity & Security, Aviation, Travel & Hospitality, Communications, Media & Entertainment, Energy, and Government Services. Sylvia’s international reach extends to over 50 countries, where she has worked closely with customers and business partners, demonstrating her leadership in business strategy, product management, operations strategy, and digital transformation.

LinkedIn: linkedin.com/in/sylvia-arndt

  The post Mobile Driving Licenses (mDL) in 2024 (with Sylvia Arndt) appeared first on Northern Block | Self Sovereign Identity Solution Provider.



SC Media - Identity and Access

Data breach impacts Airsoft community site

Major airsoft game host and equipment renter Airsoft C3 had the sensitive data of 75,000 members of its enthusiast community website compromised due to a Google Cloud Storage bucket misconfiguration, indicating a significant threat to the U.S. airsoft community, according to Cybernews.



Ocean Protocol

Passive & Volume Data Farming Airdrop Has Completed; They Are Now Retired

Claim your rewards now. Predictoor DF Forges Ahead. More DF streams to come.

Summary

This article starts by reviewing Ocean Data Farming (DF), the ASI Alliance, and how an ASI “yes” vote would affect Data Farming.

The “yes” happened. This triggered the follow-up actions:

- We just completed an airdrop to veOCEAN holders. You can now claim OCEAN via the DF Webapp (https://df.oceandao.org/rewards).
- We have retired Passive & Volume DF.
- Predictoor DF continues, with room to scale up Predictoor DF and introduce new incentive streams.

1. Background

1.1 Ocean Data Farming

Data Farming (DF) is Ocean Protocol’s incentive program. Rewards are weekly. DF has traditionally had three streams / substreams:

- Passive DF. Users lock OCEAN for veOCEAN. The longer you lock or the more OCEAN you lock, the more veOCEAN you get. Rewards are pro-rata to veOCEAN holdings (see the sketch after this list). 150,000 OCEAN/week.
- Volume DF. Users allocate veOCEAN towards data assets with high data consume volume (DCV), in a curation function. Rewards are a function of DCV and veOCEAN stake. Up to 112,500 OCEAN/week.
- Predictoor DF. Run prediction bots to earn continuously. 37,500 OCEAN/week.
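
As a rough illustration of the pro-rata Passive DF split described above, here is a minimal sketch. The weekly budget comes from the figures above; all names are illustrative, not Ocean Protocol code:

```python
# Minimal sketch of the pro-rata Passive DF split described above.
# 150,000 OCEAN/week is the Passive DF budget; names are illustrative.
WEEKLY_PASSIVE_BUDGET = 150_000  # OCEAN per week

def passive_df_reward(user_veocean: float, total_veocean: float) -> float:
    """A user's weekly Passive DF reward, pro-rata to veOCEAN holdings."""
    if total_veocean <= 0:
        return 0.0
    return WEEKLY_PASSIVE_BUDGET * user_veocean / total_veocean

# Example: holding 1% of all veOCEAN earns 1,500 OCEAN that week.
print(passive_df_reward(10_000, 1_000_000))  # 1500.0
```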

Most user DF interactions are via the DF webapp at df.oceandao.org.

1.2 ASI Alliance

Ocean Protocol has been working with Fetch.ai and SingularityNET to form the ASI Alliance, with a unified token $ASI. This Mar 27, 2024 article describes the key mechanisms. It needed a “yes” vote from the Fetch and SingularityNET communities.

1.3 ASI Alliance Impact on DF

In the event of a “yes”, there were important implications for Ocean Data Farming and veOCEAN. This Mar 29, 2024 post describes them. From that post:

To be ready for either outcome [yes or no], we will pause giving rewards for Passive DF and Volume DF as soon as the DF82 payout of Thu Mar 28 has completed. Also in preparation, we have taken a snapshot of OCEAN locked & veOCEAN balances as of 00:00 am UTC Wed Mar 27 (Ethereum block 19522003) …
Predictoor DF will continue regardless of voting outcome.

And the section “Actions if ‘yes’ “ held the following key information about veOCEAN, Passive DF, and Volume DF.

veOCEAN will be retired. …
Passive DF & Volume DF will be retired.
People who have locked OCEAN for veOCEAN will be made whole, as follows.
Each address holding veOCEAN will be airdropped OCEAN in the amount of:
(1.25^years_til_unlock-1) * num_OCEAN_locked
In words: veOCEAN holders get a reward as if they had got payouts of 25% APY for their whole lock period (and kept re-upping their lock). But they get the payout soon, rather than over years of weekly DF rewards payouts. It’s otherwise the same commitment, expectations and payout as before.
This airdrop will happen within weeks after the “yes” vote.
That same address will have its OCEAN unlocked according to its normal veOCEAN mechanics and timeline (up to 4 years). After unlock, that account holder can convert the $OCEAN directly into $ASI with the published exchange rate.
Any actions taken by an account on locking / re-locking veOCEAN after the time of the snapshot will be ignored. …

The post also held key information about psdnOCEAN, predictoor DF, and the future of DF.

psdnOCEAN holders will be able to swap back to OCEAN with a fixed-rate contract. For each 1 psdnOCEAN swapped they will receive >1 OCEAN at a respectable ROI. …
Predictoor DF continues. …
Ocean Protocol Foundation will re-use the DF budget for its incentives programs. These can include: scaling up Predictoor DF [and more].

(We added bold font to help cross-referencing with the “actions” section below.)

2. A “Yes” Happened

As of Apr 16, the vote had concluded. The result was a “yes”.

Artificial Superintelligence Alliance on Twitter: "🎉 It's official! The ASI Alliance is launching - the world's largest decentralized network for accelerating AGI and ASI. Stay tuned for updates on our multi-billion token merger and the incredible things to come! @SingularityNET @Fetch_ai @oceanprotocol pic.twitter.com/Ewh99LlOIY"

3. Actions Completed Due to “Yes”

As promised to the Ocean community, we have completed the “Actions if ‘yes’” summarized above. Here are the specifics.

3.1 Promise: veOCEAN will be retired

✅ Action completed: veOCEAN is retired.

The DF webapp functionality to lock veOCEAN is removed. Incentives to lock OCEAN into veOCEAN have been turned off [1].

The DF webapp functionality to withdraw locked OCEAN has been retained; therefore when time passes and the token comes unlocked (up to 4 years), the user can come and withdraw their OCEAN.

3.2 Promise: Passive DF … will be retired

✅ Action completed: Passive DF is retired. (And you should claim your past Passive DF rewards.)

One can no longer enter into Passive DF because it relies on veOCEAN, which is retired. Passive DF rewards are permanently stopped.

To claim your past Passive DF rewards:

- Go to the DF Rewards page at https://df.oceandao.org/rewards.
- In the “Passive Rewards” section, click the “Claim All” button.

This webapp functionality will remain live until Aug 1, 2024. After that, you will have to use the Etherscan interface to claim rewards, which is more complex. (Same for Volume DF & airdrop below.)

3.3 Promise: Volume DF will be retired

✅ Action completed: Volume DF is retired. (And you should claim your past Volume DF rewards.)

One can no longer enter into Volume DF because it relies on veOCEAN, which is retired. Volume DF rewards are permanently stopped.

To claim your past Volume DF rewards:

- Go to the DF Rewards page at https://df.oceandao.org/rewards.
- In the “Active Rewards” section, click the “Revoke Token Lock Approval + Claim All” button. This will claim both past Volume DF Rewards and DF Airdrop Rewards.

3.4 Promise: Each address holding veOCEAN will be airdropped OCEAN

[according to the formula, using the Mar 27 snapshot]

✅ Action completed: Each address holding veOCEAN has been airdropped OCEAN. You should claim your airdrop rewards.

The OCEAN reward amount is according to the formula, using the Mar 27 snapshot (as discussed above). The reward is as if you had got payouts of 25% APY for your whole veOCEAN lock period (and kept re-upping your lock). The Appendix gives an example of payout amounts.

Here is the reward per address.

To claim DF Airdrop rewards:

It’s all inside the “Active Rewards” section, just like Volume DF. Therefore:

- Go to the DF Rewards page at https://df.oceandao.org/rewards.
- In the “Active Rewards” section, click the “Revoke Token Lock Approval + Claim All” button. This will claim both past Volume DF Rewards and DF Airdrop Rewards.

3.5 Promise: psdnOCEAN reward with respectable ROI

Promise expanded: holders will be able to swap back to OCEAN with a fixed-rate contract. For each 1 psdnOCEAN swapped they will receive >1 OCEAN at a respectable ROI.

✅ Action, over the next several days: The ROI is set to 1.25. There are a small number (17) of psdnOCEAN holders, so we are handling them manually. Here are our steps:

- Start from this GSheet “snapshot of psdnOCEAN balances” page.
- Have each user send their psdnOCEAN to 0xad0A852F968e19cbCB350AB9426276685651ce41 (DF Treasury multisig).
- For each inbound tx, we will log it in the GSheet “transactions” page.
- Then we will compute the user’s OCEAN reward and send it to the user.

3.6 Promise: Predictoor DF continues

✅ Action completed: Predictoor DF has continued while the other DF substreams were paused. It keeps going.

3.7 Promise: Ocean Protocol Foundation will re-use the DF budget for its incentives programs

✅ Action on track: We plan to scale up Predictoor DF rewards over time, especially as it hits development milestones [Ref 2024 roadmap, sec 2.2].

Other potential DF incentives include rewards for running Unified Backend nodes [roadmap, sec 3.2] and for decentralized large-scale model training to support a world model on ground-truth physics [roadmap, sec 2.2].

4. Conclusion

This article reviewed Ocean Data Farming (DF), the ASI Alliance, and how an ASI “yes” vote would affect Data Farming.

The “yes” happened. This triggered the follow-up actions:

- We just completed an airdrop to veOCEAN holders. Users can now claim OCEAN via the DF webapp (https://df.oceandao.org/rewards).
- We have retired Passive & Volume DF.
- Predictoor DF continues, with room to scale up Predictoor DF and introduce new incentive streams.

Appendix: Worked DF Airdrop Example

Example: Alice recently locked 100 OCEAN for a duration of four years and received 100 veOCEAN in her account.

- She will be airdropped (1.25⁴ − 1) × 100 = (2.44 − 1) × 100 ≈ 144 OCEAN soon after the “yes” vote.
- In four years, her initial 100 OCEAN will unlock.
- In total, Alice will have received 244 OCEAN (144 soon, 100 in 4 years). Her return is approximately the same as if she’d used Passive DF & Volume DF for 4 years at 25% APY, that is: 1.25⁴ × 100 = 2.44 × 100 ≈ 244 OCEAN. Yet this updated scheme benefits her more, because 144 of that 244 OCEAN is liquid soon.

Notes

[1] We can’t actually turn off the veOCEAN contract on the Ethereum mainnet, so someone could still lock OCEAN for months to years by talking directly to that contract, and then see their OCEAN unlock months to years later. Any user is free to do so, but there’s no real incentive.
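
For the technically curious, here is a rough sketch of what “talking directly to that contract” looks like, assuming the Curve-style voting-escrow interface (locked(), withdraw()) that veOCEAN is derived from; the RPC URL, contract address, and ABI fragment are placeholders to verify against the official deployment before any real use.

```python
# Rough sketch only: addresses, RPC URL, and ABI fragment are placeholders and
# must be replaced with the official veOCEAN deployment details before use.
from web3 import Web3

VE_ABI = [
    # Fragment of the Curve-style voting-escrow interface veOCEAN is derived from.
    {"name": "locked", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "addr", "type": "address"}],
     "outputs": [{"name": "amount", "type": "int128"},
                 {"name": "end", "type": "uint256"}]},
    {"name": "withdraw", "type": "function", "stateMutability": "nonpayable",
     "inputs": [], "outputs": []},
]

w3 = Web3(Web3.HTTPProvider("https://example-mainnet-rpc.invalid"))  # placeholder RPC
ve = w3.eth.contract(address="0x...veOCEAN...", abi=VE_ABI)          # placeholder address
me = "0x...your-address..."                                          # placeholder address

amount, unlock_time = ve.functions.locked(me).call()
if unlock_time <= w3.eth.get_block("latest")["timestamp"]:
    # Lock has expired: build the withdraw transaction, then sign and send it
    # with your own key management (not shown).
    tx = ve.functions.withdraw().build_transaction({"from": me})
```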

Further resources

If you have more questions about the changes and how they apply to you, you can always contact us on:

Discord: https://discord.gg/TnXjkR5

Telegram: https://t.me/oceanprotocol_community

Beware of scams

It is crucial for you to remain alert. This is a prime time for scammers to capitalize and present you with offers to trick you.

Here’s how you can stay safe:

- Check official Ocean Protocol communication: For this particular airdrop, use this blogpost as reference, as well as the information published on https://df.oceandao.org/rewards.
- Always double-check: Before engaging in any actions involving your tokens, verify the information directly from official sources. Cross-reference any announcements on the official X profile of Ocean Protocol.
- Use official websites: Manually type URLs into your browser and avoid clicking on unsolicited links. Impersonator websites may mimic official sites to deceive and steal your tokens. Our official website is oceanprotocol.com.
- Telegram and Discord safety: Only trust links in pinned notices by Admins, and avoid private message solicitations. Our admins will never proactively message you or ask you to click on links. Also, our admins do not offer ticket support in Discord.
- Independent verification: No Admin or official representative will contact you directly to assist with wallet operations or token swaps. Always initiate contact through official channels if you need assistance.
- Keep your keys private: Never disclose your wallet’s private key or seed phrase (12, 15, 18, or 24 words) to anyone or enter them on any website.
- Responding to scams: If you suspect a scam, report it to our admins on Discord and Telegram. Beware of secondary scams offering token recovery for a fee.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable businesses and individuals to trade tokenized data assets seamlessly to manage data all along the AI model life-cycle. Ocean-powered apps include enterprise-grade data exchanges, data science competitions, and data DAOs. Follow Ocean on Twitter or TG, and chat in Discord.

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Predictoor has over $800 million in monthly volume, just six months after launch with a roadmap to scale foundation models globally. Follow Predictoor on Twitter.

Data Farming is Ocean’s incentives program.

Passive & Volume Data Farming Airdrop Has Completed; They Are Now Retired was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

We Need an Updated Strategy to Secure Identities

Like any security control, identity needs to be reevaluated as threats advance.



Tokeny Solutions

Multi-Chain Tokenization Made Simple


Product Focus

Multi-Chain Tokenization Made Simple

This content is taken from the monthly Product Focus newsletter in April 2024.

I’m excited to share the latest advancements at Tokeny, reinforcing our leadership in multi-chain tokenization capabilities to better serve innovative issuers like you.

We recognize that every issuer has unique needs. Our mission is to remove blockchain barriers with our network-agnostic tokenization platform. With our latest developments in this area, here’s how you can benefit:

Effortless Tokenization on Any EVM Chain: Our SaaS solutions and APIs empower issuers to seamlessly tokenize assets on their preferred EVM-compatible network. For integrated chains like Polygon, Avalanche, Klaytn, and Telos, issuers can quickly deploy tokens within minutes. With our scalable technology, we can promptly integrate with any EVM chain, and we are expanding to include additional chains such as Base, IOTA EVM, and new chains required by our clients.

Seamless Token Migration Between Chains: In the dynamic landscape of blockchain and financial markets, the ability to shift tokens between networks is crucial for risk management. Our team guarantees seamless token migration from one chain to another, preserving all records from the previous chain and maintaining consistent cap table views at our platform despite network transitions.

Unified Multi-Chain Management Platform: As the future of on-chain finance embraces multi-chain environments, our platform is here to support you. It offers a unified interface for you, your agents, and your investors to manage tokens, whether you’re issuing tokens on one blockchain or across multiple chains, all within a single centralized software solution.

As always, our dedicated team is committed to delivering cutting-edge solutions to equip you with the tools you need to thrive in the digital asset landscape.

Thank you for your continued support of Tokeny.

Joachim Lebrun, Head of Blockchain

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus Blogs:

- Multi-Chain Tokenization Made Simple (3 May 2024)
- Introducing Leandexer: Simplifying Blockchain Data Interaction (3 April 2024)
- Breaking Down Barriers: Integrated Wallets for Tokenized Securities (1 March 2024)
- Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy (2 February 2024)
- ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization (29 December 2023)
- Introducing Multi-Party Approval for On-chain Agreements (5 December 2023)
- The Unified Investor App is Coming… (31 October 2023)
- Introducing WalletConnect V2: Discover the New Upgrades (29 September 2023)
- Tokeny becomes the 1st tokenization platform to achieve SOC2 Type I Compliance (1 September 2023)
- Permissioned Tokens: The Key to Interoperable Distribution (28 July 2023)

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us


The post Multi-Chain Tokenization Made Simple appeared first on Tokeny.


Ocean Protocol

Revealing the Secrets of Startup Success: A Venture Capital Investments Challenge

Podium: Venture Capital Investments Data Challenge

Introduction

The Venture Capital Investments Challenge engaged data scientists and analysts to decode the complexities of startup funding and success. This challenge drew on an extensive dataset covering various aspects of the venture capital ecosystem. Key datasets included acquisitions, degrees, funding rounds, funds, investments, IPOs, milestones, objects, offices, people, relationships, and several specialized sets designed for in-depth analysis.

Participants analyzed over 66,368 entries, exploring startup funding details and investor engagement. They examined geographical impacts on startups and the influence of educational backgrounds and degrees. Career trajectories and networks from people.csv and relationships.csv also provided insights into successful entrepreneurship patterns.

Through data processing and model development, participants identified trends and predictive factors in the funding dynamics, market positions, and strategic milestones. This initiative showcased participants’ analytical capabilities and set the stage for advanced predictive modeling in investment strategies.

Winners Podium

The top submissions of this challenge were exceptional. Participants demonstrated outstanding ability in utilizing ML and AI to examine and predict startup success within the venture capital landscape and refine investment strategies. Let’s examine the top three submissions that stood out due to their thorough analytics and insightful conclusions.

1st Place: Ahan

Ahan stood out with his application of machine learning to analyze the venture capital landscape. His detailed analysis focused on the implications of founder demographics and funding dynamics on startup outcomes. He revealed that the median acquisition price among startups with disclosed values was approximately $72.6 million, with an average time from initial investment to acquisition of 695 days. This insight highlights the broad variance in startup valuations and the typical timelines investors might anticipate for returns.

Moreover, in his dataset of over 16,000 instances, Ahan identified significant disparities in success rates by founder gender, with male founders achieving a 40.3% success rate compared to 27.4% for female founders. This finding points to potential systemic biases in the venture capital industry and underscores the need for broader diversity and inclusion initiatives.

2nd Place: Dominikus

Dominikus’ entry in the Ocean Data Challenge leveraged detailed venture capital data to build a predictive model distinguishing successful and unsuccessful startups. He restructured a complex dataset into 14 subsets in his analysis, applying statistical encoding and meticulously handling missing data. His statistical models revealed significant findings: startups in the San Francisco Bay Area, affiliated with Stanford University graduates, demonstrated a 65% higher likelihood of funding success than startups in other regions and educational backgrounds.

In his evaluation, Dominikus used precise statistical methods to measure the efficacy of his models. He reported an accuracy rate of 92%, with a precision of 90% and a recall of 88%, effectively illustrating the predictive strength of his analytical approach. Additionally, the ROC curve for his model achieved an AUC of 0.91, underscoring its robustness in classifying the potential success of startups based on multiple factors, including funding history, investor relationships, and regional economic activities.
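
The submissions themselves aren’t reproduced here, but metrics like these are conventionally computed with scikit-learn; below is a generic sketch on synthetic data (the model and dataset are illustrative, not Dominikus’ actual pipeline).

```python
# Generic sketch of how reported metrics (accuracy, precision, recall, ROC AUC)
# are computed for a startup-success classifier; the data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]  # probability of the "success" class

print(f"accuracy:  {accuracy_score(y_te, pred):.2f}")
print(f"precision: {precision_score(y_te, pred):.2f}")
print(f"recall:    {recall_score(y_te, pred):.2f}")
print(f"ROC AUC:   {roc_auc_score(y_te, proba):.2f}")
```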

His analysis provided a clear view of the venture capital landscape, offering insights through correlation studies that identified the relationships influencing startup success.

3rd Place: Bhalisa

Bhalisa Sodo’s analytical project thoroughly examined the factors influencing startup success within the venture capital landscape. His method involved detailed data cleaning and segmentation, processing a comprehensive dataset to uncover the dynamics of startup funding and success. Bhalisa used statistical methods to analyze correlations between founder backgrounds, funding mechanisms, and startup outcomes, presenting a quantitative foundation for his conclusions.

In his findings, Bhalisa showed that startups linked to founders from top-tier institutions like Stanford University were 30% more likely to secure funding and achieve successful exits than others. His predictive models showed an impressive accuracy rate, with the Decision Tree Classifier achieving a classification accuracy of 98% and a recall rate of 97%, highlighting its effectiveness in identifying potentially successful startups based on early-stage data inputs.

Moreover, Bhalisa’s research revealed that startups typically received their first significant funding round within the first two years of operation, and those receiving funding within the first year showed a 60% higher probability of reaching an exit through acquisition or IPO within eight years.

His analysis also noted an increasing trend in funding amounts over time, with the average funding per round growing by 15% annually since 2010, reflecting the escalating scale and stakes within the venture capital ecosystem.

Interesting Facts

Higher Success Rates for Stanford Graduates

Startups linked to founders from Stanford University show a 30% higher success rate of securing funding and achieving successful exits than those from other universities. This trend highlights Stanford’s strong network and reputation within the venture capital ecosystem.

Annual Increase in Funding Amounts

Since 2010, the average amount raised per startup funding round has increased by 15% annually. This growth reflects the increasing confidence and investment in startups, driven by the expanding venture capital market and the success rate of technology-driven innovations.

Prevalence of AI and Tech Startups in Investment Portfolios

Over the last decade, investments in AI and technology-focused startups have increased by 35%. This trend reflects the industry’s growing recognition of the transformative potential of AI technologies across various sectors.

Influence of Educational Background on Startup Leadership

Founders with Ivy League educations are 50% more likely to hold C-level positions in their startups. This statistic highlights the strong correlation between prestigious educational backgrounds and leadership roles in high-growth startups, suggesting that education continues to play a critical role in shaping entrepreneurial success.

Gender Funding Gap in Startups

Analysis reveals that male founders receive about 30% more funding rounds and secure 50% higher funding than female founders. Moreover, male-led startups are 20% more likely to reach advanced funding stages, highlighting persistent gender biases in venture capital.

2024 Championship

Each challenge features a prize pool of $10,000, distributed among the top 10 participants. Our championship points system distributes 100 points across the top 10 finishers in each challenge, with each point valued at $100.

Top 10: Venture Capital Investments Data Challenge

By participating in challenges, contestants accumulate points toward the 2024 Championship. Last year, the top 10 champions received an extra $10 for every point they had earned.

Moreover, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists retain their intellectual property rights while we offer assistance in monetizing their creations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

Revealing the Secrets of Startup Success: A Venture Capital Investments Challenge was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Okta vs. Ping: The Best IAM for Digital Security


When it comes to selecting an Identity and Access Management (IAM) solution, the stakes are high. Your choice directly affects your organization's security, user experience, compliance, and the bottom line. To make the best choice for your organization, let's take a closer look at the major differences between two leaders in the IAM space, Okta and Ping, especially when considering an upcoming renewal.


Finicity

Tapping into Open Banking to Identify, Manage and Prevent Identity Fraud in Account Opening


Today’s consumers’ expectations for their financial interactions are changing. They require a digitally native, seamless, consistent, instantaneous experience with their financial provider right from the get-go. No longer are they willing to wait several days for identity verifications or for microdeposits to clear to start using their account.  

Yet we know that, every day, bad actors are finding new ways to break the system. As more people and businesses enter the digital economy, it’s critical that we keep them secure across all touchpoints with their accounts and beyond. Financial institutions must protect their customers’ accounts from fraud to ultimately drive primacy, grow deposits and encourage top-of-wallet behaviors, thus helping them recoup the estimated $450 average cost of acquisition.

Open banking is the thread connecting the ecosystem to make account opening faster, more secure, and more frictionless.

Here’s a common scenario that financial institutions deal with on a daily basis:  

- ‘John Doe’ opened a new checking account with ‘AcmeBank’ and is ready to fund it with another existing account he has with ‘PartnerBank’.
- How does AcmeBank know that John is the actual owner of the account at PartnerBank? Should AcmeBank proceed with posting the ACH file to the Nacha (ACH) network, letting the transaction go through?
- If John Doe were a bad actor, and AcmeBank allowed the payment to go through without doing appropriate checks, John Doe could move that money elsewhere and AcmeBank could get an unauthorized payment return from PartnerBank, resulting in fraud losses.
- Similarly, some insurance companies simply ask for account and routing number verification before disbursing funds, not verifying the identity of the receiver. Here, John Doe can impersonate another person and use his own personal details to re-direct an insurance payout or a payroll disbursement to his account.

What is the ecosystem doing about it?

New rules and guidelines are being published by Nacha – operator of ACH payments – that introduce additional risk management frameworks for ACH senders, as well as recipients. Ecosystem participants such as merchants, ecommerce platforms, lenders, and insurance providers may be required to include account verification and identity verification, multi-factor authentication, velocity tracking and KYC/KYB improvements. Mastercard is a Nacha Preferred Partner for Compliance and Risk and Fraud Prevention with a focus on account validation. 

In addition to more thorough fraud checks being conducted by originators, receivers now also must participate in fraud monitoring and flagging to reduce risk. In the above example, AcmeBank, the receiving financial institution, will also need to perform additional fraud checks.

What can you do? 

Mastercard Open Banking helps financial institutions identify, manage and tackle fraud risk on an ongoing basis.  Examples of our solutions include instant account details verification, device and identity verification. When used in conjunction with other customer fraud solutions, they help secure interactions that consumers have with their financial provider. 

Last year, Mastercard debuted Open Banking Identity Verification for the U.S. market and continues to invest in additional functionality that leverages our extensive fraud and identity networks. Before initiating a transaction, financial institutions can verify a number of factors, including:

- Confirming account ownership information, including name, address, phone and email, in real time
- Validating identity profiles and quantifying identity risk
- Examining the risk level of user activity patterns and associations to detect fraudulent behavior
- Verifying device authenticity and capturing signals of device fraud

Beyond Open Banking Identity Verification, Mastercard offers complimentary services to streamline account funding, including:  

- Account Owner Verification: A one-time API request that returns the account owner(s) name, address, email and phone number for a select account. This verifies that the bank account being linked is owned by the person opening a new account and complements KYC risk mitigation in real time.
- Account Detail Verification: Instantly authenticates and verifies account details, including account and routing numbers, to help mitigate fraud, reduce manual entry errors and maximize confidence in payment transactions.
- Account Balance Check: Easily determines account balance before moving funds to a new account. This ensures that the amount being moved to the new account is available with an accurate, real-time balance snapshot, and reduces costly NSF returns.
- Payment Success Indicator: A score that predicts a transaction’s likelihood to settle for a specific consumer “today” and up to nine days in the future.

Now let’s look at the journey again with our solutions:

1. Consumer has opened a new checking account with ‘AcmeBank’ and is ready to fund it using an existing bank account at ‘PartnerBank’.
2. Consumer agrees to T&Cs and gives permission through Mastercard’s Connect widget for their bank data to be accessed and shared with AcmeBank.
3. Consumer selects their PartnerBank account and enters banking login credentials (or biometrics where applicable).
4. Consumer selects the funding account and amount.
5. AcmeBank calls the above APIs in the background to check account and identity details in real time, then proceeds with processing the payment (see the sketch below).
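
As a rough illustration of step 5, the background checks might look like the following; the endpoint paths, field names, and values are hypothetical stand-ins, so consult the Mastercard Open Banking developer documentation for the real API.

```python
# Illustrative only: endpoint paths, field names, and values below are
# hypothetical stand-ins for the real Mastercard Open Banking APIs.
import requests

BASE = "https://openbanking.example.invalid"       # placeholder base URL
HEADERS = {"Authorization": "Bearer <app-token>"}  # placeholder credentials

account_id = "acct-123"      # the consumer-permissioned funding account
applicant_name = "John Doe"  # name supplied on the new-account application
funding_amount = 500.00

# Before posting the ACH file, the receiving bank checks ownership and
# balance of the funding account in real time.
owner = requests.get(f"{BASE}/accounts/{account_id}/owner", headers=HEADERS).json()
balance = requests.get(f"{BASE}/accounts/{account_id}/balance", headers=HEADERS).json()

if owner["name"] == applicant_name and balance["available"] >= funding_amount:
    print("checks passed: proceed with the ACH transfer")
else:
    print("mismatch or insufficient funds: flag for manual review")
```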

Get ahead and get prepared! Check out Mastercard Open Banking developer’s page for technical documentation or reach out to your Mastercard representatives to learn more. 

The post Tapping into Open Banking to Identify, Manage and Prevent Identity Fraud in Account Opening appeared first on Finicity.


KuppingerCole

1Kosmos Platform


by Martin Kuppinger

This KuppingerCole Executive View report looks at the 1Kosmos platform, a solution supporting an integrated and comprehensive approach on identity verification and passwordless authentication, backed by Distributed Ledger Technology (DLT), enabling worker, customer and resident use cases.

Thursday, 02. May 2024

FindBiometrics

Identity News Digest – May 2, 2024

Welcome to FindBiometrics’ digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: PE Firm Permira to Take Controlling Stake […]

Verida

Launching VDA with our trusted Launchpad partners


Verida will soon be launching the Verida Storage Credit Token (VDA) to the world. The Verida network keeps your private data private. It’s the first decentralized database network for owning, storing and controlling private data.

The VDA token will play a key role in the new self-sovereign data economy. To kick start this journey, Verida is partnering with launchpads Decubate, AI Tech Pad, and Sidus Pad in the lead up to the token generation event.

VDA powers the first private data DePIN for web3.

The Verida Storage Credit Token (VDA) creates a data economy, enabling accounts to interact for secure data storage, trusted sharing, and fast querying.

Users can utilize credits to store and exchange data across the network. Storage node operators can stake VDA to provide storage capacity to the network, secure private data and get rewarded.

Partnering with launchpads like Decubate, AI Tech Pad and Sidus Pad underscores Verida’s commitment to community engagement. These launchpads provide an ideal platform for Verida to introduce its token to a global audience of enthusiasts and developers.

Joining the Verida ecosystem

A growing ecosystem of partners serves as private data oracles, technical integrators and channel partners for the Verida Network. Users can utilize these providers to take control and ownership of their data, and benefit from and be rewarded for that data, all while storing it privately on the network.

We welcome the Decubate, AI Tech and Sidus communities to the Verida ecosystem. We’ll be further collaborating and developing our partnership through our shared values around data privacy and ownership. By leveraging these partnerships, Verida aims to drive adoption, foster innovation, and empower users to take control of their data securely and efficiently.

Get ready for launch

Join us and our partners as we embark on this exciting journey, forging new partnerships, and revolutionizing the future of private data storage.

As we prepare for launch, don’t forget to participate in our Galxe campaign to learn about Verida’s decentralized data network, the significance of the VDA token, explore IDO launchpads, and prepare for the upcoming Token Generation Event (TGE).

Introducing our IDO launchpads

AITECH Pad serves as a gateway to seed, private, and public sales, offering investors the opportunity to participate not only in IDOs but also in projects at earlier stages and with more favorable valuations.

SidusPad is a cutting-edge Web3 launchpad developed by a team of decentralized fundraising experts. With a focus on security and innovation, SidusPad offers users exclusive access to IDOs and vesting opportunities. Backed by a robust community of over 400,000 subscribers and 150+ KOLs, including prominent investors like Animoca Brands and OKX, SidusPad ensures that only the best projects make it onto its platform.

Decubate offers a diverse ecosystem that attracts investors and entrepreneurs alike. The platform provides strategic guidance, incubation support, and expertise in tokenomics to ensure project success. With over 30,000 active investors, Decubate is EVM-compatible and committed to delivering exceptional value and opportunities for the Web3 community.

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. With cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for a wide range of industries. With a thriving community and a commitment to transparency and security, Verida is leading the charge towards a more decentralized and user-centric digital future.
Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Launching VDA with our trusted Launchpad partners was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Microsoft Entra (Azure AD) Blog

Public preview: External authentication methods in Microsoft Entra ID


Hi folks,

 

Today I’m thrilled to share that the public preview of external authentication methods in Microsoft Entra ID is scheduled for release in the first half of May. This feature will allow you to use your preferred multifactor authentication (MFA) solution with Entra ID.

 

Deploying MFA is the single most important step to securing user identities. A Microsoft Research study of MFA effectiveness showed that the use of MFA reduced the risk of compromise by more than 99.2%! Some organizations have already deployed MFA and want to reuse that MFA solution with Entra ID. External authentication methods allow organizations to reuse any MFA solution to meet the MFA requirement with Entra ID.

 

Some of you might be familiar with custom controls. External authentication methods are the replacement for custom controls, and they provide several benefits over the custom controls approach. These include:

 

- External authentication method integration uses industry standards and supports an open model.
- External authentication methods are managed the same way as Entra methods.
- External authentication methods are supported for a wide range of Entra ID use cases (including PIM activation).

 

I've invited Greg Kinasewitz, Product Manager for Microsoft Entra ID, to tell you more about this new capability.

 

Thanks, and as always, let us know what you think!

 

Nitika Gupta

Group Product Manager

 

--

 

Hi folks,

 

Greg here. I’m super excited to walk you through some of the key capabilities of external authentication methods and readiness from partners. 

 

We’ve heard from some of you about wanting to use another MFA solution along with the power of Entra ID functionality like the rich features of Conditional Access, Identity Protection, and more. Customers using Active Directory Federation Services (AD FS) with a deployment of another MFA solution have been vocal in wanting this functionality so they can migrate from AD FS to Entra ID. Organizations that are using the Conditional Access custom controls preview have given feedback on needing a solution that enables more functionality. External authentication methods enable your users to authenticate with an external provider as part of satisfying MFA requirements in Entra ID to fill these needs.

 

What are external authentication methods, and how do you use them?

 

External authentication methods can be used to satisfy MFA requirements from Conditional Access policies, Privileged Identity Management role activation, Identity Protection risk-based policies, and Microsoft Intune device registration. They’re created and managed as part of the Entra ID authentication methods policy. This gives consistent manageability and experience with the built-in methods. You’ll add an external authentication method with the new “Add external method” button in the Entra Admin Center authentication methods management.
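
Because external methods live in the same authentication methods policy as built-in methods, that policy can also be inspected programmatically. Here is a minimal sketch using Microsoft Graph: GET /policies/authenticationMethodsPolicy is a documented v1.0 endpoint, though whether preview external methods surface in the collection is an assumption to verify.

```python
# Sketch: reading the tenant's authentication methods policy via Microsoft Graph.
# GET /policies/authenticationMethodsPolicy is documented in v1.0; whether preview
# external methods appear in the collection is an assumption to verify.
import requests

token = "<access-token-with-Policy.Read.All>"  # acquire via MSAL in practice
resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Each configuration has an id (the method) and a state (enabled/disabled).
for cfg in resp.json().get("authenticationMethodConfigurations", []):
    print(cfg.get("id"), cfg.get("state"))
```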

 

Figure 1: External authentication methods are added from and listed in authentication methods policies admin experience.

 

When a user is choosing a method to satisfy MFA, external authentication methods are listed alongside built-in methods that the user can use.

 

Figure 2: External authentication methods are shown next to the built-in methods during sign-in.

 

To learn more, check out our documentation.

 

What providers will support external authentication methods?

 

At launch, external authentication methods integrations will be available with the following identity providers. Please check with your identity provider to find out more about availability:

 

 

In addition to the providers that now have integrations in place, external authentication methods are a standards-based, open model: any authentication provider that wants to build an integration can do so by following the integration documentation.

 

We’re super excited for you to be able to start using external authentication methods to help secure your users, and we’re looking forward to your feedback!! 

 

If you want to learn more about these integrations, please visit the Microsoft booth at the RSA Conference next week. There will also be an RSA Conference session hosted by Microsoft Intelligent Security Association (MISA) where Duo will showcase their external authentication methods integration.

  

Register for our webinar on May 15 to learn more about external authentication methods, see demos, and join in the discussion.

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

FindBiometrics

PE Firm Permira to Take Controlling Stake in BioCatch

Permira, a global private equity firm, plans to acquire a majority stake in BioCatch, at a valuation of $1.3 billion. BioCatch, founded in 2011, is known for its behavioral biometrics […]

KuppingerCole

Oracle SQL Firewall


by Alexei Balaganski

It might be just my pet peeve, but I never really liked the term “Firewall”. Of course, the history of IT is full of words that have completely changed their meaning over the decades yet still occasionally cause discussions among experts. Is antivirus dead, for example, considering that they stopped making real viruses years ago?

Firewall, however, stands out even more. The original brick-and-mortar one had one purpose only: to limit the spread of fire between buildings. A real firewall does not have any holes, and surely, it cannot apply any logic to different kinds of fire… A network firewall, however, could. That was its primary purpose – to filter network traffic based on defined rules, letting “good” traffic in, and keeping malicious stuff out. Over the following decades, the concept has evolved significantly, with next-generation firewalls adding capabilities like deep packet inspection, intrusion prevention, and even identity management.

Multiple ranges of specialized products have emerged, like Web Application Firewalls specializing in protecting web apps and APIs, or even Database Firewalls designed to prevent SQL-specific attacks on relational databases. A modern firewall is thus a sophisticated solution that combines multiple layers of security, often powered by behavior analytics and artificial intelligence – a far cry from the original rules-based one. Is it even fair to continue referring to them as brick walls?

I can see you asking me already: why have I even brought this pet peeve of mine up? Well, recently I was looking at a new security tool — Oracle SQL Firewall — which the company has built into its upcoming Oracle Database 23ai release. And while I wholeheartedly agree with the product’s vision, surely, calling it just a firewall is a bit odd.

You see, all past and current firewalls (even Oracle’s own specialized Database Firewall) are operating on the network level, forming a perimeter around a resource that requires protection and filtering traffic between it and its clients. The problem is that in the modern hyperconnected world, there are so many potential network paths between sensitive data and potentially malicious actors that protecting them all appears to be impossible.

This is why the concept of data-centric security emerged years ago, focusing on protecting data itself throughout its entire lifecycle instead of constantly plugging holes in existing networks, servers, and applications. Oracle Database’s “killer feature” has always been the ability to keep all kinds of information (relational, document- and graph-based, spatial, and even vector) in one place and run complex workloads like AI training directly within the database. Integrating an additional security layer to prevent SQL-level attacks directly into the DB core is therefore a major step towards data-centric security.

The new Oracle Database 23ai adds multiple new capabilities that can also create new attack vectors. For example, using Select AI to generate SQL queries from natural language prompts is a great tool for data analysts and business application developers. But to enable it, a database must communicate with an external large language model, and conventional firewalls simply cannot protect it from potential abuse.

Figure 1: High-level overview of SQL Firewall’s architecture

Oracle SQL Firewall, on the other hand, operates directly in the database core, making sure that it cannot be bypassed, regardless of the SQL execution path – whether coming from an external client, a local privileged user, or a generative AI solution. Residing directly in the database also provides it with the necessary insight into all the activities happening there.

Over time, it learns how various clients work with the data, establishing their behavior profiles and creating policies based on actions that are allowed for specific data. These allow-lists explicitly define what SQL statements a specific database user is supposed to perform. Everything else – suspicious anomalies, zero-day attacks, and, of course, SQL injection attacks – is blocked. However, it is possible to run SQL Firewall in a permissive mode as well, just generating audit records and alerts.
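
In practice, that learn-then-enforce cycle is driven by the DBMS_SQL_FIREWALL PL/SQL package in 23ai. Here is a sketch of the flow from Python via the python-oracledb driver; the connection details are placeholders, and the procedure and parameter names should be verified against Oracle's documentation for your release.

```python
# Sketch of the capture -> allow-list -> enforce cycle; connection details are
# placeholders, and DBMS_SQL_FIREWALL procedure names should be verified
# against Oracle's 23ai documentation for your release.
import oracledb

conn = oracledb.connect(user="sec_admin", password="...", dsn="db23ai_high")
cur = conn.cursor()

# 1. Learn: record the SQL that APP_USER normally issues.
cur.execute("""
    BEGIN
        DBMS_SQL_FIREWALL.ENABLE;
        DBMS_SQL_FIREWALL.CREATE_CAPTURE(username => 'APP_USER', start_capture => TRUE);
    END;""")

# 2. After a representative workload has run, turn the capture into policy.
cur.execute("""
    BEGIN
        DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');
        DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST('APP_USER');
        -- block => FALSE would give the permissive, audit-only mode described above
        DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(
            username => 'APP_USER',
            enforce  => DBMS_SQL_FIREWALL.ENFORCE_ALL,
            block    => TRUE);
    END;""")
```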

This protection is not only ubiquitous and impossible to bypass, but also completely transparent to any client applications, local or remote. There is no need to change existing network settings, introduce a proxy service or give a third-party vendor access to your sensitive data for monitoring. As an added benefit, SQL Firewall incorporates mandatory identity checks through session context, making credential theft or abuse much more difficult.

Of course, Oracle already had several security tools with similar coverage for years, including Audit Vault and Database Firewall – and they are even more capable in a way, providing coverage for non-Oracle databases as well. However, SQL Firewall is a core function of the new 23ai release, not an additional product. It is currently available in Oracle Database Enterprise Edition and requires either Oracle Database Vault or Oracle Audit Vault and Database Firewall.

Its configuration can be managed in several ways: either using the Oracle Cloud’s UI (exposed through Oracle Data Safe) or by utilizing command line tools or APIs. Needless to say, it is available at no extra cost and has negligible performance overhead. This way, it not only implements data-centric security, but also helps enforce the “security by design” principle and facilitates the adoption of Zero Trust architectures.

So, is SQL Firewall supposed to replace all the other data security tools? Not at all; its goal is to add another layer of protection to an existing defense-in-depth stack. Often, it will, in fact, be the last line of defense, positioned directly in front of your sensitive data. Should it be called a firewall? Again, while I, personally, don’t like the term, a rose by any other name would smell as sweet… As KuppingerCole Analysts always stress – you should not look at the labels and always check the actual capabilities offered by a product.

With Oracle’s new solution, you can address two major problems at the same time: protecting databases from SQL-based attacks and implementing 100% audit coverage of database activities. Not bad for a firewall, I think…


IBM Blockchain

Deployable architecture on IBM Cloud: A look at the IaC aspects of VPC landing zone 

The VPC landing zone deployable architectures offer a set of starting templates that can be quickly adapted to fit specific requirements. The post Deployable architecture on IBM Cloud: A look at the IaC aspects of VPC landing zone appeared first on IBM Blog.

In the ever-evolving landscape of cloud infrastructure, creating a customizable and secure virtual private cloud (VPC) environment within a single region has become a necessity for many organizations. The VPC landing zone deployable architectures offer a solution to this need through a set of starting templates that can be quickly adapted to fit your specific requirements.

The VPC Landing Zone deployable architecture leverages Infrastructure as Code (IaC) principles that allow you to define your infrastructure in code and automate its deployment. This approach not only promotes consistency across deployments but also makes it easier to manage and update your VPC environment.
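
The landing-zone templates themselves are written in Terraform, but the IaC idea is easy to illustrate with a few lines against IBM's VPC API; below is a sketch assuming the ibm-vpc Python SDK (method and parameter names should be checked against the SDK documentation).

```python
# Sketch of the IaC idea in Python; the landing-zone templates themselves are
# Terraform. Method and parameter names should be checked against the
# ibm-vpc SDK documentation.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_vpc import VpcV1

service = VpcV1(authenticator=IAMAuthenticator("<api-key>"))
service.set_service_url("https://us-south.iaas.cloud.ibm.com/v1")

# Declaring the management VPC in code keeps every deployment consistent and
# repeatable, which is the core promise of Infrastructure as Code.
vpc = service.create_vpc(name="management-vpc").get_result()
print(vpc["id"], vpc["status"])
```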

One of the key features of the VPC Landing Zone is its flexibility. You can easily customize the starting templates to fit your organization’s specific needs. This could include adjusting network configurations and security settings, or adding resources like load balancers or extra block volumes.

The following patterns are starting templates that can be used to get started quickly with Landing Zone:

- VPC pattern: Deploys a simple IBM Cloud® VPC infrastructure without any compute resources like VSIs or Red Hat OpenShift clusters.
- QuickStart virtual server instances (VSI) pattern: Deploys an edge VPC with one VSI and a jump server VSI in the management VPC.
- QuickStart ROKS pattern: Deploys one ROKS cluster in the workload VPC with two worker nodes.
- Virtual server (VSI) pattern: Deploys identical virtual servers across the VSI subnet tier in each VPC.
- Red Hat® OpenShift® pattern: The Red Hat OpenShift Kubernetes (ROKS) pattern deploys identical clusters across the VSI subnet tier in each VPC.

Patterns that follow the best practices:

- Create a resource group to organize and manage cloud services and VPCs.
- Set up Cloud Object Storage instances to store flow logs and Activity Tracker data. This allows for long-term storage and analytics of flow logs and Activity Tracker data.
- Store encryption keys in Key Protect or Hyper Protect Crypto Services instances. This provides a secure and centralized location for managing encryption keys.
- Create a management VPC for managing and controlling network traffic, and a workload VPC for running applications and services. Connect the management and workload VPCs using a transit gateway.
- Set up flow log collectors in each VPC to collect and analyze network traffic data. This provides visibility and insights into network traffic patterns and performance.
- Implement necessary networking rules to allow communication between VPCs, instances, and services. This includes security groups, network ACLs, and route tables.
- Set up VPEs for Cloud Object Storage in each VPC. This provides secure and private access to Cloud Object Storage from within each VPC.
- Set up a VPN gateway in the management VPC. This provides secure and encrypted connectivity between the management VPC and on-premises networks.

Landing Zone patterns

Let’s explore the Landing Zone patterns to gain a comprehensive understanding of their underlying concepts and applications. 

1. VPC Pattern 

The VPC Pattern architecture stands out as a modular solution that offers a robust foundation upon which to build or deploy compute resources as needed. Whether you’re looking to enhance your cloud environment with VSIs, Red Hat OpenShift clusters, or any other compute resources, this architecture provides the flexibility to do so. This approach not only simplifies the deployment process but also ensures that your cloud infrastructure remains adaptable and secure, meeting the evolving needs of your projects. 

Fig: Architecture diagram for the no compute pattern on VPC landing zone

2. QuickStart VSI pattern

The QuickStart VSI pattern involves deploying an edge VPC with one VSI in one of three subnets and a load balancer in the edge VPC. Additionally, it includes a jump server VSI in the management VPC that exposes a public floating IP address. While this pattern is useful for getting started quickly, it is important to note that it does not guarantee high availability or validation within the IBM Cloud for Financial Services® framework.

Fig: Architecture diagram for the QuickStart variation of VSI on VPC landing zone

3. QuickStart ROKS pattern

The QuickStart ROKS pattern consists of a management VPC with one subnet, an allow-all ACL, and a security group. The workload VPC has two subnets in two different availability zones, also with an allow-all ACL and security group. A transit gateway is used to connect the management and workload VPCs. There is also one ROKS cluster deployed in the workload VPC, consisting of two worker nodes, with its public endpoint enabled. For added security, Key Protect is used for encryption of the cluster keys, and a Cloud Object Storage instance is set up as a required component for the ROKS cluster.

Fig: Architecture diagram for the QuickStart variation of ROKS on VPC landing zone

4. Virtual server pattern

The VSI pattern architecture in question supports the creation of a VSI on a VPC landing zone within the IBM Cloud environment. The VPC landing zone itself is a critical component of IBM Cloud’s secure infrastructure services, designed to provide a secure foundation for deploying and managing workloads. The VSI on VPC landing zone architecture is specifically tailored for creating a secure infrastructure with virtual servers to run workloads on a VPC network. 

Fig: Architecture diagram for the Standard variation of VSI on VPC landing zone

5. Red Hat OpenShift pattern

The ROKS pattern architecture supports the creation and deployment of a Red Hat OpenShift Container Platform within a VPC landing zone in a single-region configuration on IBM Cloud. This allows for the management and execution of container applications within an isolated and secure environment, which provide the necessary resources and services to support their functionality. The use of a single-region architecture helps simplify the setup and management of the OpenShift platform while also making sure that all components are located within the same geographical region, reducing latency and improving performance for applications deployed within this environment. By leveraging IBM Cloud’s VPC landing zone, organizations can easily set up and manage their container infrastructure, enabling them to quickly and efficiently deploy and manage their container applications within a secure and scalable environment. 

Fig: Architecture diagram of the OpenShift Container Platform on VPC deployable architecture.

Evaluating an IBM Cloud deployable architecture

When choosing a VPC landing zone pattern, it’s crucial to consider the advantages and disadvantages of each option, as each has its distinct pros and cons. The most suitable pattern will depend on the unique needs and objectives of your organization or project. To make a well-informed decision, assess key factors such as scalability, security, cost, and ease of management. By thoughtfully evaluating these factors and understanding your project’s requirements, you can select the most suitable VPC landing zone pattern for your needs, ensuring the success of your project. 

For more detailed guidance on selecting the right VPC landing zone pattern, read the article, which provides valuable insights and practical tips to help you make the best choice for your specific use case. 

While IBM Cloud pre-built deployable architectures provide a solid foundation for most use cases, there may be situations where customization or extension is necessary. For these situations, refer to this tutorial for a deeper dive into the customization process. To accelerate your development, start by leveraging an IBM Cloud deployable architecture and adapt it to meet your unique requirements. 

The post Deployable architecture on IBM Cloud: A look at the IaC aspects of VPC landing zone  appeared first on IBM Blog.


SC Media - Identity and Access

Qantas inadvertently exposes passenger information

Australian flag carrier Qantas had their customers' information unintentionally leaked as a result of a technology issue in its mobile app, CNBC reports.



Microsoft Entra (Azure AD) Blog

Public preview: Expanding passkey support in Microsoft Entra ID


We really, really want to eliminate passwords. There’s really nothing anyone can do to make them better. As more users have adopted multifactor authentication (MFA), attackers have increased their use of Adversary-in-the-Middle (AitM) phishing and social engineering attacks, which trick people into revealing their credentials.  

 

How can we defeat these attacks while making safe sign-in even easier? Passkeys!  

 

A passkey is a strong, phishing-resistant authentication method you can use to sign in to any internet resource that supports the W3C WebAuthN standard. Passkeys represent the continuing evolution of the FIDO2 standard, which should be familiar to anyone who’s followed or joined the passwordless movement. We already support signing into Entra ID using a passkey hosted on a hardware security key and today, we’re delighted to announce additional support for passkeys. Specifically, we’re adding support for device-bound passkeys in the Microsoft Authenticator app on iOS and Android for customers with the strictest security requirements.

 

Before we describe the new capabilities we’re adding to Microsoft Authenticator, let’s review the basics of passkeys.

 

Passkeys neutralize phishing attempts

 

Passkeys provide high security assurance by applying public-private key cryptography and requiring direct interaction with the user. As I detailed in a previous blog, passkeys benefit from “Verifier Impersonation Resistance": 

 

- URL-specific: The provisioning process for passkeys records the relying party’s URL, so the passkey will only work for sites with that same URL.
- Device-specific: The relying party will only grant access to the user if the passkey is synched, stored, or connected to the device from which they’re requesting access.
- User-specific: The user must prove they’re physically present during authentication, usually by performing a gesture on the device from which they’re requesting access.

 

Together, these characteristics make passkeys almost impossible to phish.
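
For the curious, the core of that guarantee is ordinary public-key challenge/response. Here is a conceptual sketch with the Python cryptography package; it illustrates the primitive, not the full WebAuthn ceremony.

```python
# Conceptual sketch of the public-key challenge/response at the heart of a
# passkey sign-in (not the WebAuthn protocol itself).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator creates a key pair; only the public half
# ever leaves the device.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Sign-in: the relying party sends a fresh challenge...
challenge = os.urandom(32)

# ...the authenticator signs it after local user verification (biometric/PIN)...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the relying party verifies with the stored public key. A phishing
# site never sees a reusable secret: the signature is bound to this challenge.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified")
```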

 

You can host passkeys on dedicated hardware security keys, phones, tablets, and laptops

 

Users can host their passkeys on dedicated hardware security keys (such as FIDO2 security keys) or on user devices such as phones, tablets, or PCs. Windows 10/11, iOS 17, and Android 14 are examples of user device platforms that support passkeys. Each supports signing in with a passkey hosted directly on the user device itself or by connecting to a nearby user device or security key that hosts the passkey, such as a mobile device within Bluetooth range, an NFC-enabled security key, or a USB security key plugged into the user device.

 

If your organization issues dedicated hardware security keys, you sign in by inserting your key into the USB port or tapping it on the NFC scanner and performing the PIN or biometric verification it requires.

 

To sign in using a passkey on a user device, simply scan your face or fingerprint with your device or enter your device PIN. It’s also simple to sign in to an application on a separate device, such as a new phone or a PC. Point the camera of the device hosting your passkey at the QR code displayed on the separate device and use your passkey along with your biometric or PIN to sign in. You may have already followed this process by using an Android or iPhone to sign into services such as Amazon.com.

 

Passkeys may be device-bound or syncable

 

Depending on the scenario, you may prefer a device-bound passkey or a syncable passkey.  

 

A device-bound passkey, as the name suggests, never leaves the device to which it’s issued. If you sign-in using a security key or Windows Hello, you’re using a device-bound passkey. By definition, you can’t back up or restore a device-bound passkey because during these operations the passkey would leave the hardware element. This restriction is important for organizations that must, sometimes by law, protect passkeys from any security vulnerabilities that could arise during synchronization and recovery.  

 

While they offer strong security, dedicated hardware keys can be expensive to issue and manage. If you lose, replace, or destroy the dedicated device, you must provision a brand-new passkey on a new device. And since device-bound passkeys aren’t portable or recoverable, they increase friction for people trying to move away from passwords. To simplify the experience for users who don’t operate in highly regulated environments, the industry introduced support for syncable passkeys. You can back up and recover a syncable passkey, which makes it possible to share the same passkey between devices or to restore it if you lose or upgrade your device—there’s no need to provision a new one.

 

Syncable passkeys on user client devices are easy to use, easy to manage, and offer high security

 

Syncable passkeys on user devices are exciting because they address many of the toughest usability and recoverability challenges that have confronted organizations trying to move to passwordless, phishing-resistant authentication. Hosting the passkey on the user’s device means organizations don’t have to issue or manage a separate device, and syncing it among the user’s client devices and the cloud massively reduces the expense of recovering and reissuing device-bound keys. And on top of all this, replacing passwords with passkeys thwarts more than 99% of identity attacks.

 

We expect this combination of benefits will make syncable passkeys the best option for the vast majority of users and organizations. Android and iOS devices can host syncable passkeys today, and we’re working to add support in Windows by this fall. Our roadmap for 2024 includes support for both device-bound and syncable passkeys in Microsoft Entra ID and Microsoft consumer accounts. Stay tuned for further announcements later this year.

 

Device-bound passkeys in Microsoft Authenticator

 

Industry or governmental regulation, or other highly strict security policies, require that some enterprises and government agencies use device-bound passkeys for signing in to Microsoft Entra. This small fraction of organizations has strict requirements governing the recovery of lost credentials and preventing employees from sharing credentials with anyone else. Nonetheless, these organizations also want the usability, manageability, and deployment benefits of storing passkeys on user-client devices such as mobile phones.

 

Advantages of hosting passkeys on a user device: 

Organizations don’t have to provision dedicated hardware.
Users are less likely to lose track of their daily computing device.
It’s easy to sign in with a passkey hosted on a user device.

 

We know that device-bound keys are a must-have for many of our largest, most regulated and most security conscious customers. That’s why we’ve been collaborating with these customers, along with the broader FIDO community, to provide additional options. As part of this work, we’re adding support for device-bound passkeys in the Microsoft Authenticator app on iOS and Android. Instead of provisioning separate devices, high-security organizations can now configure Entra ID to let employees sign in using their existing phone and their device-bound passkey. Users get a familiar phone interface, including biometrics or local lockscreen PIN or password, while their organizations meet strict security requirements because users can’t sync, share, or recover any device-bound passkey hosted in Microsoft Authenticator.
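As a sketch of how an admin might scope passkey registration to approved authenticators, the Microsoft Graph call below enables an AAGUID allow-list on the FIDO2 authentication method policy. The AAGUID shown is a placeholder, not the real identifier for Microsoft Authenticator, and the exact payload shape should be confirmed against current Graph documentation.

```typescript
// Sketch: restrict FIDO2/passkey registration in Entra ID to an allow-list of
// authenticator models (AAGUIDs) via Microsoft Graph. Requires an access token
// with Policy.ReadWrite.AuthenticationMethod. The AAGUID below is a placeholder.
async function allowOnlySpecificPasskeyProviders(accessToken: string): Promise<void> {
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2",
    {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        "@odata.type": "#microsoft.graph.fido2AuthenticationMethodConfiguration",
        keyRestrictions: {
          isEnforced: true,
          enforcementType: "allow",
          aaGuids: ["00000000-0000-0000-0000-000000000000"], // placeholder AAGUID
        },
      }),
    },
  );
  if (!response.ok) throw new Error(`Graph call failed: ${response.status}`);
}
```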

 

Organizations that use device-bound passkeys trade away the benefit of the large investments that vendors such as Google (see related article) and Apple (see related article) have made in high-security, self-service passkey recovery models in exchange for meeting strict regulatory or security requirements. They become responsible for sharing and recovering device-bound passkeys, including those hosted in Microsoft Authenticator.

 

For detailed guidance on how to get started with device-bound passkeys hosted in Microsoft Authenticator, please refer to our documentation.

 

Microsoft’s commitment to passwordless authentication

 

Microsoft is continuing to enhance our support for passkeys in products such as Entra, Windows, and Microsoft accounts. Please continue to send us feedback, so we can help you eliminate passwords from your environment forever.

 

Alex Weinert 

VP Director of Identity Security, Microsoft

 

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


SC Media - Identity and Access

Data breach hits Panda Restaurants

BleepingComputer reports that major Asian-American restaurant company Panda Restaurant Group, which counts Panda Express and Hibachi-San as its subsidiaries, had the personal data of its current and former associates compromised following a breach of its corporate systems in March.



KuppingerCole

eIDAS2: A Gamechanger for Global Digital Identity – Implications and Opportunities


by Joerg Resch

As eIDAS2 prepares to go live, its implications extend far beyond the borders of the European Union, setting a new global standard for digital identity management. Organizations worldwide need to understand and prepare for these changes, ensuring they can operate effectively in a new era of digital identity. The 2024 European Identity and Cloud Conference (EIC) provides a unique opportunity to gain insights, share knowledge, and prepare for the future of digital identity, where security, privacy, and user control are at the forefront of digital transactions.

In a digital era characterized by an increasing reliance on online services and transactions, the security and reliability of digital identities has never been more critical. The European Union has taken a groundbreaking step with the publication of EU Regulation 2024/1183, officially known as eIDAS2. This new regulation, set to come into force on May 20, 2024, not only strengthens the framework for digital identities within the EU but also sets a global precedent for how digital identity services can be managed and utilized. Its implementation, which coincides with EIC 2024, could reshape organizational strategies worldwide.

What is eIDAS2?

eIDAS2 builds on the original electronic Identification, Authentication and trust Services (eIDAS) regulation aimed at enhancing trust in electronic transactions across the EU. The revision introduces several pivotal elements, most notably the European Digital Identity Wallets (EDIW). These wallets serve as secure digital tools that allow EU citizens and businesses to store, manage, and utilize personal identification data and electronic attestations of attributes seamlessly across borders. This framework ensures that every natural and legal person in the EU can access public and private services online without sacrificing control over their personal data.

The Significance of eIDAS2 for Organizations Globally

eIDAS2 is not just a regulatory framework for Europe; it is a beacon for global digital identity management. Organizations around the world should pay attention to these developments for several reasons:

Standard Setting in Digital Identity: eIDAS2 sets a high standard for privacy, security, and interoperability that could become a global benchmark. Non-EU organizations dealing with European partners will need to understand these standards to ensure compliance and smooth interactions.
Enhanced Security and Trust: With the introduction of conformity assessment bodies and certification mechanisms, eIDAS2 ensures that digital identity tools and services are reliable and secure. This level of trustworthiness is something organizations worldwide might emulate to enhance their digital identity solutions.
Innovation in Identity Management: The EDIW promotes innovation in how identities and attributes are managed and utilized. Organizations can use this model to develop similar solutions, improving customer experiences and operational efficiencies.

Implications for Accessing Services

A key component of eIDAS2 is its inclusivity. The regulation mandates that the use of the EDIW is voluntary and that services cannot discriminate against those who choose not to use digital wallets. This principle may influence global service delivery models, emphasizing the need for flexibility in how services and identities are managed digitally.

Relevance to the Global Digital Economy

The digital economy is inherently borderless, where services and goods traverse national boundaries in milliseconds. The eIDAS2 framework facilitates this movement in the EU, potentially creating a ripple effect worldwide as other regions seek to ensure their digital identity systems are interoperable with Europe's. This alignment could lead to smoother transactions, enhanced security, and a more connected global digital economy.

European Digital Identity Wallets: A Closer Look

EDIWs are at the heart of eIDAS2. They allow users to control their identity data fully, choosing when and how much information to share when accessing services. This user-centric approach not only enhances privacy but also empowers individuals, fostering a more trustful digital environment. For organizations, understanding how these wallets work and integrating compatible systems will be crucial.

OpenID4VC: Enhancing the European Digital Identity Wallet

One of the core elements of the EDIW is OpenID for Verifiable Credentials (OpenID4VC), a protocol that stands to revolutionize the way verifiable credentials are exchanged and managed within the eIDAS2 framework. OpenID4VC facilitates the secure and seamless exchange of credentials between issuers, holders, and verifiers, making it a pivotal component in the implementation of the EDIW.

This protocol not only simplifies the process of verifying credentials in real time but also ensures that all transactions adhere to the highest standards of security and privacy mandated by eIDAS2. By integrating OpenID4VC, the EDIW allows users to assert personal data or attributes stored in their wallets without revealing any more information than necessary. This capability is crucial for maintaining user privacy and control over personal information. For organizations globally, understanding and implementing OpenID4VC will be essential to interact efficiently with European entities under the new regulations. The protocol's adoption could also set a precedent for similar initiatives worldwide, promoting a more interconnected and interoperable digital identity landscape. The integration of OpenID4VC into the EDIW exemplifies the EU’s commitment to pioneering advanced, user-centric digital identity solutions that could influence future developments in global digital identity frameworks.
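To make the exchange concrete, here is a simplified sketch of the kind of request a verifier might send a wallet under OpenID for Verifiable Presentations, the presentation half of the OpenID4VC family. The field names follow the public OpenID4VP drafts, but the verifier URL and credential claims are invented for illustration, and the exact profile eIDAS2 wallets will mandate is still being finalized.

```typescript
// Illustrative OpenID4VP authorization request: a verifier asks a wallet to
// present only the claims it needs (here, proof that the holder is over 18)
// rather than the full credential. Shapes follow the public OpenID4VP drafts.
const presentationRequest = {
  response_type: "vp_token",                  // ask for a verifiable presentation
  client_id: "https://verifier.example.com",  // hypothetical verifier
  nonce: "n-0S6_WzA2Mj",                      // binds the presentation to this request
  presentation_definition: {
    id: "age-check",
    input_descriptors: [
      {
        id: "pid-age-over-18",
        constraints: {
          fields: [
            {
              // Selective disclosure: only the age_over_18 attribute is requested.
              path: ["$.credentialSubject.age_over_18"],
              filter: { type: "boolean", const: true },
            },
          ],
        },
      },
    ],
  },
};

console.log(JSON.stringify(presentationRequest, null, 2));
```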

eIDAS 2 is the Key Topic at EIC

The fact that EIC, Europe's leading conference on Digital ID coincides with the enforcement of eIDAS2 is serendipitous for all stakeholders. This convergence will provide a platform for immediate feedback, discussions, and strategy development among policymakers, industry leaders, and technology developers. For attendees, it offers a firsthand look at the regulation's rollout and immediate implications, making it an essential event for anyone involved in digital identity, cybersecurity, or European market operations. Join Europe’s identity community at #EIC2024 to learn more about eIDAS 2 in Germany: Progress, Impact, Challenges; eIDAS Architecture Reference Framework Status and Progress; eIDAS 2, the Protocol Challenge and the Art of Timing; and EUDI Wallet Use Cases and hear top-level discussions on The Future History of Identity Integrity, the Latest on eIDAS Legislation and What it Means for People, Business and Government, Real-World Examples of How Smart Wallets will Transform how we Navigate our Digital World, and The Wallets We Want. To discover all the other sessions dedicated to eIDAS2 as well as what else EIC has in store, have a look at the Agenda Overview.


Ocean Protocol

DF87 Completes and DF88 Launches

Predictoor DF87 rewards available. Passive DF & Volume DF will be retired; airdrop pending. DF88 runs May 2 — May 9, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions via Predictoor.

Ocean Protocol is joining with Fetch and SingularityNET to form the Superintelligence Alliance, with a unified token $ASI. This Mar 27, 2024 article describes the key mechanisms. This merge was pending a “yes” vote from the Fetch and SingularityNET communities. As of Apr 16, 2024: it was a “yes” from both; therefore the merge is happening.
The merge has important implications for veOCEAN and Data Farming:
veOCEAN will be retired.
Passive DF & Volume DF rewards have stopped, and will be retired.
Each address holding veOCEAN will be airdropped OCEAN in the amount of: (1.25^years_til_unlock - 1) * num_OCEAN_locked. This airdrop will happen within weeks after the “yes” vote.
The value num_OCEAN_locked is a snapshot of OCEAN locked & veOCEAN balances as of 00:00 am UTC Wed Mar 27 (Ethereum block 19522003).
The article “Superintelligence Alliance Updates to Data Farming and veOCEAN” elaborates.
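For clarity, the stated airdrop formula can be computed directly. A minimal sketch (with made-up example values, not real balances):

```typescript
// Sketch of the announced airdrop formula:
//   airdrop = (1.25 ^ years_til_unlock - 1) * num_OCEAN_locked
// The inputs below are illustrative examples, not real balances.
function airdropAmount(yearsTilUnlock: number, numOceanLocked: number): number {
  return (Math.pow(1.25, yearsTilUnlock) - 1) * numOceanLocked;
}

// e.g. 10,000 OCEAN locked with 2 years until unlock:
console.log(airdropAmount(2, 10_000)); // 5625 OCEAN, since 1.25^2 - 1 = 0.5625
```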

Data Farming Round 87 (DF87) has completed. Passive DF & Volume DF rewards are stopped, and will be retired. Predictoor DF claims run continuously.

DF88 is live today, May 2. It concludes on May 9. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF88 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting price predictions and staking OCEAN; accurate predictions earn rewards from the slashed stake of inaccurate ones.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF88

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF87 Completes and DF88 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 01. May 2024

SC Media - Identity and Access

Better identity threat detection sought by new Semperis ML-based tool

SiliconAngle reports that enterprise identity protection startup Semperis is aiming to deliver more robust discovery of and response to high-risk identity threats with its new machine learning-based Lightning Identity Runtime Protection identity threat detection and response service.



Elliptic

OFAC sanctions Russian drone developer Oko Design Bureau

The US Treasury’s Office of Foreign Assets Control (OFAC) has today sanctioned Oko Design Bureau and added 3 crypto addresses belonging to it to the Specially Designated Nationals (SDN) list as part of its Russia-related designations. 



Shyft Network

Veriscope Regulatory Recap — 16th April to 30th April 2024


Welcome to our latest edition of Veriscope Regulatory Recap. In this edition, we will break down recent developments in cryptocurrency regulations across Europe and the UK.

Europe Strengthens Crypto Oversight with New Regulations

The European Parliament recently passed a new set of rules aimed at making cryptocurrency transactions safer and more transparent.

These rules are part of the Anti-Money Laundering Regulations (AMLR) and mainly affect companies that handle crypto transactions, such as exchanges.

Under these new regulations, companies must now do more thorough checks on their customers and monitor any suspicious activities. They’ll report these to a new regulatory body called the Authority for Anti-Money Laundering and Countering the Financing of Terrorism (AMLA).

According to the authorities, this step will prevent crimes such as money laundering and terrorism financing through crypto transactions.

Central to these regulations is the EU’s Markets in Crypto Assets (MiCA) framework, which will be fully enforced by the end of this year.

UK Plans New Framework for Crypto and Stablecoins

Over in the UK, the government is working on new guidelines for cryptocurrencies and stablecoins, expected to be introduced by July. Their reported aim is to foster innovation while ensuring consumer protection.

Bim Afolami, the economic secretary to the Treasury, highlighted this at the Innovate Finance Global Summit 2024, stressing the importance of the UK staying competitive in financial technology. The upcoming regulations will cover various aspects of crypto operations, including trading and managing digital assets.

“Once it goes live, a whole host of crypto asset activities, including operating in exchange, taking custody of customer assets and other things, will come within the regulator perimeter for the first time.”
- Bim Afolami, economic secretary to the Treasury

This move comes as part of a broader effort to modernize the UK’s financial system. Authorities are also set to get more power to directly access crypto assets in cases of suspected illegal activities.


Although the UK’s crypto community and industry at large are welcoming the UK’s plan to roll out new crypto regulations by June/July this year, they are also worried about its possible impact on the broader ecosystem.

Hence, the authorities must ensure that crypto users aren’t burdened with overly stringent measures that could stifle innovation and growth in the sector. All stakeholders must ensure that user experience and security remain intact despite the new regulatory measures in place.

Interestingly, here’s how the UK and the EU compare in terms of their approach to crypto regulations:

Overall, the new developments in Europe and the UK demonstrate their effort to keep pace with the fast-evolving world of cryptocurrency. While focusing on security and transparency, these regulations also show an understanding of the need to adapt to the ever-changing digital landscape, ensuring that the crypto industry can continue to grow and evolve.

Interesting Reads

Guide to FATF Travel Rule Compliance in Mexico

Guide to FATF Travel Rule Compliance in Indonesia

Guide to FATF Travel Rule Compliance in Canada

The Visual Guide on Global Crypto Regulatory Outlook 2024

Almost 70% of all FATF-Assessed Countries Have Implemented the Crypto Travel Rule

About Veriscope‍

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Veriscope Regulatory Recap — 16th April to 30th April 2024 was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Microsoft Entra (Azure AD) Blog

Announcing General Availability of Microsoft Entra External ID


I'm thrilled to announce that Microsoft Entra External ID, our next-generation, developer-friendly customer identity access management (CIAM) solution will be generally available starting May 15th. Whether you're building applications for partners, business customers or consumers, External ID makes secure and customizable CIAM simple. 

 

Microsoft Entra External ID  

 

Secure and customize external identities’ access to applications 

 

 

 

 

Microsoft Entra External ID enables you to:  

 

Secure all identities with a single solution
Streamline secure collaboration
Create frictionless end user experiences
Accelerate the development of secure applications

 

Secure all identities with a single solution  

 

Managing external identities (customers, partners, and business customers) and their access policies can be complex and costly for admins, especially when managing multiple applications with a growing number of users and evolving security requirements. With External ID, you can consolidate all identity management under the security and reliability of Microsoft Entra. Microsoft Entra provides a unified and consistent experience for managing all identity types, simplifying identity management while reducing costs and complexity.

 

Building External ID on the same stack as Entra ID allows us to innovate quickly and enables admins to extend the Microsoft Entra capabilities they use to external identities, including our industry-leading adaptive access policies, fraud protection, verifiable credentials, and built-in identity governance. Our launch customers have chosen External ID as their CIAM solution as it allows them to manage all identity types from a single platform: 

 

"Komatsu will be using Entra External ID for all external-facing applications. This will help us deliver a great experience to our customers and ensure we're a trusted partner that is easy to do business with."

- Michael McClanahan, Vice President, Transformation and CIO  

 

 

Industry-leading identity security provides end-to-end access to applications.

 

 

Streamline secure collaboration  

 

Boundaries between consumers and business customers are blurring, as are the boundaries between partners and employees. Collaborating with external users like business customers and partners can be challenging; they need access to the right internal resources to do their work, but that access must be removed when it's no longer needed to reduce security risks and safeguard internal data. In this changing world, even trusted collaboration needs least-privilege safeguards, strong governance, and pervasive branding. With ID Governance for External ID, the same lifecycle management and access management capabilities for employees can be leveraged for business guests as well. Guest governance capabilities complement External ID B2B collaboration that’s already widely used by Entra customers worldwide to  make collaboration secure and seamless.  

 

For example, you may want to collaborate with an external marketing agency on a new campaign. With B2B collaboration, you can invite the agency staff to join your tenant as guests and assign them access to the relevant resources, such as a Teams channel for communication, a SharePoint site for project management, and a OneDrive folder for file sharing.  Cross-tenant access settings allow you to have granular controls over which users from specific external organizations get access to your resources, as well as control which external organizations your users access.  ID Governance for External ID will automatically review and revoke their access after a period of inactivity or when the project is completed. This way, you can seamlessly collaborate while ensuring only authorized external users have access to internal resources and data. 
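As a rough sketch of what a partner-specific configuration can look like programmatically, the Microsoft Graph call below creates a cross-tenant access setting that blocks inbound B2B collaboration from one partner tenant. The tenant ID is a placeholder, and the payload shape should be checked against the current Graph documentation before use.

```typescript
// Sketch: create a partner-specific cross-tenant access configuration via
// Microsoft Graph, blocking inbound B2B collaboration from a given tenant.
// Requires Policy.ReadWrite.CrossTenantAccess. The tenant ID is a placeholder.
async function blockInboundCollaboration(accessToken: string): Promise<void> {
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        tenantId: "00000000-0000-0000-0000-000000000000", // placeholder partner tenant
        b2bCollaborationInbound: {
          usersAndGroups: {
            accessType: "blocked",
            targets: [{ target: "AllUsers", targetType: "user" }],
          },
        },
      }),
    },
  );
  if (!response.ok) throw new Error(`Graph call failed: ${response.status}`);
}
```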

 

Control what resources external collaborators can access with cross-tenant access settings.

 

 

Create frictionless end user experiences 

 

Personalized and flexible user experiences are critical to drive customer adoption and retention. External ID lets you reduce end-user friction at sign-in by natively integrating secure authentication experiences into your web and mobile apps. You can leverage a variety of authentication options, such as social identities like Google and Facebook, local or federated accounts, and even verifiable credentials, to make it easy for your end users to sign up and sign in. External ID enables you to immerse end users in your brand and create engaging user-centric experiences with progressive profiling, increasing end-user satisfaction and driving brand love.

 

Design secure, intuitive, and frictionless sign-up and sign-in user journeys that immerse external identities in your brand.

 

 

External ID allows you to further personalize and optimize end-user experiences by collecting and analyzing end-user data, improving their user journey while complying with privacy regulations. Our user insight dashboards help monitor user activities and sign-up/sign-in trends, so that you can assess and improve your end-user experience strategy with data.  

 

Accelerate the development of secure applications 

 

Identity is a foundational building block of any modern application, but many developers may have little experience integrating identity and security into their apps. External ID turns your developers into identity pros by making it easy to integrate identity into web and mobile applications with a few clicks. Developers can get started creating their first application in minutes either directly from the Microsoft Entra portal or within their developer tools such as Visual Studio Code. We recently announced that our Native Authentication now supports Android and iOS, allowing developers to build pixel-perfect sign-up and sign-in journeys into mobile apps using either our API or the Microsoft Authentication Library (MSAL): 

 

“A mobile app sign in journey could have taken us months to design and build, but with Microsoft Entra External ID Native Auth, it took the team just one week to build a functionally comparable and even more secure solution.”

– Gary McLellan, Head of Engineering Frameworks and Core Mobile Apps, Virgin Money 

 

Our Developer Center is a great starting point for developers to find quick start guides, demos, blogs, and more showcasing how to build secure user flows into apps.
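For a web app, wiring sign-in to an External ID tenant with MSAL can be as small as the sketch below. The tenant subdomain and client ID are placeholders, and the ciamlogin.com authority format should be confirmed against the current External ID documentation.

```typescript
// Minimal MSAL.js sketch for signing users in to an app backed by a
// Microsoft Entra External ID tenant. Tenant subdomain and clientId are
// placeholders; install with `npm install @azure/msal-browser`.
import { PublicClientApplication } from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "11111111-1111-1111-1111-111111111111", // placeholder app registration
    authority: "https://contoso.ciamlogin.com/",      // placeholder External ID tenant
  },
});

export async function signIn(): Promise<void> {
  await msalInstance.initialize();
  const result = await msalInstance.loginPopup({ scopes: ["openid", "profile"] });
  console.log(`Signed in as ${result.account?.username}`);
}
```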

 

 

Backed by the reliability and resilience of Microsoft Entra, developers can launch from a globally distributed architecture designed to accommodate the needs of growing user bases, ensuring their external-facing apps can handle millions of users during peak periods without disrupting end-user experiences or compromising security.

 

Try it out!  

 

We are currently offering an extended free trial for all features until July 1, 2024!* Start securing your external-facing applications today with Microsoft Entra External ID. 

 

After July 1st, you can still get started for free and only pay for what you use as your business grows. Microsoft Entra External ID’s core offer is free for the first 50,000 monthly active users (MAU), with additional active users at $0.03 USD per MAU (with a launch discounted price of $0.01625 USD per MAU until May 2025). Learn more about External ID pricing and add-ons in our FAQ.  

 

*Existing subscriptions to Azure AD B2C or B2B collaboration under an Azure AD External Identities P1/P2 SKU remain valid and no migration is necessary – we will communicate upgrade options once they are available. For multi-tenant organizations, identities whose UserType is external member will not be counted as part of the External ID MAU. Learn more. 

 

Learn More  

Want to learn more about External ID? Check out these resources:  

 

Website
Documentation
Developer Center

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

 


SC Media - Identity and Access

UK cracks down on default passwords for smart devices

The UK has become the first country worldwide to prohibit Internet of Things device manufacturers from using default usernames and passwords in their products following the approval of the Product Security and Telecommunications Infrastructure Act, which seeks to bolster smart device cybersecurity, The Hacker News reports.



FTC urged to probe automakers' location data sharing practices


The Federal Trade Commission has been sought by Sens. Ron Wyden, D-Ore., and Ed Markey, D-Mass., to launch an investigation into major automakers' driver location data sharing practices after a congressional probe showed that only five of 14 car manufacturers required warrants or court orders before allowing law enforcement access to such data, according to The Record, a news site by cybersecurity firm Recorded Future.


More than 450K hit by JPMorgan breach

Major U.S. multinational financial services firm JPMorgan had information from more than 450,000 of its customers compromised following a data breach in August 2021, reports Cybernews.



Philadelphia Inquirer breach impacts over 25K

The Philadelphia Inquirer has confirmed that 25,549 individuals had their personal and financial details exfiltrated following a cyberattack last May, according to BleepingComputer.



Elliptic

Our new research: Enhancing blockchain analytics through AI

Elliptic researchers have made advances in the use of AI to detect money laundering in Bitcoin. A new paper describing this work is co-authored with researchers from the MIT-IBM Watson AI Lab.

A deep learning model is used to successfully identify proceeds of crime deposited at a crypto exchange, new money laundering transaction patterns and previously-unknown illicit wallets. These outputs are already being used to enhance Elliptic’s products.

Elliptic has also made the underlying data publicly available. Containing over 200 million transactions, it will enable the wider community to develop new AI techniques for the detection of illicit cryptocurrency activity.

Ontology

Ontology Monthly Report — April


April has been a whirlwind of activities and accomplishments for Ontology. From exciting new partnerships and community engagements to significant advancements in our technology, here’s a recap of this month’s highlights:

Community and Web3 Influence 🌐🤝
10M DID Fund Launch: We launched a 10M fund to significantly boost our decentralized identity (DID) ecosystem, fostering innovation and growth.
Presence at PBW: It was great seeing so many of you at PBW! We appreciate every conversation and insight shared.
Web3 Wonderings: This month, our discussions spanned DeFi and NFTs, with recordings available for those who missed the live sessions.
Token2049 Participation: Our presence at Token2049 was a major success, expanding our visibility and connections within the blockchain community.
Zealy Quest — Ontology Odyssey: Our latest quest is live, adding an exciting layer of engagement within our platform.

Development/Corporate Updates 🔧

Development Milestones 🎯
Ontology EVM Trace Trading Function: Progress has reached 80%, enhancing our trading capabilities within the EVM space.
ONT to ONTD Conversion Contract: We’ve hit the 50% development milestone, simplifying the conversion process for our users.
ONT Leverage Staking Design: Now at 35%, this development is geared towards providing innovative staking options for the Ontology community.

Events and Partnerships 🤝
StackUp Part 2 Success: Our latest campaign with StackUp was a resounding hit, thanks to your participation.
New Partnerships: We celebrated new collaborations with LetsExchange and the support of GUARDA Wallet for ONT and BitGet’s listing of ONG.
Community Giveaways and AMAs: The month was packed with interactive events, including giveaways with Lovely Wallet and an AMA with KuCoin.

ONTO Wallet Developments 🌐🛍️
UQUID Accessibility: UQUID is now accessible in ONTO, streamlining transactions for our users.
ONTO Updates: We’ve rolled out a new version update to enhance user experience.
Upcoming AMA with Kita Foundation: Don’t miss our AMA with Kita Foundation, aimed at diving deeper into future collaborations.

On-Chain Metrics 📊
dApp Growth: The total number of dApps on our MainNet remains strong at 177, indicating a vibrant ecosystem.
Transaction Growth: This month saw an increase of 773 dApp-related transactions and 13,866 MainNet transactions, reflecting active network utilization.

Community Engagement 💬
Vibrant Discussions: Our social media platforms continue to buzz with lively discussions and insights from our engaged and passionate community members.
Recognition through NFTs: We’ve issued NFTs to active community members in recognition of their contributions and involvement.

Follow us on social media 📱

Keep up with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technologies.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

April has been a month of dynamic growth and strong community activity for Ontology. We thank our community for their unwavering support and look forward to another month of innovation, collaboration, and growth. Stay tuned for more updates, and let’s continue to push the boundaries of blockchain technology together!

Española 한국어 Türk Slovenčina русский Tagalog Français हिंदी 日本 Deutsch සිංහල

Ontology Monthly Report — April was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Secure "Keep Me Signed In" Experiences Are Possible | Ping Identity


Customer loyalty is won – and lost – at login. 60% of customers admit to abandoning an account forever when they have trouble signing in. Convenience and ease of use are key, and as customer expectations become ever more exacting, organizations looking to retain customers and grow loyalty need to prioritize removing unnecessary barriers.

 

Given the point of login can be such a cause for session and cart abandonment, many websites offer a convenient "Keep Me Signed In" or "Stay Signed In" checkbox on their sign-in screen, allowing consumers to keep their sessions active longer. This functionality can significantly improve customer experience and boost revenue, and has become a familiar feature across numerous applications and online platforms. However, longer-lasting sessions bring with them an elevated risk of session hijacking, which can lead to companies shouldering higher fraud resolution costs, or possibly hesitating to implement “Keep Me Signed In” at all, to the detriment of customer experience. 

 

The challenge, therefore, lies in finding a way to properly secure long-lived sessions to satisfy customers while protecting your business. Whether your organization has already implemented “Keep Me Signed In” or you’re still considering whether it may be a good fit for your business, read on for an overview of the benefits of this customer-pleasing functionality and the role of advanced threat protection in decreasing the risk of session compromise.
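One widely used pattern for hardening long-lived sessions, sketched below under general web-security assumptions rather than as any Ping-specific implementation, is a rotating remember-me token: the server stores only a hash of the token, and every use swaps the value, so a stolen cookie has a short useful life.

```typescript
// Sketch of a rotating "Keep Me Signed In" token using Node's crypto module.
// The server stores only a hash of the token; the raw value lives in an
// HttpOnly, Secure cookie and is replaced on every use, limiting replay.
import { createHash, randomBytes } from "node:crypto";

const tokenStore = new Map<string, { userId: string; expires: number }>(); // demo store

function issueRememberMeToken(userId: string): string {
  const raw = randomBytes(32).toString("base64url");
  const hash = createHash("sha256").update(raw).digest("hex");
  tokenStore.set(hash, { userId, expires: Date.now() + 30 * 24 * 3600 * 1000 });
  return raw; // set as: Set-Cookie: remember=<raw>; HttpOnly; Secure; SameSite=Lax
}

function redeemAndRotate(raw: string): { userId: string; nextToken: string } | null {
  const hash = createHash("sha256").update(raw).digest("hex");
  const record = tokenStore.get(hash);
  if (!record || record.expires < Date.now()) return null;
  tokenStore.delete(hash); // single use: rotation invalidates any stolen copy
  return { userId: record.userId, nextToken: issueRememberMeToken(record.userId) };
}
```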

Tuesday, 30. April 2024

YeshID

Ditch Reminder Chaos: YeshID Slackbot Manages Onboarding & Offboarding


Editor’s note: Thanks to Software Engineer, Kevin Chiang, for this week’s post.

Any time you sign up for a service, you start receiving emails from them; it’s basically a law. If your inbox looks like mine, it’s bombarded with marketing emails, weekly digests, or notifications like “You’re nearing your spending limit in GCP” (some of these might be more important than others). Email works well as the first place to send notifications since people usually sign up with their email. But with everything going to your inbox nowadays, it’s easy to miss that onboarding reminder for Sarah, who starts next Monday and still needs her Github and Zoom accounts provisioned.

So what is the solution? These days, every workplace has an instant messaging/chat service. That’s usually where people are most responsive. So why not receive your notifications there as well?

Let me introduce you to YeshBot – our Slack notification service.

Combined with the rest of the YeshID platform, it removes the need for you to go through your checklist and manually Slack/nag task owners to finish their work. Instead, with a click of a button, everyone involved with onboarding or offboarding is notified of the work that needs to get done!

Connect YeshBot to your account

Connecting the bot is easy! Simply log in to your YeshID admin account and find the “Add to Slack” button under Settings > YeshID Slack Bot. 

Click the button to complete the OAuth flow and you are ready to go. For details on how to do this, see our docs.

Once the bot has connected successfully, you’ll get a notification.

Currently, the YeshBot is set up to send messages for the YeshList feature that we wrote about in a previous post. So when you onboard a user through YeshID, for example, you will receive a notification:

YeshBot will send two types of notifications:

General notifications (onboarding/offboarding tasklist created or automated tasks failed)
Reminders to specific users when they have tasks to do.

General notifications are sent to all YeshID admins. You might want other team members to stay in the loop for certain playbooks (e.g., IT, HR, and Finance for onboarding and offboarding). You can configure YeshID to send certain notifications to other channels.

How? Read on.

Set up notification channels in YeshID (optional)

To set up notifications to a channel, we need to add the YeshBot to the channel and then configure it in YeshID.

Type in @YeshID in your desired notification channels.
Click “Add to Channel”

Set your notification channel in YeshID

YeshBot is now set up to send to your channel
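For readers curious what such a notification looks like at the API level, here is a generic sketch using Slack’s Web API. This is not YeshID’s actual implementation; the channel name and task text are made up.

```typescript
// Generic sketch of posting an onboarding reminder to a Slack channel with
// Slack's Web API (npm install @slack/web-api). Not YeshID's implementation;
// the channel name and task text are illustrative.
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

async function notifyOnboardingTask(): Promise<void> {
  await slack.chat.postMessage({
    channel: "#it-onboarding", // a channel the bot has been added to
    text: "Reminder: Sarah starts Monday and still needs GitHub and Zoom provisioned.",
  });
}
```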

A few tidbits to keep in mind when configuring the notification channels:

YeshBot may be added to any Slack channel, but will not send any notifications unless configured to do so in the YeshID console.
YeshBot will broadcast the same notifications to every configured channel.
Onboarding/offboarding information can be sensitive, so YeshBot can also be added to private channels to limit who sees the messages. (Note: any user assigned a task will still be notified directly.)

Deleting the integration

Under the same section (Settings > YeshID Slack Bot), clicking on disconnect will remove the Bot from your Slack environment.

Conclusion

Integrating YeshBot with your Slack workspace lets you ditch the email overload and streamline your onboarding and offboarding processes. YeshBot delivers crucial notifications directly to your team, ensuring everyone stays informed, and tasks get completed on time.

Sign up for a free YeshID trial today and see how easy access management can be. 

The post Ditch Reminder Chaos: YeshID Slackbot Manages Onboarding & Offboarding appeared first on YeshID.


Spruce Systems

What the Next Generation of Digital IDs Can Learn from the First

The necessity of transitioning to digitally native identification systems is clear. Still, the next generation of these systems must learn from the flaws and failures of pioneers.

In September of 2023, a Moody’s report aired concerns about Aadhaar, the pioneering government digital ID service used by over one billion Indian citizens. Though it has mitigated fraud and helped countless Indians access services more easily, Moody’s argued Aadhaar posed major privacy and security risks, in large part thanks to its centralization.

India rebutted the concerns, stating in part that “to date no breach has been reported from Aadhaar database.”

Just months later, Aadhaar suffered an immense data theft.

The extremely sensitive personal data of nearly 800 million Indians appeared for sale on black-market forums for a mere $80,000. This included not only their Aadhaar numbers, but their addresses, phone numbers, passport numbers, and more – and despite the government’s protestations, known thefts of Aadhaar data go all the way back to 2012. This doesn’t just expose Indians to payments, benefits, and banking fraud; it is also a vector for national security risk. Some might even argue these harms outweigh the benefits of the system.

Other pioneering digital ID systems worldwide, including China’s RealDID, verification providers relying on data brokers, and digital services that aggregate user data, have also shown centralization risks. These include not just lax cybersecurity but misuse by controlling authorities themselves. The trend of transitioning to digitally native identification systems is clear. Still, the next generation of these systems must learn from the flaws and failures of pioneers – above all, by shifting away from centralized models that turn governments or corporations into ripe targets for criminals.

The Curse of the Innovator

Aadhaar was a hugely forward-thinking project when it launched in 2010, and it’s now the largest digital ID system in the world. There is no doubt that it provides immense utility to its user base of over a billion enrolled Indians. However, early projects in any realm can fall victim to the so-called “curse of the innovator” by failing to adopt new ideas following their creation. Several specific innovations, if they’re adopted by future digital ID projects, could help prevent breaches like the 2023 Aadhaar attack and other kinds of centralization risks.

Decentralization

As Moody’s pointed out, Aadhaar’s biggest flaw may be its centralization, with data and authorizations controlled by the Unique Identity Authority of India, or UIDAI. Centralization of digital identity credentials creates three related problems.

First, Aadhaar has a single point of failure – there have been many cases of benefits denial because the UIDAI couldn’t confirm a user’s identity. Second, centralization makes systems vulnerable to exploitation because all confirmations come from the same authority. For instance, Indian gangs have learned how to exploit the Aadhaar system to generate fake identities – and UIDAI has refused to disclose how many fake identities may be in the wild.

Third and most concerning, the centralization of Aadhaar’s identity data makes it an irresistible honeypot for cybercriminals. Particularly scary is that biometric data, including thousands of fingerprints, has been repeatedly stolen for use in fraud schemes. This data would allow an attacker to create targeted biometric spoofing attacks, which could defeat both remote and physical identity verification systems. The theft of biometric data is particularly damaging because it is essentially irreversible – unlike a password, you can’t “reset” your fingerprints.

In recent years, we have seen significant progress on models that prevent single points of failure or mass, single-target data theft. In these architectures, data can be distributed to the edges instead of accumulating in a central government database. Under this type of distributed scheme, far less data is stored by any one authority, with identity instead affirmed by various authorities, such as schools, utility companies, and government agencies. This approach is far more aligned with “zero-trust” approaches to security, and as a result, there’s no “one-stop shop” where hackers can steal or fabricate credentials.

Zero-Knowledge Attestation

According to analysis by the firm Resecurity, Aadhaar leaks have been caused in part by security breaches at third parties, such as utilities, which had downloaded sensitive data from Aadhaar servers and stored it. Until software supply chains are secured-by-default, these common utilities are likely to suffer ongoing compromises, as seen in the recent high-impact “xz Utils” and MOVEit exploits. Innovations in cryptography, including a technique known as “zero-knowledge proofs,” now make it possible to affirm certain data without revealing it to the requester, thereby reducing (or even eliminating) the need for data transfers and the use of intermediate tools that can mishandle sensitive data.

For instance, a third party could confirm the match of a person’s biometric data without actually accessing a raw fingerprint or iris scan. Worldcoin, the Sam Altman-founded global ID startup, uses cryptographic “hashes” of users’ iris scans, rather than the raw data, to establish unique identity. This should prevent the theft of iris data – though this particular technique must be combined with several additional components and precautions to become a complete system that can combat disincentives and misuse.
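A heavily simplified sketch of the idea follows: a salted hash commitment stands in for a real zero-knowledge proof (which involves far more machinery), and exact matching stands in for the fuzzy matching, such as fuzzy extractors, that real biometric systems require. The point is the architecture: the verifier stores and compares only a commitment, never the raw template.

```typescript
// Toy illustration: attest a biometric template without storing the raw data.
// A salted SHA-256 commitment stands in for a real zero-knowledge proof, and
// exact matching stands in for the fuzzy matching real biometrics require.
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

function commit(template: Buffer, salt: Buffer): Buffer {
  return createHash("sha256").update(salt).update(template).digest();
}

// Enrollment: only (salt, commitment) is stored; the raw template never leaves.
const salt = randomBytes(16);
const enrolledTemplate = Buffer.from("quantized-iris-code-demo"); // stand-in data
const storedCommitment = commit(enrolledTemplate, salt);

// Verification: recompute the commitment from a fresh capture and compare.
const freshCapture = Buffer.from("quantized-iris-code-demo");
const matches = timingSafeEqual(commit(freshCapture, salt), storedCommitment);
console.log(matches ? "match confirmed" : "no match");
```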

Robust Data Siloing

Third parties that store sensitive data also often add more data of their own, violating the principle of “data siloing.” For instance, the 2023 Aadhaar hack included full packages combining ID numbers with biometrics, passport numbers, addresses, and phone numbers.

This unfolded largely because over time, Aadhaar identity numbers became required for more and more services. As one critic wrote back in 2018, “This turns Aadhaar into a dangerous bridge between these previously isolated silos. With each new data silo that gets linked, an important protection against 360-degree profiling gets weakened, leaving Indians vulnerable to data mining and identity theft.”

In addition to cryptographic methods that prevent third parties from reading raw identity data, the use of hardware security components can keep identity data from circulating while keeping attestations trustworthy.

Distributed Authority

One of the most disturbing uses of a digital identity system was reported by the New York Times, which covered a Chinese authority’s use of COVID-19 health codes to control the movement of citizens protesting the freezing of their bank accounts. While details remain unclear, it appears that authorities may have matched the identities of the owners of frozen accounts to their health registration accounts, allowing them to flag potential protestors as COVID risks.

This represents a kind of security breach where a system is used to accomplish unrelated (and potentially illegitimate) goals. As Chinese critics pointed out, this isn’t simply a one-time abuse, but undermines long-term faith and public trust in the identity system. Broadly, this is more evidence in favor of decentralized identity systems – not only separating data across authorities, but separating command and control functions from data entirely. With carefully designed digital identity architecture, “should not be abused” can become “cannot be abused.”

Instead of having one central IT department control all of a society’s digital identity and credentials, it is possible to build decentralized architectures that allow multiple authorities with additional checks and balances, transparency requirements, and improved individual control. This clear separation of powers can ensure that powerful financial, healthcare, and security systems cannot collude against an individual.

Distributed Identity For a Safer, Freer Digital Future

The mixed track record of digital identity services so far demonstrates how important it is to design them from the ground up with the right priorities: privacy, security, and individual control. These overriding goals have become more obvious thanks to the experimentation of early adopters, and technological solutions for achieving them have also emerged over time.

With identity systems, it’s not enough to say that systems won’t be compromised, relying on the expertise or ethics of human authorities. Instead, the systems must be built from the ground up with many layers of protection so they can’t be compromised. While there are still details to fill in, ID systems that are architected to use distributed data storage, advanced cryptography, and information minimization are the way to reach that goal. While these services have their flaws, they also have immense potential to provide public utility and fight fraud.

We must learn from programs that have paved the way, such as Aadhaar, and ensure that we can enjoy the benefits of digital identity without compromising security or individual freedoms such as free speech, privacy, and user choice.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


SC Media - Identity and Access

Change Healthcare incident caused by compromised Citrix credentials

UnitedHealth Group CEO Andrew Witty is set to testify before Congress tomorrow – security pros say there’s more to the story, and it will take several more months of investigation before we know the full kill chain.



AI, Okta, Chrome, Quantum, Kaiser Permanente, FTC, FCC, NCSC, Josh Marpet, and more. - SWN #382


Indicio

Indicio Network Updates


Indicio offers three Networks for anyone who works in the Decentralized Identity/Verifiable Credentials space:

TestNet: to learn and test this technology
DemoNet: to run stable solutions
MainNet: to operate live solutions

Pick the right Networks for you and the work you’re doing!

As an FYI, we will be doing the annual scheduled TestNet reset on Tuesday, May 7, 2024 at 8:00 AM PT/11:00 AM ET. Following standard TestNet operating procedures means conducting a reset that removes old transactions from the Network and allows us to introduce new Genesis files.

If you are a TestNet Node Operator, you need to join a meeting on Tuesday, May 7 so we can support you through the reset and the setup of the new files; TestNet Node Operators already have this invite in their inboxes (let us know if you don’t!). No changes or actions are needed for those using Indicio DemoNet or MainNet.

Reach out to support@indicio.tech with any questions or support needs.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Indicio Network Updates appeared first on Indicio.


1Kosmos BlockID

Vlog: Defeating Deepfakes


Join Michael Cichon, CMO at 1Kosmos, and Vikram Subramaniam, VP of Solutions, as they delve into the alarming rise of deepfakes. Discover how this evolving technology threatens security and authentication measures, and explore 1Kosmos’ innovative solutions to combat this growing threat. Stay informed and safeguard your digital identity.

Hello everybody, this is Michael Cichon. I’m the Chief Marketing Officer at 1Kosmos, here with Vikram Subramaniam, our VP of Solutions, to talk about deepfakes. Vik, it’s great to have you, and welcome to the vlog. Deepfakes are in the news lately. Frightening news. One case recently, they were used to con a company out of $25 million, several payments totaling 25 million in Asia Pacific. Closer to home, we’ve heard stories of deepfakes, dialing the phone, calling people, folks answer the phone, and their friends, family, loved ones appear to be in distress and asking to be rescued. Some are claiming to have kidnapped people and asking for ransom. Quite devastating for anybody to receive these types of things. So this phenomenon of deepfakes, how did we get here? What are they?

Yeah, I think, imagine that, Mission Impossible is becoming real. We are experiencing it in our everyday lives. I’ve only seen it in the movies, where you’ve got large command centers, where they’re taking small voice clips of different things and then, playing and tricking the bad guys into thinking that it’s one of their own. Regular people are facing it. And essentially, with AI and just freely available software these days, that takes advantage of the chips that we have in our computers and even generally available cloud and internet, you can produce video, you can produce voice, you can produce anything that looks or feels real to the person on the other side of a virtual line. So I’m thinking of a phone line, I’m thinking of a laptop, but yeah, this is where sometimes I just need to shake your hand to know if you are real, right?

Yeah. Do you know I’m real right now? Is this not a deepfake? Is this somebody pretending to be me?

Exactly. Exactly right. I think that’s where the world is going, but these are essentially deepfakes, the same things that we have seen in the movies: people wearing masks, creating videos by morphing someone’s face on top of another, or just sounding like someone else. It’s crazy what’s out there with deepfakes.

Yeah, the evolution has been stunning. We’ve talked generally about artificial intelligence, but over the last 20 years, we’ve seen this evolution of artificial intelligence: from the ability to recognize patterns very deeply in data, to a predictive science, and now to something approaching what looks like thinking, when computers can draw inferences without being told to do so. The deepfake lineage, how we got here: essentially, what we have is two chess players, if you will, two adversarial networks, one creating a likeness and presenting it to the other to be graded for similarity to a sample. Then this iterates on the synthetic media at the speed of light, whether that’s a voice, a video, even texting patterns, the way that somebody might text a friend or chat with somebody; that can also be deepfaked. Before I get into training a deepfake attack: five years ago, to use this technology, you had to be one of the world’s leading data scientists.

Yeah, data scientists, researcher, videographer, one of those special effects companies, right? Yeah.

Right. Now, I just did a simple Google search the other day using ChatGPT, and this thing brought me half a dozen tools with which I could probably create a deepfake video within an hour. Deepfakes trained on humans are one issue. We saw that with the $25 million that was taken. And certainly to the human eye, these deepfakes are very hard to detect. So in terms of humans, we’ve got to train people. When we start to move millions of dollars around, we have to have policies, procedures, and oversight to prevent fraud at that scale. But I think, more on the sinister side, over the past six, seven years, authentication using live biometrics has taken hold. This is a step up from what we used to do. Instead of a thumbprint or a face ID on a phone, we’re now logging in with essentially a live selfie, and that’s a step up in security.

But talk a little bit, if you will, about the way that deepfakes can be used to attack biometric authentication. Because before you start that: companies moved into biometric authentication to get rid of passwords, and a live selfie had been pretty hard to fake. A live selfie can also be verified against a driver’s license. So you could have that direct connection of the live selfie to an offline credential and prove identity, but now, deepfakes are attacking that. So can you talk a little bit about how deepfakes are being trained on authentication in cyber attacks?

Yeah, absolutely. Right. So you mentioned it correctly. We want convenience. We want to look at something, and then it needs to open. We have that happening on our phones, we have that happening at the immigration gates, and now, we obviously want it with our enterprise systems. Until now, if you needed to fool a face detection software, you would come and present a picture, or a video of the person you wanted to impersonate. And in the early days, of course, you couldn’t detect it. But technology has improved over multiple decades, to the point that it can now prevent presentation attacks, where you’re actually presenting something to the camera. If what’s captured is a picture of a picture, from a screen, from a video, all of those things, you know that it’s not a live person, and only once liveness is confirmed does the face get matched.

So presentation attacks, yes, are evolving a little bit, but for the most part, I think a lot of companies, and our solution, can help prevent those presentation attacks. But because more and more companies are moving towards facial recognition and people are remote, impersonation has become big. And like you mentioned, there’s technology available out there to create videos or pictures that look like you from just freely available content. You’ve got public content; this vlog is going to be public. People can take small snippets from you, take your voice, take your face, create a short video, and very likely try to fool the system. The thing is, until now, you had to have another device to show it to the camera, and the moment you did that, with the picture and different objects out there in frame, the camera could detect that it was not a live person and that someone was trying an attack.

But with deepfakes, the thing is that they are utilizing what we call injection attacks. Injection is basically fooling the entire piece of software. The software depends on capturing a picture from the camera and trusts that pictures are coming from the camera, that it can depend on the features of the camera used to capture the picture; with injection, that trust is gone. Injection attacks insert directly into the video stream that comes in from the camera and goes out to the server. And we’ve got different kinds of attacks.

We’ve got JavaScript attacks, we’ve got multiple-cam or substitute-cam attacks, anything that can really inject itself into the stream. With the advent of technology, different scripts and different pieces of technology, like virtual camera tools, are available. They allow you to have multiple cameras and then inject a video out there, and the software is no smarter and thinks that it is actually a person. So that’s where deepfakes are, but obviously, there is a way to prevent them. But Michael, I know you’ve been doing some research on deepfakes. Are there other things that you have noticed people doing out there?

Okay, well, let’s talk about this a little bit. You talk about presentation attacks. What are the defenses against presentation attacks?

Sure. So this is why, when we are doing a login at 1Kosmos, we verify multiple things. We verify the liveness of a user and also do a face match of the user, and we are doing a one-to-one match, not a one-to-many match, which means we are verifying exactly whether the user is who they say they are. And before we do any of that, we verify that the user is a live person. There are a couple of ways to do that. One is where we have the user follow specific instructions on the screen, like, “Turn left, turn right, smile.”

You know what I mean? I used to say, and I think it’s still valid, we’re the happiest MFA out there. We make people smile. So essentially, when people smile, we know that it is an actual human who’s able to follow instructions and then, do it. And along with that, we are able to go ahead and verify all of the other parameters of the picture that we captured of the person. At a random interval, we are able to capture the picture and then say that this is a live person or not.

Okay, awesome. So what you’re describing there, it’s the active liveness detection?

Correct, it’s active liveness detection. And along with that, technology has evolved and we have evolved, bringing out passive liveness. With passive liveness, we can determine the liveness of a person just by means of taking a selfie. There are a couple of attributes of the picture that we can utilize to determine that the person is indeed who they say they are. The very fact that they’re even coming in to take a selfie, once they look at the screen, lets us determine that they are who they say they are. We are not waiting for them to upload a picture or something like that. We’re substituting for active liveness without spoiling the user experience.

Okay. Okay. All right. So then, on the injection side, we’re monitoring for what? Virtual cameras? External cameras?

So see, the thing is, on the injection side, you’re injecting into the stream, which means that something needs to sit on the client, at the place where the picture or the video is getting captured, to determine liveness. Because that’s exactly where all of these other pieces of software are sitting. That means this piece of code, this SDK that sits at the browser layer or at the client, is able to determine, “Do you have multiple cameras out there? Are you using something that is plugged into the USB? Or is it something that is virtual?”

It also prevents something from injecting random pieces of information into the stream that goes out to our server. What that means is that, yes, cracking in is going to be highly improbable, and obviously there are different avenues to keep improving the technology. But right now, we are able to determine, at the point where the picture is getting captured, whether an injection attack is happening. And this can be done even on the mobile device, right? So no problem.

Okay, all right. So when we talk about identity verification at the very highest levels of assurance, we’re essentially talking about an enrollment process where somebody presents themselves to a camera, and that likeness is verified against an offline credential, something like a state-issued ID, a driver’s license, a passport. So for this to happen at the point of enrollment, we’re doing several things, correct? We’re doing passive liveness, we’re doing active liveness, we’re checking for injection attacks. Then somebody presents a credential, and we’re also inspecting that credential for signs of manipulation: “Has the photograph been manipulated? Is it a valid credential?” We check it, for example, against the databases that contain the driver’s license information from the states. Only when it passes all those tests, all those inspections, has somebody successfully registered. Once that enrollment process is completed, folks are using this biometric to log in. So again, at the point where an access request is being made, we’re again inspecting for signs of manipulation, meaning liveness, passive and active, and then the injection attacks.

This is where authentication is changing for the enterprise and constantly changing for customers. The entire space is evolving, because earlier, enrollment used to be: you have a username and password, and then you get in; at most, you do an MFA. Now, enrollment can happen really quickly, self-service, just by utilizing your credential. That’s powerful. You’ve already got a credential, you can do this remotely, and the consuming side, whether it’s an enterprise or a customer-facing piece of software, can trust what is happening. From then on, as a customer, like you mentioned, I have the ability to authenticate using my face, and we are able to say confidently, “You know what, you’re a live person, I know who you are, Vikram. Welcome.” Wouldn’t you like that experience?

I certainly would. So at 1Kosmos, we talk and we have talked a long time about certifications, and for some, that’s a snoozer topic, “Why are we talking about certifications?” So what’s the role of certifications in this? It’s one thing for any number of companies to claim that, “We do this stuff,” but what proves that we do this stuff?

Right, yeah, I think you’re right. So certifications: even the clients where we have implemented have said, “Okay, you know what? I really want to test out your software. I want to bring a picture in there. I want to put a mask on. I want to put a video out there, and I’ll test it.” And we say, “All right, go test it. But that’s exactly why we do the certifications.” We do this because there are third parties who go through and try to break the system. We do pen testing. So it’s very important. It’s something our clients can take to their auditors and say, “Hey, I’ve got a certified system which has been through the wringer, and now I can cut my timeline short, implement it, get my users to actually use it, and focus on what’s important for improving the business.”

Got it. So in terms of these certifications, just real briefly: ISO has a set of presentation attack detection specifications. That’s a set of certifications where the system has to be rigorously tested to show it cannot be fooled, for example, by a 3D mask or by a mask with the eyes cut out. Then there are the NIST certifications, like the 800-63 certification, which is what certifies the biometric. Talk a little bit about NIST.

Yeah, NIST has the FRVT certification, which they’re now calling FRTE. And of course, there is the organization iBeta, which is dedicated to testing biometrics. So we go through all of these agencies to make sure that we are not only complying with their tests, but also with the standards and regulations that govern the use of biometrics. It’s one thing to be able to capture it and do it correctly, but can you do it in a privacy-preserving way? I think that differentiates us.

All right. That brings up another thing. Okay. But real briefly, what about the UK version, the UK…

DIATF.

Right? So that’s there. FIDO2 is there. I know I’m forgetting a couple, but you mentioned privacy. So biometric, now that’s a pretty personal piece of information, along with my birthday and social security number, everything, right? So talk about privacy. What’s the big deal?

Yeah, I know we’re on the topic of trying to figure out liveness, whether you are a live person and everything. But yeah, once you capture it, where are you storing it? How are you storing it? This is where the 1Kosmos architecture comes into play, wherein the private key is always in the possession of the user. Now, this could be protected by a PIN, or it could be a private key stored within their device. Or really, what we are able to do is calculate a key right at the edge by using your face. Your face is your key; that is what opens the door. It is really amazing technology: we are able to do that, manage your wallet for you. And think about what a wallet is.

It is just like your physical wallet, where you put different things: you can put in your face, you can put in your fingerprint, you can put in all the other biometrics, along with your FIDO credential and your smart card credential. If you want to put in your legacy password, you can put it in there. It’s simply a wallet that is in your possession, and it can be opened only by using certain keys.

Yeah, I love that. Vik, that means two things to me. It means that, as a consumer, I control the data. If I want to delete the data, I delete it. If I want to use it, I determine at the point of use exactly what information I’m going to share, what I feel comfortable sharing. The other thing it means, for the IT and security people, is that introducing a better way to authenticate doesn’t introduce another vulnerability. And that point of vulnerability is the honeypot of customer information. Why have it when you don’t have to have it? Eliminating that honeypot is important to me, and I think it should be important to IT and security people. Have we left anything out? This feels like a pretty good conversation. One thing: we do have a new white paper coming out. It’s called Biometric Authentication Versus the Threat of Deepfakes. It’ll be on our website shortly, so look out for that. Any parting shots, Vik, any stone unturned on deepfakes?

Yeah, I would just say, look out for them. Just like earlier it was phishing emails, now it’s deepfakes: make sure that what you’re dealing with is real. Do your due diligence, and if you need help, talk to 1Kosmos.

Absolutely. Yeah. I see deepfakes as kind of the latest salvo from the cybercrime community, attacking one of the creators of value on the digital side; that creator of value is getting workers quick access to systems while plugging the holes, so that account takeover doesn’t lead to data breach or ransomware. A lot of news about ransomware recently. So yeah, some really interesting developments. And once again, here at 1Kosmos, we’re trying to stay ahead of that with our innovative solutions. So Vik, thanks for spending time with me today. Very much appreciate your time.

Absolutely. Thanks, Michael. That was great.

The post Vlog: Defeating Deepfakes appeared first on 1Kosmos.


IDnow

On the road to automated identification: The future of the German financial sector?

Armin Bauer, Co-Founder & Chief Technology and Security Officer at IDnow, explains all about the German Federal Ministry of Finance’s new draft bill on the Money Laundering Video Identification Ordinance (GwVideoIdentV), what the amendments would mean for banks and financial service providers, and why IDnow has been driving these developments for years and welcomes the revisions.

Germany is lagging behind in its digital transformation efforts compared to the rest of Europe, despite its technological expertise. For example, as the first country to accept video identification procedures, Germany took on a pioneering role, but it has subsequently lost this position due to its inactivity and inability to approve successor solutions at the national level.

The current basis for video identification in the banking sector continues to be rules contained in the Circular (“Rundschreiben”), which was issued by the German Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, BaFin) in 2017.

There have been few updates to video identification procedures in the German financial sector in recent years, with automation, modernization, and user-friendliness often seemingly in competition with security concerns. There were also inklings from Berlin that coordination on the GwVideoIdent regulation was tough.

However, an exciting breakthrough came in mid-April 2024: the Federal Ministry of Finance published the draft bill for the ordinance on money laundering identification by video identification.

What changes does the draft envisage?

Classic VideoIdent procedures are now to be regulated by ordinance, and no longer just by the BaFin Circular from 2017.

Companies whose services and products fall under the Money Laundering Act (GwG) and which use the existing VideoIdent procedure must offer mandatory identification using the German online ID function (German ID card with activated online function and an NFC-enabled smartphone, eID for short) in addition to other solutions. According to the draft, this must be equivalent to other procedures: “In particular, providers must not deliberately mislead the person to be identified into preferring the video identification procedure via the user interface design.”

The draft will create more use cases for the electronic ID card (eID) in the financial sector, and the spread of the online ID function among the population is expected to increase. According to a PwC study, only seven percent of Germans would decide against using eID, while 70 percent are prepared to use the online ID card for transactions with banks and in legal matters.

Partially automated procedures for identity verification are to be permitted. Partially automated means that parts of the identification process can in future be automated, while other parts may not be; in addition, a direct (not downstream) check by a human must be part of the verification process. IDnow’s “VideoIdent Flex” solution offers a clever combination of AI-supported, expert-led identity verification.

Fully automated solutions for identity verification could also be approved for up to two years, subject to certain requirements (equivalence to a human check), after a maximum six-month registration phase for testing under the supervision of the BSI (Bundesamt für Sicherheit in der Informationstechnik, Federal Office for Information Security). The BSI retains the final decision power regarding approval.

How IDnow became a pioneer for secure and intuitive identification processes in the financial sector.

IDnow presented the idea of partially and fully automated video identification to the Ministry of Finance back in 2020. We therefore welcome the fact that legislation is now to follow the existing technical possibilities and that further identification procedures are to be approved. IDnow’s fully automated process has been setting industry standards for a long time, is certified in accordance with various industry standards (e.g. ETSI TS 119 461) and is already used for money laundering-compliant identification in other European countries.

Since IDnow was founded, the Munich-based IT company has worked closely with regulatory authorities and other market participants to drive progress and always ensure the highest level of security for identity verification. We believe that security does not have to come at the expense of intuitive design and user-friendliness. Our solutions provide customers with secure, reliable and straightforward identity verification processes.

Security, innovation and trust in digital identity management.

IDnow is a leading platform provider for identity verification and a German pioneer in digital applications and innovations – both in sensitive parts of the administration and in highly regulated areas, such as the financial sector.

We offer users a reliable, secure and seamless solution from a single source that meets the relevant regulatory requirements and security levels and protects against misuse, such as fraud, identity theft or forged signatures.

Creating a high level of security and sovereignty in the digital space.

In our view, clear rules and standards for digital identification procedures are the basis for ensuring trust, security and privacy. In fact, secure forms of digital identification can strengthen the sovereignty of citizens in the digital space and in the financial sector.

We have the expertise to meet the diverse range of security requirements, which vary depending on the industry and sensitivity of the use case. Germany must always strive to provide its citizens with the most secure, innovative and usable solutions and play a pioneering role in digitalization – this is what IDnow stands for with its broad portfolio of solutions.

We offer secure and powerful identity verification and fraud prevention solutions, including:

Automated identity verification (AutoIdent, AI-supported and GwG-compliant)
Video verification solutions (VideoIdent, a GwG-compliant verification with high security requirements via video chat, supported by AI technology; and VideoIdent Flex, a customizable version)
Chip-based identity verification, utilizing the German electronic ID card (eID) and NFC-enabled smartphones

By

Armin Bauer
Chief Technology and Security Officer at IDnow
Connect with Armin on LinkedIn


Elliptic

Crypto regulatory affairs: Crackdown on crypto mixers and privacy wallets continues with Samurai Wallet takedown

Recent law enforcement activity in the US is making it clear that intense pressure to disrupt the use of cryptoasset mixers and other privacy-enhancing services will not ease anytime soon.



SC Media - Identity and Access

US carriers fined nearly $200M over illegal customer location data sharing

The U.S. Federal Communications Commission imposed $196 million in total fines on AT&T, T-Mobile, and Verizon for engaging in the unlawful sale of customers' location information to data aggregators, reports The Record, a news site by cybersecurity firm Recorded Future.



Nearly 2M impacted by Financial Business and Consumer Solutions breach

BleepingComputer reports that U.S. nationally licensed debt collection agency Financial Business and Consumer Solutions had information from more than 1.95 million individuals across the country compromised following a data breach in February.



Indicio

Integrating Indicio Proven: You don’t want a platform, you want control

Indicio Proven® offers a simple three-step process for integrating its decentralized identity solution into existing systems, ensuring efficient data sharing and security with support from the Indicio team.

By Aashi Srivastava

In today’s fast-changing digital world, the demand for fast data you can trust has never been more pressing. Indicio Proven® is a reliable decentralized identity solution that can help organizations create more efficiency around the complexities of data sharing and security. Let’s explore how you can easily incorporate Indicio Proven into your existing systems.

Indicio Proven is not a rip-and-replace solution. Instead, it’s designed to seamlessly complement your existing systems, offering a flexible approach to enhancing data security. By integrating Proven into your infrastructure, you retain control of your data while gaining the benefits of decentralized identity, allowing you to implement verifiable credentials. This flexibility lets you use the power of Proven without disrupting your current operations, so you can adapt and evolve your data management practices to meet the demands of today’s digital landscape.

You don’t need a platform. You want control, and you want to make your existing investments better.

Integrating Indicio Proven into your existing systems might seem daunting, but it can be broken down into a simple three-step process. Every step of this integration process can be done on your own or in collaboration with the Indicio team to ensure a hassle-free experience.

Step 1: Installation of Proven Issuer Software Agent

The process begins with installing the Proven Issuer software agent on your server. This software agent is the backbone of your credential issuance process and connects seamlessly to your data source or service via standard REST APIs.

Indicio Proven is designed to meet your comprehensive data security needs while ensuring regulatory compliance, offering a robust solution that covers all essential aspects of decentralized identity management. It prioritizes compliance with industry regulations, providing peace of mind and confidence in navigating the complex landscape of data governance.
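To make “standard REST APIs” concrete, here is a minimal sketch of a health check against a newly installed issuer agent. The host, port, path, and API-key header are illustrative assumptions rather than Indicio’s documented interface; substitute the values from your actual Proven deployment.

```python
import requests  # pip install requests

# Illustrative values only; take the real host, port, and credentials
# from your Proven Issuer deployment.
AGENT_URL = "http://localhost:8031"
HEADERS = {"X-API-Key": "replace-with-your-admin-api-key"}

def agent_is_up() -> bool:
    """Ping the issuer agent's status endpoint to confirm it is reachable."""
    resp = requests.get(f"{AGENT_URL}/status", headers=HEADERS, timeout=5)
    return resp.ok

if __name__ == "__main__":
    print("Issuer agent reachable:", agent_is_up())
```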

Step 2: Connection and Issuance of Verifiable Credentials

Once the Proven Issuer software agent is installed, API integration allows for a seamless connection from your server to the Issuer agent. The next step is to connect the Issuer agent to the data’s intended recipients or holders, including customers, employees, employers, and partner organizations. You can start issuing verifiable credentials to the recipients using an invitation URL (often presented as a QR code) to connect to the holder’s agent. The holder agent is a software app the holder downloads to their mobile device; the Proven Mobile SDK can incorporate the agent functionality into existing mobile applications. Your Issuer agent connects to the Holder agent over a secure encrypted channel, ensuring the smooth transmission of credentials. These credentials contain encrypted data securely stored in a wallet under the holder’s control, typically on their mobile device.
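As a rough sketch of what this step can look like in code, the snippet below creates an invitation and renders it as a QR code for the holder to scan. The endpoint name and response field follow the open-source Aries agent conventions that decentralized identity stacks commonly build on; treat them as assumptions to verify against the Proven documentation.

```python
import requests  # pip install requests "qrcode[pil]"
import qrcode

AGENT_URL = "http://localhost:8031"        # illustrative issuer agent address
HEADERS = {"X-API-Key": "replace-me"}      # illustrative auth header

def invite_holder() -> str:
    """Create a connection invitation and save it as a QR code for the holder."""
    resp = requests.post(
        f"{AGENT_URL}/connections/create-invitation", headers=HEADERS, timeout=10
    )
    resp.raise_for_status()
    invitation_url = resp.json()["invitation_url"]
    qrcode.make(invitation_url).save("invitation.png")  # the holder scans this
    return invitation_url
```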

Step 3: Verification and Transmission of Data

When the Indicio Proven Verifier application connects to the holder via an invitation URL, the holder can consent to share their data securely and selectively. The holder’s agent generates a presentation of the data from their credentials, which the verifier application receives and verifies. The verifier application can also be connected via APIs to other services or systems to transmit the verified data and share the trust established by the systems, protocols, and cryptography.
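Continuing the sketch, a verifier agent might request a presentation from a connected holder like this. The endpoint and request shape mirror the open Aries present-proof protocol, and both they and the attribute names are assumptions for illustration:

```python
import requests  # pip install requests

AGENT_URL = "http://localhost:8032"      # illustrative verifier agent address
HEADERS = {"X-API-Key": "replace-me"}    # illustrative auth header

def request_proof(connection_id: str) -> dict:
    """Ask a connected holder to present selected attributes from a credential."""
    body = {
        "connection_id": connection_id,
        "presentation_request": {
            "indy": {
                "name": "employee-check",   # illustrative request name
                "version": "1.0",
                "requested_attributes": {
                    # Only the attributes the holder consents to share
                    "name_attr": {"name": "full_name"},
                },
                "requested_predicates": {},
            }
        },
    }
    resp = requests.post(
        f"{AGENT_URL}/present-proof-2.0/send-request",
        headers=HEADERS, json=body, timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```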

Throughout the integration process, Indicio’s team of industry veterans will be by your side, ready to provide support. There are several options for assistance, including onsite personalized training, virtual office hours, and collaborative coding sessions. Indicio ensures that integration is not a barrier but an opportunity for growth.

Integrating Proven into your existing systems does not have to be an intimidating task. It’s a step towards greater data efficiency, security, and trust.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Integrating Indicio Proven: You don’t want a platform, you want control appeared first on Indicio.


IDnow

The importance of data-first fraud investigations, with Peter Taylor.

We sit down with the owner of a UK fraud consultancy firm to discuss how fraud has changed over the years, why fraud has become so pervasive in the UK, and the role of organizations in protecting their customers from fraud attacks. Despite very quickly reaching the rank of CID officer, you were with the police for a comparatively short time – just shy of eight years; what were the main reasons for you moving to the private sector? 

My police career was short but busy. I became a CID officer within about three years, which is unusual, and worked in a very dynamic and young CID office. Fraud had always been my passion and the police were increasingly turning their back on it to focus on other priorities.  

It was apparent at that time that people were disposing of their own cars and then making fraudulent insurance claims, and unfortunately, in many cases, it didn’t lead to prosecution, as the insurance companies were unable to detect it. I decided to take a risk and set up my own specialist fraud investigation business, Taylor & Moore Investigations. At the time, it was the first of its kind, and within 18 months we had 22 people working for us and half of the top 10 UK insurance companies as clients.

What specific experience from your years in the police best prepared you for the transition to work as a fraud investigator for insurance companies? 

The business I set up was based on a good CID office. I began by intelligence gathering to understand how insurance fraud worked. I then produced a cost-effective system to detect and investigate insurance fraud. I built a whole training system based around what was needed, including criminal and insurance law. I also made great use of the intelligence we uncovered about those frauds. So, even back then we took a strategic approach of prevention, detection, containment, and analysis that is still used today by most of the industry.  

Obviously, fraud investigation techniques have changed dramatically since you started in the ‘80s, but what were some of the major differences in how you approached fraud investigation in the police and in the private sector? 

There are of course advantages in having a warrant card, a power of arrest, and the huge resources police have available.

However, there are also advantages to not being a police officer. I have never seen this mentioned by anyone else, but people were more willing to speak and tell me things than they had been when I was a police officer.

Peter Taylor, owner of fraud consultancy firm.

As my specialism was interviewing, that made my life much easier. I am not sure folks realise the extent to which suspects and witnesses clam up when talking to figures of authority like the police.

What about fraud investigation techniques in general, how have they changed over the years? 

Interviews, data, and forensics in that order were the core of investigations at the time I left the police. Our business model was based on the same principles. The evolution of fraud and counter fraud has seen development in all three areas but particularly in data and forensics. A simple view is that the order in which they are relied upon has changed now to be data, forensics, and then interviews last.  

Gathering intelligence is much easier nowadays because of the widespread use of the internet and the volume of data now routinely available to investigators. In some cases, the interview is still very much part of gathering evidence but can often just be there to make sure we are not making a mistake by giving the people involved the opportunity to explain things that look suspicious.

Fraud and fraud prevention has always been a hotly contested game of cat and mouse. Was there any period of history where the ‘game’ was more one-sided? 

The numbers tell the story. When I started out, fraud was around 1% of all crime and we were definitely the cats and the criminals, the mice. That slowly changed around the year 2000. I believe the web enabling criminals to easily communicate with each other and reach targets within a much wider circle has been the main catalyst.  

Another factor that fuelled the fire was that policing in the UK and other countries had massive funding cuts while having to focus on other especially important issues like terrorism, domestic violence, and grooming. Fraud is now running at 40% of all crime, and much of it does not get reported, so it is hard to claim anything other than that the criminals now have the upper hand. That will change, but not any time soon.

The future of fraud and digital identity verification in the UK. Our experts explore the current situation around fraud in the UK, how fraudsters are carrying out their nefarious activities at scale, and how technology can both help and hinder the fight against fraud. Watch now.

Technology is often regarded as a double-edged sword when it comes to fraud. What impact has the proliferation of technology had on the fraud landscape in the UK, and wider market? Is there a danger of ceding too much control, and outsourcing too much of the process to artificial intelligence?

When we introduce new technology, there tends to be three stages of adoption: 

Stage one: What can we do with this?
Stage two: Can we make it safe?
Stage three: Can we make sure it is legal? 

Then it becomes a cycle of constant development and fixing things that we never quite catch up on. 

You cannot efficiently deal with any volume of crime without technology. For example, you would not try to run a bank or any other business without it, and countering fraud is no exception. AI in some ways is just the latest standard, albeit like adding a turbocharger to an already fast car.

Routine process should go through automation wherever practical, and we should use all the tools available. The dangers come from ignoring people who can see it is not working properly, and we have to be objective enough to actually listen to accused people who say it was not working properly too. Hence the Post Office Scandal.  

The biggest worry I have right now though is complacency around data accuracy. If the data is inaccurate or the AI poorly trained, then it will get things wrong. For me technology must be supervised rather than supervise important decisions.  

The UK is often referred to as the fraud capital of Europe, why do you think this is? What changes do you think need to be implemented to address this? 

In many ways illicit trade follows the same paths as legitimate trade. Although we are no longer part of the EU, the UK has for many decades been a hub and gateway to the EU for the US and other countries and vice versa. As the English language is also used across so many nations as a primary or secondary language, it is therefore familiar and a sort of lingua franca to many of those who commit crime.  

It’s also important to recognize that as the UK was present in many other nations through trading, colonialism, and military conflict for hundreds of years, there is also a perception abroad that we are all wealthy as individuals and businesses. 

This all makes the UK a target for overseas fraudsters, but we cannot ignore the reality that many fraudsters are based in the UK too. I often hear of us referred to as the butler to the world because of our involvement in laundering foreign wealth. 

Organized fraud sits within three areas: white collar crime, organized cybercrime, and organized crime. The UK criminals are prolific in all three areas. 

The changes needed are wholesale and we do at last seem to be making progress, but it will take time. What we need are: 

Consequences for committing fraud i.e. arrests and convictions.
A long-term strategy ideally within education that encourages young people to stay away from cybercrime, and to know how to protect themselves from financial crime and risk.
A focus on ethics and not just profit across government, public services, education, and the private sector.
A way of confirming who people are and what age they are without having to submit documents like passports and driving licences.

What part does consumer awareness play in the fight against fraud?

Educating consumers as to the risks they face and how to protect themselves from fraud is a vital element of fraud prevention. Making people aware of what may happen and giving them clear advice helps. I am not convinced we do enough across the board to provide a remedy for them when an attempt is made, or a successful fraud occurs. However, this is only one side of the coin, organizations must also use technology to protect the consumer too.  

Many attacks can be stopped through good anti-virus software, as well as good standards of multi-factor authentication. I foresee the day when all this functionality will automatically be included on every device sold. Look at cars, for example. They all now have transponders, deadlocks, and alarms built in. Car thefts are rising again right now, but until recently they were reducing year on year, having hit around a million thefts per annum in the 1990s.

How can organizations educate their customers on the dangers of fraud more effectively? 

I would like to see some sort of reward system for those who thwart serious fraud attempts, such as authorised push payment fraud. It could be monetary, or it could be some other benefit, but acknowledging that the consumer has done the right thing could go a long way to prevent fraud. With banks expected to compensate customers who lose money to fraud, it is the banks that benefit from fraudsters not getting away with it, rather than the consumer. No good deed, however, goes unpunished, and there would inevitably be attempts to cheat any such system.

You deliver training in fraud prevention across a wide range of verticals, including insurance, banking, lending, online retail, workplace fraud, and organised cyber fraud. Are there any similarities in fraud attacks across the industries and, conversely, what are the main differences?

The UK Fraud Act 2006 clearly sets out the main fraud offences, and the ways frauds can be committed occur across all industries. People like to give names to different types of fraud based on the method and the target. I understand that and do it too, and it can help with statistics, data, and analysis. Despite there being hundreds of categories of fraud, the reality is there are only a handful of ways that fraud is committed, e.g. false representation, not disclosing information, obtaining services by deception, abuse of position, and making or supplying articles for use in fraud.

There are other financial crimes that come into play, such as bribery and corruption, money laundering, and theft, but most, if not all, frauds involve the above. The similarities far outweigh any specifics for different industries, which is why around 80% of the training I deliver is generic with the remainder being specific to the delegates taking part. 

From my experience of working with prolific career fraudsters, they commit fraud across the board. One person I interviewed about his life of crime committed card fraud, insurance fraud, bank fraud, benefits fraud, and tax fraud. That surely shows that the same methods are adapted across different sectors, and that it is the same fraudsters targeting different sectors too.

How can organizations better protect themselves and their customers from the threat of fraud?

I have found that, much like charity, protection from fraud and cybercrime starts at home. I have just worked with a client who allowed me to train and educate all their staff on how fraud and cybercrime work, how they are at risk, and how to protect themselves. Get that done first, and they will automatically protect their organization and their customers in the same way. Feedback and engagement from the delegates has been tremendous. They were not just trained; they changed what they do.

Learn more about the UK’s fraud landscape in an upcoming webinar with Peter and other thought leaders on May 21, 2024. More information can be found here.

If you’re interested in more insights from industry insiders and thought leaders from the world of fraud and fraud prevention, check out one of our interviews from our Spotlight Interview series below.

Jinisha Bhatt, financial crime investigator
Paul Stratton, ex-police officer and financial crime trainer
David Birch, global advisor and investor in digital financial services
Or, discover all about the rise of social media fraud, and how one man almost lost a million euros to a pig butchering scam in our blog, ‘The rise of social media fraud: How one man almost lost it all.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


liminal (was OWI)

Customer Identity and Access Management Market Guide 2024

The post Customer Identity and Access Management Market Guide 2024 appeared first on Liminal.co.

Elliptic

HMT consultation on improving the effectiveness of the UK’s Money Laundering Regulations

His Majesty’s Treasury (HMT) published a consultation in March 2024 on improving the effectiveness of the Money Laundering Regulations. The consultation ends on 9 June. Some of the topics for consultation come from HMT’s 2022 review of the UK’s anti-money laundering and counter-terrorist financing regulatory and supervisory regime.


Monday, 29. April 2024

Verida

Verida is enabling the privacy preserving AI tech stack


The Verida Network provides storage infrastructure perfect for AI solutions and the upcoming data connector framework will create a new data economy that benefits end users

Verida is enabling the privacy preserving AI tech stack

Written by Chris Were (Verida CEO & Co-Founder) and originally published on tahpot: Web3 on the edge, this post is Part 3 of a Privacy / AI series.
See:
- Part 1: Top Three Data Privacy Issues Facing AI Today
- Part 2: How web3 and DePIN solves AI’s data privacy problems.

Verida is providing key infrastructure that will underpin the next generation of the privacy preserving AI technology stack. The Verida Network provides private storage, sources of personal data and expandable infrastructure to make this future a reality.

Let’s dive into each of these areas in more detail.

Private storage for personal AI models

The Verida network is designed for storing private, personal data. It is a highly performant, low-cost, regulatory-compliant solution for storing structured database data for any type of application.

Data stored on the network is protected with a user’s private key, ensuring theirs is the only account that can request access to, and decrypt, their data (unless they grant permission to another account).

This makes the Verida Network ideal for storing private AI models for end users. The network’s high performance (leveraging P2P web socket connections), makes it suitable for high speed read / write applications such as training LLMs.
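As an aside, the core guarantee described above, that storage nodes only ever hold ciphertext a user’s key can unlock, can be illustrated in a few lines. This is a generic sketch of the principle, not the Verida SDK:

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

# Stands in for a symmetric key derived from the user's private key.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

record = {"type": "health/activity", "steps": 9120, "source": "wearable"}

# What a storage node would actually hold: opaque ciphertext.
stored_blob = cipher.encrypt(json.dumps(record).encode())

# Only the holder of the key can turn the blob back into data.
assert json.loads(cipher.decrypt(stored_blob)) == record
```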

Source of data for training AI models

We’ve all heard the saying “garbage in, garbage out” when it comes to data. This also applies to training AI models. They are only as good as the data they are fed for training purposes.

The Verida ecosystem provides a broad range of capabilities that make it ideally suited to being a primary source of highly valuable data for training AI models.

Verida has been developing an API data connector framework that enables users to easily connect to the existing APIs of centralized platforms and claim their personal data, which they can then securely store on the Verida network.

Users on the Verida network will be able to pull health activity data from the likes of Strava and Fitbit. They can pull their private messages from chat platforms and their data from Google and Apple accounts. This can all then be leveraged to train AI models for the exclusive use of the user, or the data can be anonymized and contributed to larger training models.
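A data connector in this framework is conceptually simple: authenticate to the platform’s existing API, pull the user’s records, and hand them to user-controlled storage. The sketch below is generic logic with an illustrative endpoint, not Verida’s connector API:

```python
import requests  # pip install requests

def pull_activity(token: str) -> list[dict]:
    """Fetch the user's activity records from a platform's existing API."""
    resp = requests.get(
        "https://api.example-platform.com/v1/activities",  # illustrative endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["activities"]

def claim_to_private_storage(records: list[dict], store) -> None:
    """Hand each claimed record to user-controlled, encrypted storage."""
    for record in records:
        store.save(record)  # 'store' stands in for the user's storage client
```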

Establishing a data-driven token economy offers a promising avenue for fostering fairness among all stakeholders. Eventually, major tech and data corporations may introduce a token system for service payments, thereby incentivizing users to share their data.

For example, individuals could leverage their anonymous health data to train AI models for healthcare research and receive token rewards in return. These rewards could then be used to subscribe to the service or unlock premium features, establishing a self-sustaining cycle where data sharing leads to increased service access as a reward. This model fosters a secure and equitable relationship between data contribution and enhanced service access, ensuring that those who contribute more to the ecosystem reap greater benefits in return.

Users could also use their personal data to train AI models designed just for them. Imagine a digital AI assistant guiding you through your life: suggesting meetup events to attend to improve your career, a cheaper, greener electricity retailer based on your usage, a better phone plan, or simply reminding you of an event you forgot to add to your calendar.

Expandable infrastructure

As touched on in “How web3 and DePIN solves AI’s data privacy problems”, privacy preserving AI will need access to privacy preserving computation to train AI models and respond to user prompts.

Verida is not in the business of providing private decentralized computation; however, the Verida identity framework (based on the W3C DID-Core standard) is expandable to connect to this type of Decentralized Physical Infrastructure (DePIN).

Identities on the Verida network can currently be linked to three types of DePIN: database storage, private inbox messages, and private notifications. This architecture can easily be extended to support new use cases such as “private compute” or a “personal AI prompt API”.
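To illustrate how that linkage works at the data-model level: a DID document (per the W3C DID-Core specification) advertises each capability as a service endpoint, so supporting a new DePIN type amounts to adding another entry. The method name, service types, and URLs below are illustrative assumptions, not values from a live network:

```python
# A minimal DID document sketch; service types and endpoints are hypothetical.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:verida:0x1234abcd",
    "service": [
        {"id": "#storage", "type": "DatabaseStorage",
         "serviceEndpoint": "https://node1.example.com/storage"},
        {"id": "#inbox", "type": "PrivateInbox",
         "serviceEndpoint": "https://node1.example.com/inbox"},
        {"id": "#notify", "type": "PrivateNotification",
         "serviceEndpoint": "https://node2.example.com/notify"},
        # A new use case would simply add another service entry, e.g.:
        # {"id": "#compute", "type": "PrivateCompute", "serviceEndpoint": "..."},
    ],
}
```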

With the appropriate partners who support decentralized private compute, there is a very clear pathway to enable personalized, privacy preserving AI leveraging a 100% DePIN technology stack.

This is incredibly exciting, as it will provide a more secure, privacy preserving solution as an alternative to giving all our data to large centralized technology companies.

About Chris

Chris Were is the CEO of Verida, a decentralized, self-sovereign data network that empowers individuals to control their digital identity and personal data. Chris is an Australian-based technology entrepreneur who has spent over 20 years developing innovative software solutions, most recently with Verida. With his application of the latest technologies, Chris has disrupted the finance, media, and healthcare industries.

Verida is enabling the privacy preserving AI tech stack was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


uquodo

NFC: The Keystone of Trust in a Digital World – Why Physical Document Verification Matters?

The post NFC: The Keystone of Trust in a Digital World – Why Physical Document Verification Matters? appeared first on uqudo.

SC Media - Identity and Access

Meet Silver SAML: Golden SAML in the Cloud - Eric Woodruff - BSW #348


auth0

The Backend for Frontend Pattern

Learn how to keep tokens more secure by using the Backend for Frontend (BFF) architectural pattern.
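For readers who want the gist without clicking through: in a BFF, OAuth tokens never reach the browser. They live server-side, keyed to a session cookie, and the BFF forwards API calls on the user’s behalf. A minimal sketch of that idea (not Auth0’s implementation; the upstream API and token acquisition are stubbed) might look like:

```python
from flask import Flask, session, jsonify  # pip install flask requests
import requests

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

@app.post("/login")
def login():
    # A real BFF would run an OAuth authorization-code flow here;
    # we stub the token to keep the sketch self-contained.
    session["access_token"] = "token-obtained-via-oauth-flow"
    return jsonify(ok=True)

@app.get("/api/orders")
def orders():
    # The token is attached server-side and never reaches the browser,
    # which only ever holds the session cookie.
    token = session.get("access_token")
    if token is None:
        return jsonify(error="not authenticated"), 401
    upstream = requests.get(
        "https://api.example.com/orders",  # illustrative upstream API
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    return upstream.content, upstream.status_code
```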

IBM Blockchain

How fintech innovation is driving digital transformation for communities across the globe  

From democratizing finance to establishing digital currencies, fintechs have been transformational for the financial services industry.

To meet the demands of today’s consumers, enterprises must be continuously innovating. But innovation doesn’t happen in silos. Fintechs, for example, have been transformational for the financial services industry, from democratizing finance to establishing digital currencies that revolutionized the way that we think of money.  

As fintechs race to keep up with the needs of their customers and co-create with larger financial institutions, they can leverage AI and hybrid cloud solutions to drive true digital transformation and meet these evolving demands. 

How Dollarito is connecting larger financial institutions with financially underserved communities 

According to research from the US Government Accountability Office, roughly 45 million people lack credit scores because they don’t have certain data points that credit scores are based on, which limits their eligibility. Traditional credit report models use parameters such as the status of an active loan or credit card payment records to give an individual a credit score. If someone does not fit within these parameters, it can be difficult to procure a loan, take out a mortgage or even buy a car. However, with a more accurate model, such as one powered by AI, financial institutions can better identify applicants who are fit for credit. This can result in a higher approval rate for these populations that otherwise would typically be overlooked. 

Dollarito, a digital lending platform, is focused on helping the Hispanic population with no credit history or low FICO scores access fair credit. The platform offers a unique solution that measures repayment capabilities by using new methodology based on AI, behavioral economics, cloud technology and real-time data. Leveraging AI, Dollarito’s models tap into a wide store of data from banking transactions, behavioral data and economic variables related to the credit applicant’s income source. 

With IBM Cloud for Financial Services®, Dollarito, an IBM Business Partner, is able to scale their models continuously and quickly deploy the services that their clients need, while ensuring their services meet the standards and regulations of the industry.  

“Dollarito uses IBM Cloud for Financial Services technologies to optimize infrastructure and demonstrate compliance, allowing us to focus on our mission of providing financial services to underserved communities. We are dedicated to building a bridge of trust between these populations and traditional financial institutions and capital markets. With AI and hybrid cloud technologies from IBM, we are developing solutions to serve these groups in a cost-effective way while addressing risk.” – Carmen Roman, CEO and Founder of Dollarito 

Dollarito is also embracing generative AI, integrating IBM watsonx™ assistant to help its users interact easily and get financial insights to improve the likelihood of access to credit. Like IBM®, Dollarito recognizes the great opportunity that AI brings for the financial services industry, allowing enterprises to tap into a wealth of new market opportunities.  

How Ionburst is helping to protect critical data in a hybrid world 

Data security is central to nearly everything that we do, especially within financial services as banks and other institutions are trusted to protect the most sensitive consumer data. As data now lives everywhere, across multiple clouds, on-premises and at the edge, it is more important than ever before that banks manage their security centrally. And this is where Ionburst comes in. 

With their platform running on IBM Cloud, Ionburst provides data protection across hybrid cloud environments, prioritizing compliance, security and recovery of data. Ionburst’s platform provides a seamless and unified interface allowing for central management of data and is designed to help clients address their regulatory requirements, including data sovereignty, which can ultimately help them reduce compliance costs.  

Ionburst is actively bridging the security gap between data on-premises and the cloud by providing strong security guardrails and integrated data management. With Ionburst’s solution available on IBM Cloud for Financial Services, we are working together to reduce data security risks throughout the financial services industry. 

“It’s critical financial institutions consider how they can best mitigate risk. With Ionburst’s platform, we’re working to give organizations control and visibility over their data everywhere. IBM Cloud’s focus on compliance and security is helping us make this possible and enabling us to give customers confidence that their data is protected – which is critically important in the financial services sector,” – David Lanc and Anne Lanc, Co-Founders and Inventors of Ionburst 

Leveraging the value of ecosystems  

Tapping into innovations from fintechs has immensely impacted the financial services industry. As shown by Ionburst and Dollarito, having an innovative ecosystem that supports your mission as a larger financial institution is critical for success and accelerating the adoption of AI and hybrid cloud technology can help drive innovation throughout the industry. 

With IBM Cloud for Financial Services, IBM is positioned to help fintechs ensure that their products and services are compliant and adhere to the same stringent regulations that banks must meet. With security and controls built into the cloud platform and designed by the industry, we aim to help fintechs and larger financial institutions mitigate risk, address evolving regulations and accelerate cloud and AI adoption. 

Learn more about IBM Cloud
Explore the IBM Cloud Fintech Program

The post How fintech innovation is driving digital transformation for communities across the globe   appeared first on IBM Blog.


SC Media - Identity and Access

Okta spots ‘unprecedented’ spike in credential stuffing attacks

The IAM service provider said easy access to stolen credentials and residential proxy services were behind the surge.


Cloud sector rejects proposed know-your-customer EO

Ahead of its imminent approval, the Biden administration's proposed executive order mandating that U.S. cloud infrastructure-as-a-service providers strengthen verification of their users' identities has drawn industry opposition over the financial and logistical burdens such a rule would impose, according to The Record, a news site by cybersecurity firm Recorded Future.


Transmute TechTalk

Introducing ICC KTDDE: Embracing Modern Trade Standards with Transmute’s VDP

April 24th, 2024

Today marks a pivotal moment as Transmute proudly co-announces the release of the Key Trade Documents and Data Elements (KTDDE) initiative led by the International Chamber of Commerce (ICC) Digital Standards Initiative (DSI). KTDDE represents a concerted effort to standardize essential trade documents and data elements, fostering a seamless digital trade ecosystem.

Transmute’s involvement in the KTDDE project underscores our commitment to driving innovation and facilitating trade digitization globally. As a proud collaborator in this transformative project, Transmute introduces its Verifiable Data Platform (VDP) as the leading solution for businesses seeking to embrace the newly published KTDDE standards.

Transmute is well known for our contributions to technology and international trade standards. We pride ourselves on always implementing the standards we help drive. Our Verifiable Data Platform offers a robust implementation of the KTDDE data models. Built on advanced verifiability technology, VDP empowers businesses to leverage the KTDDE data models effectively, enabling secure exchange and verification of trade documents. This implementation ensures authenticity, integrity, and trust throughout the trade process, aligning seamlessly with the objectives of the KTDDE initiative.

Mapping of sample KTDDE elements (left) implemented verifiably on Transmute’s Verifiable Data Platform (right).

We urge businesses embarking on their digitization journey to take proactive steps toward embracing the KTDDE standards by signing up for Transmute’s Verifiable Data Platform today. Right now, we offer a free six-month Pro plan when you join our Trade Council. Sign up today and join us in shaping the future of trade digitization.

For a more in-depth exploration of the implementation of KTDDE and application of modern technologies such as Verifiable Credentials, Decentralized Identifiers, and Linked Data, please see:

Digital Trade Document Relationships

Introducing ICC KTDDE: Embracing Modern Trade Standards with Transmute’s VDP was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

Misconfiguration exposes Empire Distribution data

U.S. independent record label Empire Distribution, which has worked with Kendrick Lamar, Snoop Dogg, and 50 Cent, had its sensitive data exposed as a result of an environment file misconfiguration, Cybernews reports.


California state welfare platform hack impacts over 19K accounts

Officials at the California Statewide Automated Welfare System disclosed that over 19,000 accounts in the state's BenefitsCal welfare program portal were compromised for almost a year, reports The Record, a news site by cybersecurity firm Recorded Future.


1Kosmos BlockID

Vlog: How the Reserve Bank of India Guidelines Align with 1Kosmos

Watch our latest vlog for an insightful conversation between Michael Cichon, Chief Marketing Officer at 1Kosmos, and Siddharth Gandhi, Chief Operating Officer of the Asia Pacific region. In this video, they explore the latest Reserve Bank of India guidelines on digital payment transactions and its implications for the banking and financial sector. Join them as they delve into the challenges of current authentication methods, the need for stronger security measures, and the innovative solutions offered by 1Kosmos. Get ready to gain valuable insights into the evolving landscape of digital authentication and its impact on user experience, privacy, and transaction security.

Michael Cichon:
Well, hello everybody. This is Michael Cichon, the Chief Marketing Officer here at 1Kosmos. I’m here today with Sid Gandhi, the Chief Operating Officer of our Asia Pacific region. Sid, welcome to the vlog. I invited you here today to talk about the new Reserve Bank of India guidelines. There are some pretty interesting implications for online payments. You’re familiar with those. Can you talk about them a bit?

Siddharth Gandhi:
Sure, absolutely, Michael. Pleasure to be here. Thank you so much for inviting me to the vlog. So yes, interesting times for us at 1Kosmos. The Reserve Bank of India recently announced a guideline that lays out a principle-based framework for authentication of digital payment transactions. And very interestingly, I believe the RBI has done a fantastic job over the years, not just in managing monetary policy well, but also in looking at technology and security aspects and providing guidelines for the BFSI sector.

The recent one is interesting for a few reasons. Number one, RBI has always spoken about having a second factor for authentication in internet banking. While SMS-based OTP has been very commonly used across the globe and in India, RBI has also noted there are three forms of authentication, right? Something that the user knows, which is a user ID, password or PIN. Something that the user has, which is their credit card, their ATM card or their phone. Or something that the user is, which we are all aware is their biometric characteristics, like their fingerprint, their facial features, or perhaps even their iris.

Now very interestingly, I was going through some of the content that RBI has published over time. It does call out that true multifactor authentication requires at least two or more factors. However, what the industry has largely done is go with something that the user knows, which is the user ID and password, plus a second factor of something that the user has, which is the SMS-based OTP, and that has been around for as long as I can remember, a few decades. This is the first time RBI has said that you need to start looking beyond SMS-based OTP, thanks to all the technological advances and innovations that have emerged in recent years. And we are here today, ready to offer that to the industry as alternate factors.

Michael Cichon:
Okay. All right, so let’s take a step back here. As consumers, we’re all familiar with step-up authentication now. We might not know that it’s called that, but whenever I try to buy something online, I get the code: will you please key in this code to prove that this is legitimate? So it’s that built-in lack of trust in the password that got me to the shopping cart or the payment transaction, and then I’m prompted for the code. So what’s wrong with the code? Why change?

Siddharth Gandhi:
Why change? For more than one reason, Michael. Today, SMS-based codes are considered a weak form of second factor. Why? Because of man-in-the-middle attacks or SMS interception. You have a dependency on having a SIM card. You have a dependency on having network coverage to receive those codes. And you also have challenges around expiration or delivery delays. Especially in recent years we’ve seen SIM swapping take place, which is again a big concern, and an increasing number of breaches are happening even with a second factor like SMS-based OTP in place. And that is why I believe RBI is telling the banks that they must start looking beyond it.

Michael Cichon:
Okay. All right, so these are what I would call proxies for identity, right? You are you because you knew the password, you are you because you’re able to key in this code that I just sent you on your phone. You are you because your SIM card matches what we thought it would be. So all of these are guesses or approximations that get us closer to identity, but they amount to silly human tricks: can you go through these calisthenics to prove you are who you are? So not only are they not specific or exact, they’re still guesses. They’re clumsy, right? You want to buy something, you’re going to pay for something, and now you’ve got to go to your inbox or your message folder, look at the code, remember the code, key the code in, and hopefully key it in right. If not, you’ve got to go back, look at it again, key it in again. So a lot of implications here, but there’s more to this principle-based framework than just convenience. There’s a privacy angle, right?

Siddharth Gandhi:
Correct, yes. There’s a privacy angle and there’s a security angle. You spoke about how do you prove who you are and absolutely with the credentials and SMS-based OTP, it’s really hard to prove. I could share my credentials with you. I could give you my OTP and you could still transact on my behalf as a friend, but what if you are not my friend? You get hold of my credentials in the dark web. You could very well go ahead and take the money away from me and that’s being reported every single day.

What we are able to do at 1Kosmos is prove who the user is based on our platform capabilities of identity verification and proofing, which is based on global standards. And along with that, we have multiple other capabilities for bringing in advanced biometrics. We have our IP, something called Live ID, where we actually ask the user to blink or smile to prove they are a real person. The platform also has a private and permissioned blockchain, which makes it even more secure. The idea is not just providing a second factor to the user, but how you deliver it to the user and how the authentication happens throughout that journey until the conclusion of the transaction also plays a very, very critical role.

Michael Cichon:
Right. Well, this is interesting to me because it seems like what the Reserve Bank of India has done is the latest iteration or generation of guidelines that have surfaced around essentially the 1Kosmos business model or architecture, if you will. Dating back to 2016, 2017, we saw the NIST 800-63 guidelines come out in the United States. We saw FIDO2: if you want to do biometric authentication, there’s a FIDO2 spec for that. We have the specification in the United Kingdom. And now here is the Reserve Bank of India coming up with the latest guidelines on what sounds like how you need to deploy this technology so we get to a higher level of integrity, trust, and security in the transaction, but also privacy and security of the user’s information as well.

Siddharth Gandhi:
Exactly, Michael. You’re absolutely right. And not to forget, at the end of the day, user convenience as well. So we are not only providing security and privacy by default, but the user experience is very seamless and very elegant. When you provide a passwordless experience, users enjoy transacting on the platform instead of trying to recall their eight-, 12-, or 16-character passwords and waiting on a second factor that may or may not arrive. So you’re absolutely right. We’ve been talking about this for the last few years, some BFSI customers do use our platform today, and we are very happy and very excited about the recent announcement from RBI.

Michael Cichon:
Well, sure, because the guideline points to specific capabilities that we’ve been supporting for a while now. So when you step away from what you know to who you are, there’s a range of possibilities there, right?

Siddharth Gandhi:
Correct. You are absolutely right. So we are able to bring in the various factors we’ve already discussed, and these are based on global standards. I think it’s very important that the industry understands this. We spoke about FIDO certification; from a privacy standpoint, we are able to meet global privacy standards like GDPR, and India now has the DPDPA. And the interesting thing about our architecture is that we don’t store any user information ourselves, which makes us very unique compared to some of the other players in the market, and which is what customers like banks want when they are safeguarding their customer data.

Michael Cichon:
So specifically, in terms of the biometrics used for what I would call step-up authentication, they range from device biometrics, for example the thumbprint or Face ID we’re all familiar with on our devices, all the way up to a live biometric, correct?

Siddharth Gandhi:
That is correct, yes. So I would probably turn it a little bit in terms of how to think of it. So yes, we definitely want to bring in something that the user is by bringing in the biometrics. At the base level, you probably may want to use the device biometrics, which include the fingerprint or the face ID.

RBI in general also talks about risk-based authentication where, for example, for a higher transaction amount, you want to ensure that you are able to step up. In our case, the way we look at step-up is that we bring in Live ID, where we ask the person to blink, smile, or turn left and right on a random basis to prove that the person is real. Yes, this does bring in a little bit of dependency on the phone, but as part of our platform capability we can also offer an additional, alternate form of authentication, which is app-less. So if a customer is sitting at a laptop, they could use its camera to provide those facial features, or what we call passive liveness, to authenticate as well. Worst case scenario, you have time-based OTP, which is still more secure, I would say, than an SMS or email-based OTP.

Michael Cichon:
Well, so this is interesting. So the guidelines indicate that there’s a range of biometrics that might be appropriate based on the risk of the transaction: for a low-value transaction, maybe a thumbprint or Face ID on the phone; if it’s thousands of dollars, it might require that you blink and smile and turn your head.

Siddharth Gandhi:
Correct. Yes. That’s what we are offering, Michael. I don’t think the guidelines specifically talk about what the bank should be doing for risk-based or step-up authentication. The guidelines are broad norms that banks are asked to adhere to. The interpretation is left to the bank in terms of what they want to adopt and how.

Michael Cichon:
Right. Well, that’s important too. I mean, there are new developments, there are new shiny objects, but no organization can magically wave a wand and all of a sudden have everything be the latest and greatest. So being able to support this mix of authentication and step-up authentication use cases is super important, and we’ve done that for a while with our platform.

Well, Sid, I know that we’re working extensively with some large financial institutions in Asia Pacific. Is it fair to talk a little bit about what we’ve done that aligns with these guidelines for those organizations?

Siddharth Gandhi:
Absolutely, I think so. Unfortunately, because of NDAs, I am not going to be able to name the customers, but we can definitely talk about what we’ve done for them. So we are working with a few banking and financial institutions in India and in Southeast Asian markets.

One private sector bank has been our customer for the last three years. It’s a known fact that RBI audits the banks and financial institutions on a yearly basis, and we have undergone those audits without any major challenges. The bank uses us for their internal workforce to protect their internal enterprise applications and data, but their customers are using it too, and we are having some very interesting conversations with some of the other banks out there.

For a digital-only bank in the Southeast Asian market, we are providing our liveness capability as part of our platform. Users authenticate transactions with the liveness factor, which helps prevent fraud that may potentially happen and ensures that customer data and payment transactions are handled in a secure manner.

Michael Cichon:
Well that’s awesome. Well, Sid, I appreciate you spending time with us today. It’s, from my perspective, really gratifying to see these principle-based guidelines emerge in step with where 1Kosmos has been for the last couple of years. So it’s good to get some validation that we’ve been on the right track and remain on the right track.

Siddharth Gandhi:
Yes, absolutely, Michael. We’ve been talking about it for the last few years, so it’s very heartening to see the RBI coming out and asking the industry to start looking at the alternate factors of authentication. And we at 1Kosmos are certainly ready to assist anyone who needs help with the guidelines.

Michael Cichon:
Indeed, we are. All right, thank you very much, Sid.

Siddharth Gandhi:
Thank you so much, Michael.

The post Vlog: How the Reserve Bank of India Guidelines Align with 1Kosmos appeared first on 1Kosmos.


Shyft Network

Guide to FATF Travel Rule Compliance in Mexico

Mexico is in the process of implementing the FATF Travel Rule. The country has a stringent KYC process for crypto businesses, which consists of identity verification, customer due diligence, and transaction monitoring. Crypto service providers must also register and report high-value client transactions from the past six months to the country’s ‘AML Portal.’

Mexico, a North American country and one of the largest recipients of remittances globally, is experiencing growing crypto adoption. As per Chainalysis’s 2023 Global Crypto Adoption Index, it ranks among the top 20 nations for grassroots crypto adoption.

On the regulatory front, the country lacks comprehensive crypto rules. However, it has strong AML/CFT regulations in place and is in the process of implementing the FATF Travel Rule.

Is Crypto Regulated in Mexico?

The purchase, sale, transfer, and custody of crypto assets are regulated under Mexico’s Fintech Law.

The country’s main financial regulators, the National Banking and Securities Commission (CNBV), the Secretariat of the Treasury and Public Credit (SHCP), and the Bank of Mexico (Banxico), have clarified that financial institutions may carry out operations with crypto but only with prior authorization from the central bank.

Crypto Travel Rule in Mexico

According to the Financial Action Task Force (FATF), the country is making progress in enacting requirements for virtual asset service providers (VASPs) to comply with the Crypto Travel Rule.

So far, the country has conducted a risk assessment covering virtual assets and VASPs, enacted legislation requiring VASPs to be registered and to apply AML/CFT measures, and included VASPs in its supervisory inspection plan.

Key Features

As a member of the FATF, Mexico is required to comply with the organization’s recommendations, which means the country’s crypto sector must adopt the Crypto Travel Rule.

Mexico has also designed its anti-money laundering (AML) compliance and Know Your Customer (KYC) requirements around FATF’s international standards in its fight against financial crimes.

Its comprehensive KYC program consists of three main components:

Identity verification (IDV) — Financial institutions are required to collect and verify their customers’ personal information. Authorities mandate that businesses confirm their customers’ identities to facilitate the identification of suspicious accounts and enable regulators to investigate and apprehend bad actors.

Customer due diligence (CDD) — Businesses perform multiple checks to verify their customers’ identities and assess their risk profiles to gauge their potential involvement in money laundering. High-risk customers may be subject to additional scrutiny and enhanced security measures, or they may be denied services altogether.

Transaction monitoring — Businesses continuously monitor customer transactions for any signs of suspicious activity. This involves assessing customer information, transfers, deposits, and withdrawals. Businesses must also set alerts and establish policies that dictate the necessary actions to be taken.
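
To make the transaction-monitoring component above a little more concrete, here is a minimal, hypothetical sketch in JavaScript of a threshold-based alert rule. The threshold value, field names, and monitor function are illustrative assumptions, not part of Mexico’s regulations; real monitoring programs layer many rules and typologies with analyst review.

// Hypothetical threshold-based transaction monitoring sketch.
// The threshold and data shapes are illustrative assumptions only.
const HIGH_VALUE_THRESHOLD_MXN = 56000;

function monitor(transactions) {
  // Flag any transaction at or above the alert threshold for review.
  return transactions
    .filter((tx) => tx.amountMxn >= HIGH_VALUE_THRESHOLD_MXN)
    .map((tx) => ({ alert: 'HIGH_VALUE', customerId: tx.customerId, amountMxn: tx.amountMxn }));
}

console.log(monitor([
  { customerId: 'c1', amountMxn: 80000 }, // flagged
  { customerId: 'c2', amountMxn: 1200 },  // ignored
]));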

Compliance Requirements

Transactions related to crypto assets are considered vulnerable activities under Mexico’s Anti-Money Laundering Legal Framework. This means businesses dealing with crypto must comply with the AML/CFT obligations.

According to this, businesses must identify customers for which the following information and documentation, at minimum, are to be requested:

- Name

- Date of birth

- Nationality

- Residential address

- Email address

- Phone number

- Taxpayer registration code

In case the customer is an organization, the business must obtain the following information and verify it:

- Name

- Nationality

- Email & physical address

- Phone number

- Date of formation

- Taxpayer registration code

- Serial number of advanced electronic signature

Beneficial owners are also required to be identified and verified in the same manner.

Impact on Cryptocurrency Exchanges and Wallets

In Mexico, crypto transactions by VASPs are considered a regulated activity. As a result, VASPs must register and submit details of clients involved in high-value transactions over the past six months to the ‘AML Portal.’

VASPs must also draft a compliance manual, designate an officer for AML compliance, and employ a risk-based approach to assess AML risks.

Furthermore, these businesses are required to gather and maintain detailed transaction records for each client for the last six months. They must keep these records for at least five years and provide them to authorities when requested.

Moreover, during the client onboarding process, VASPs need to screen clients against lists provided by the Financial Intelligence Unit (FIU) and may also use other national and international lists to identify individuals linked to money laundering or terrorist financing.
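
As a rough illustration of that screening step, the sketch below checks an onboarding name against a watchlist. The list contents, exact-match logic, and function name are assumptions for illustration only; production screening relies on official FIU and sanctions list feeds with fuzzy name matching.

// Hypothetical onboarding screening sketch; real systems use official
// FIU/sanctions list feeds and fuzzy name matching, not exact match.
const watchlist = new Set(['ACME LAUNDERING SA', 'JOHN DOE']);

function screenClient(name) {
  const hit = watchlist.has(name.trim().toUpperCase());
  return { name, status: hit ? 'BLOCKED_PENDING_REVIEW' : 'CLEAR' };
}

console.log(screenClient('John Doe'));
// { name: 'John Doe', status: 'BLOCKED_PENDING_REVIEW' }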

While specific regulations for self-custodial wallets and decentralized finance (DeFi) platforms are not established in Mexico, they must still comply with general laws on consumer protection and personal data security.

Global Context

In August 2023, a study by the Latin American Financial Action Task Force claimed that Mexico is “leading the way” in Latin America’s adoption of the Crypto Travel Rule. Locally known as GAFILAT, the task force is an intergovernmental organization made up of 18 Latin American countries and is affiliated with the FATF.

According to the report, Mexico is far ahead of many other nations in adopting FATF’s Travel Rule guidelines, and it has “developed secondary regulations” to comply with the agency’s recommendations.

Meanwhile, FATF’s most recent report highlighting the jurisdictional implementation of VASP regulation and supervision for AML/CFT purposes stated that Mexico is in the process of implementing the FATF Travel Rule.

Concluding Thoughts

Although Mexico has yet to create a comprehensive crypto regulatory framework, it places a special focus on combating money laundering and terrorist financing. Its AML laws apply to individuals, businesses, and foreign firms operating in the country, with the aim of preventing crime and protecting customers.

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up to date on all things crypto regulations, sign up for our newsletter and follow us on X (formerly Twitter), LinkedIn, Telegram, and Medium.

Guide to FATF Travel Rule Compliance in Mexico was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Sanctions compliance and crypto: Managing your reporting and blocking obligations

Years of regulatory policy and enforcement by the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) have made certain elements of the United States’ sanctions regime clear: most financial transactions springing from a comprehensively sanctioned jurisdiction must be blocked. Similarly, many goods imported from such a jurisdiction likewise must be blocked. Evasion of these controls, including any transaction structures designed to benefit the sanctioned party, is strictly prohibited.


TBD

9 Things You Didn't Know About Decentralized Identifiers

From nostalgic usernames to the next era of digital identity: gain deeper insights into how decentralized identifiers really work.

Remember your first username? If you were anything like me in the early 2000s—too young to surf the web but excited about the possibilities of connecting with the rest of the world—your username probably makes you cringe today.

Self-expression is the main driver for my internet usage. Over the years, I've created various usernames representing different parts of me at distinct periods of my life. From Millsberry to Myspace, each new website meant a new profile, leading to a fragmented experience. The most annoying part is that there's no connection between my profiles or "identities", so I have to remember all my passwords or rely on a password manager. Unfortunately, password managers are susceptible to security breaches.

This fragmentation of identity on the web poses a significant challenge: How do we manage these scattered identities efficiently and securely?

Many organizations are working hard to answer this question. Some are going passwordless via passkeys. Others, like the Open Researcher and Contributor ID (ORCID), implemented digital identifiers to associate publications, research, and open source contributions with a particular researcher.

Companies focused on advancing identity tech and self-sovereign identity are embracing Decentralized Identifiers (DIDs) as a solution. DIDs are globally unique, alphanumeric, immutable strings representing who you are across the decentralized web.

Speaking of DIDs -- did you know that Decentralized Identifiers are one of the pillars of Web5?

In this post, we'll explore nine more things you may not have known about Decentralized Identifiers.

1. DIDs are a W3C Standard

In 1994, Tim Berners-Lee, the creator of the World Wide Web, founded the World Wide Web Consortium (W3C). The W3C is made up of groups of people focused on setting the best practices and standards for building the web. For example, the W3C develops and maintains standards for HTML, CSS, Web Accessibility, and Web Security. In July 2022, the W3C officially published standards for Decentralized Identifiers. This way, technologists have a blueprint for building and managing digital identities as we shift toward controlling our own identities on the internet.

2. DIDs can represent more than just humans

While DIDs represent people across the web, they can also represent organizations such as manufacturers, businesses, or government agencies. Technologists are exploring using DIDs to represent IoT devices like smart hubs, smart thermostats, or autonomous cars, allowing you to maintain control over your data usage! Here's a taboo but realistic use case that might make you blush— DIDs can even represent sex tech devices! 😳

3. It's nearly impossible for someone to steal your DID

A common question people often ask me is, "Can someone steal my DID?" DIDs are alphanumeric strings, so they may give people the impression that DIDs are top-secret passwords or API keys. But that's not the case; you can share your DID with anyone without compromising your safety. It's nearly impossible for someone to steal your DID and pretend to be you because DIDs are all cryptographically signed.

What does 'cryptographically signed' mean?

Cryptographically signed means that each DID has a unique digital fingerprint generated by a fancy, complicated algorithm. Two keys—a public key and a private key—make up the digital fingerprint. Your public key shows other people that the DID legitimately belongs to you, but the private key needs to remain private. The private key is like your master key. Someone can only steal your DID, tamper with it, or impersonate you if they have access to your private key. Fortunately, it's not easy to access your private key because it is protected by encryption and multiple layers of security. Some DID methods even support key rotation, meaning you can update your cryptographic keys to reduce the risk of compromise.
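
To make "cryptographically signed" a bit more concrete, here is a minimal sketch using Node.js's built-in crypto module with an Ed25519 key pair (the same key type as the DID document example below). It illustrates the general sign-and-verify idea, not any particular DID method's exact procedure.

// Sketch: the controller signs with the private key; anyone holding the
// public key (published in the DID document) can verify the signature.
const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

const message = Buffer.from('did:example:123456789abcdefghi');

// Only the private-key holder can produce this signature...
const signature = crypto.sign(null, message, privateKey);

// ...but anyone can check it with the public key.
console.log(crypto.verify(null, message, publicKey, signature)); // true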

Store your DID in an authenticated digital wallet

In addition to all these security algorithms that protect your DID from being stolen, users typically store their DIDs in an authenticated digital wallet (similar to how your Apple Wallet or Google Wallet stores your debit card information). Storing your DID in a digital wallet provides an additional layer of security because you often have to use a form of authentication like Face ID or a passcode to access items stored in the wallet.

4. Your DID is more than a string

While your DID is a string, it's part of a larger JSON object called a DID document. In addition to the DID string, the DID document includes metadata like:

- Cryptographic keys to prove ownership
- Rules about how your DID can be used, managed, or modified

Below is an example of a DID document:

{
  "@context": [
    "https://www.w3.org/ns/did/v1",
    "https://w3id.org/security/suites/ed25519-2020/v1"
  ],
  "id": "did:example:123456789abcdefghi",
  "verificationMethod": [{
    "id": "did:example:123456789abcdefghi#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:example:123456789abcdefghi",
    "publicKeyMultibase": "zH3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
  }, ...],
  "authentication": [
    "#key-1"
  ]
}

Source: https://www.w3.org/TR/did-core/#example-an-example-of-a-relative-did-url

Learn more about DID documents here.

5. Every DID has a DID method

Let's examine the anatomy of a DID method.

Every DID:

- Starts with the scheme did:
- Is followed by a word or acronym representing the DID method
- Ends with a DID method-specific identifier in the form of an alphanumeric string

A scheme and a DID method together may look like did:web or did:jwk.

What are DID methods?

DID methods define the rules for creating, managing, and deactivating DIDs.
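
As a quick illustration of that anatomy, here is a small sketch that splits a DID into its scheme, method, and method-specific identifier. The parseDid helper is hypothetical, not part of any SDK.

// Sketch: split a DID into scheme, method, and method-specific identifier.
function parseDid(did) {
  const [scheme, method, ...rest] = did.split(':');
  if (scheme !== 'did' || !method || rest.length === 0) {
    throw new Error(`Not a valid DID: ${did}`);
  }
  // Rejoin in case the identifier itself contains colons.
  return { scheme, method, id: rest.join(':') };
}

console.log(parseDid('did:dht:abc123xyz'));
// { scheme: 'did', method: 'dht', id: 'abc123xyz' }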

6. There are over 100 DID methods

Anyone can create a DID method. Companies, individuals, or communities may create a custom DID method to fit a specific use case or live on a specific type of ledger. However, to ensure the DID method is recognized, interoperable, and meets the correct standards, it's strongly recommended to register the DID method in the W3C DID Spec Registry.

7. TBD created its own DID method

We (TBD) created our own DID method called DID:DHT. DHT stands for Distributed Hash Table, indicating the use of Mainline DHT. You can learn more about DID:DHT via the spec and this blog post written by TBD’s Director of Open Standards, Gabe Cohen.

At TBD, we colloquially call DID:DHT, DID THAT.

8. You don't have to use blockchain; we use BitTorrent

When I hear the words, "decentralized" or "immutable", I immediately think of blockchain and cryptocurrency. I don't think that train of thought is in the minority.

For instance, to ensure DIDs have no central authority and that individual users can own them, folks typically anchor DIDs on an immutable ledger.

What does anchoring a DID mean?

Anchoring a DID means recording DID transactions on a distributed ledger.

DID:DHT uses BitTorrent; not blockchain

At TBD, we actually took a blockchain-less approach. We anchored DID:DHT to BitTorrent. As mentioned above, DID:DHT uses a Mainline DHT, which is a distributed hash table used by the BitTorrent protocol.

9. You can create a DID with the Web5 SDK

You can use the Web5 SDK to create a DID!

Web5.connect()

You can generate a DID using the Web5.connect() method with the following steps:

Install the @web5/api package:

npm install @web5/api

Import the package:

import { Web5 } from '@web5/api';

Call Web5.connect():

const { web5, did: myDid } = await Web5.connect();
console.log(myDid);

More ways to create a DID

Check out the documentation to learn more ways to create a DID in various languages including JavaScript, Kotlin, and Swift.

Learn more about Decentralized Identifiers

Have a question? Ask it in our Discord; we're happy to help!
Eager to start building? Follow the guides in our documentation.
Curious about the entire ecosystem? Watch our YouTube videos.

Sunday, 28. April 2024

KuppingerCole

Analyst Chat #212: Securing the Front Door: The Importance of ITDR in Identity Protection

Matthias Reinwarth and Mike Neuenschwander discuss ITDR (Identity Threat Detection and Response) and its importance in cybersecurity. They explain that attackers are now targeting identities as a vector into enterprise systems, making it critical to have threat detection and response specifically for identity systems. They discuss the key features of ITDR tools, including identity posture, administrative functions, continuous monitoring, and response capabilities. They also highlight the need for collaboration between security operations centers and identity teams. The conversation concludes with a discussion on the evolving trend of identity-centric security and the upcoming European Identity & Cloud Conference (EIC) where ITDR will be a topic of discussion.



Friday, 26. April 2024

IBM Blockchain

AI transforms the IT support experience

Generative AI transforms customer service by introducing context-aware conversations that go beyond simple question-and-answer interactions. The post AI transforms the IT support experience appeared first on IBM Blog.

We know that understanding clients’ technical issues is paramount for delivering effective support service. Enterprises demand prompt and accurate solutions to their technical issues, requiring support teams to possess deep technical knowledge and communicate action plans clearly. Product-embedded or online support tools, such as virtual assistants, can drive more informed and efficient support interactions with client self-service.

About 85% of execs say generative AI will be interacting directly with customers in the next two years. Those who implement self-service search into their sites and tools can become exponentially more powerful with generative AI. Generative AI can learn from vast datasets and produce nuanced and personalized replies. The ability to understand the underlying context of a question (considering variables such as tone and sentiment) empowers AI to provide responses that align with the user’s specific needs, and with automation it can execute tasks, such as opening a ticket to order a replacement part.

Even when a topic comes up that the virtual assistant can’t solve on its own, automation can easily connect clients with a live agent who can help. If escalated to a live agent, an AI-generated summary of the conversation history can be provided, so the agent can seamlessly pick up where the virtual assistant left off.

As a developer of AI, IBM works with thousands of clients to help them infuse the technology throughout their enterprise for new levels of insights and efficiency. Much of our experience comes from implementing AI in our own processes and tools, which we can then bring to client engagements.

Our clients tell us their businesses require streamlined, proactive support processes that can anticipate user needs, leading to faster responses, minimized downtime and fewer future issues.

Clients can self-service 24/7 and proactively address potential issues

IBM Technology Lifecycle Services (TLS) leverages AI and automation capabilities to offer streamlined support services to IBM clients through various channels, including chat, email, phone and the web. Integrating AI and automation into our customer support tools and operations was pivotal for enhancing efficiency and elevating the overall client experience:

Online chat via Virtual Assistant: The IBM virtual assistant is designed to streamline service operations by providing a consistent interface to navigate through IBM. With access to various guides and past interactions, many inquiries can first be addressed through self-service. Additionally, it can transition to a live agent if needed, or alternatively open a ticket to be resolved by a support engineer. This experience is unified across IBM and powered by watsonx™, IBM’s AI platform.

Automated help initiated through the product: IBM servers and storage systems have a feature called Call Home/Enterprise Service Agent (ESA), which clients can enable to automatically send notifications to IBM 24x7x365. When Call Home has been enabled, the products send IBM the appropriate error details (such as for a drive failure or firmware error). For errors that require corrective actions (where valid support entitlement is in place), a service request will be automatically opened and worked per the terms of the client’s support contract. In fact, 91% of Call Home requests were responded to through automation. Service requests are electronically routed directly to the appropriate IBM support center with no client intervention. When a system reports a potential problem, it transmits essential technical detail, including extended error information such as error logs and system snapshots. The typical result for clients is streamlined problem diagnosis and resolution time.

Automated end-to-end view of clients’ IT infrastructure: IBM Support Insights Pro provides visibility across IBM clients’ IBM and multivendor infrastructure to unify the support experience. It highlights potential issues and provides recommended actions. This cloud-based service is designed to help IT teams proactively improve uptime and address security vulnerabilities with analytics-driven insights, inventory management and preventive maintenance recommendations. The service is built to help clients improve IT reliability, reduce support gaps and streamline inventory management for IBM and other OEM systems. Suggested mitigations and “what-if” analysis comparing different resolution options can help clients and support personnel identify the best option, given their chosen risk profile. Today, over 3,000 clients are leveraging IBM Support Insights to manage more than four million IT assets.

Empowering IBM support agents with automation tools and AI for faster case resolution and insights

Generative AI offers another advantage by discerning patterns and insights from the data it collects, engineered to help support agents navigate complex issues with greater ease. This capability gives agents comprehensive visibility into the client’s situation and history, empowering them to offer more informed assistance. Additionally, AI can produce automated summaries, tailored communications and recommendations, such as teaching clients better ways to use products, and offer valuable insights for the development of new services.

At IBM TLS, with access to watsonx technology and automation tools, we have built services to help our support engineers work more productively and efficiently. These include:

Agent Assist is an AI cloud service, based on IBM watsonx, used by IBM support agents. At IBM, we have an extensive product knowledge base, and pulling the most relevant information quickly is paramount when working on a case. Agent Assist supports teams by finding the most relevant information in the IBM knowledge base and providing recommended solutions to the agent. It helps agents save time by getting to the desired information faster.

Case summarization is another IBM watsonx AI-powered tool our agents use. Depending on complexity, some support cases can take several weeks to resolve. During this time, information such as the problem description, analysis results, action plans and other communication passes between the IBM Support team and the client. Providing updates and details for a case is crucial throughout its duration until resolution. Generative AI is helping to simplify this process, making it easier to create case summaries with minimal effort.

The IBM Support portal, powered by IBM Watson and Salesforce, provides a common platform for our clients and support agents to have a unified view of support tickets, regardless of how they were generated (voice, chat, web, call home and email). Once authenticated, users have visibility into all cases for their company across the globe. Additionally, IBM support agents can keep track of support trends across the globe, which are automatically analyzed and leveraged to provide fast, proactive tips and guidance. Agents get assistance with the first course of action and the creation of internal tech-notes to aid with generating documentation during the case closure process. This tool also helps them identify “Where is” and “How to” questions, which helps identify opportunities to improve support content and product user experience.
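
As a rough sketch of the kind of prompt assembly behind a case-summarization tool, the snippet below folds a case history into a single summarization request. The data shape and the buildSummaryPrompt helper are illustrative assumptions, not IBM’s actual implementation.

// Illustrative only: turn a case timeline into a summarization prompt that
// could be sent to an LLM. Field names here are assumptions.
const caseEvents = [
  { at: '2024-04-01', note: 'Client reports drive failure on array 7.' },
  { at: '2024-04-02', note: 'Replacement part dispatched to site.' },
];

function buildSummaryPrompt(events) {
  const history = events.map((e) => `${e.at}: ${e.note}`).join('\n');
  return `Summarize this support case for a handover note:\n${history}`;
}

console.log(buildSummaryPrompt(caseEvents));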

Meeting client needs and expectations in technical support involves a coordinated blend of technical expertise, good communication, effective use of tools and proactive problem-solving. Generative AI transforms customer service by introducing dynamic and context-aware conversations that go beyond simple question-and-answer interactions. This leads to a refined and user-centric interaction. Additionally, it can automate tasks, analyze data to identify patterns and insights and facilitate faster resolution of customer issues.

Optimize your infrastructure

The post AI transforms the IT support experience appeared first on IBM Blog.


SC Media - Identity and Access

Kaiser Permanente notifies 13.4M patients of potential data exposure

Patient data may have been transferred via apps to third-party vendors like Google, Microsoft and X.

Patient data may have been transferred via apps to third-party vendors like Google, Microsoft and X.


IBM Blockchain

Bigger isn’t always better: How hybrid AI pattern enables smaller language models

Bigger is not always better, as specialized models outperform general-purpose models with lower infrastructure requirements.  The post Bigger isn’t always better: How hybrid AI pattern enables smaller language models appeared first on IBM Blog.

As large language models (LLMs) have entered the common vernacular, people have discovered how to use apps that access them. Modern AI tools can generate, create, summarize, translate, classify and even converse. Tools in the generative AI domain allow us to generate responses to prompts after learning from existing artifacts.

One area that has not seen much innovation is at the far edge and on constrained devices. We see some versions of AI apps running locally on mobile devices with embedded language translation features, but we haven’t reached the point where LLMs generate value outside of cloud providers.

However, there are smaller models that have the potential to innovate gen AI capabilities on mobile devices. Let’s examine these solutions from the perspective of a hybrid AI model.

The basics of LLMs

LLMs are a special class of AI models powering this new paradigm. Natural language processing (NLP) enables this capability. To train LLMs, developers use massive amounts of data from various sources, including the internet. The billions of parameters they process are what make them so large.

While LLMs are knowledgeable about a wide range of topics, they are limited to the data on which they were trained. This means they are not always current or accurate. Because of their size, LLMs are typically hosted in the cloud, which requires beefy hardware deployments with lots of GPUs.

This means that enterprises looking to mine information from their private or proprietary business data cannot use LLMs out of the box. To answer specific questions, generate summaries or create briefs, they must include their data with public LLMs or create their own models. The way to append one’s own data to the LLM is known as retrieval-augmented generation, or the RAG pattern. It is a gen AI design pattern that adds external data to the LLM.
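
A toy sketch of the RAG idea follows, with simplifying assumptions: documents are ranked by simple keyword overlap rather than the vector embeddings a real system would use, and the result is just the augmented prompt that would be sent to the LLM.

// Toy RAG sketch: retrieve the best-matching document, then prepend it to
// the prompt. Real systems rank with vector embeddings, not keyword counts.
const docs = [
  'Our 5G rollout covers 120 cities as of Q1.', // illustrative data
  'Contact center hours are 8am to 8pm EST.',
];

function score(query, doc) {
  const text = doc.toLowerCase();
  return query.toLowerCase().split(/\W+/).filter(Boolean)
    .filter((term) => text.includes(term)).length;
}

function buildPrompt(query, k = 1) {
  const context = [...docs]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k)
    .join('\n');
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt('How many cities does the 5G rollout cover?'));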

Is smaller better?

Enterprises that operate in specialized domains, like telcos or healthcare or oil and gas companies, have a laser focus. While they can and do benefit from typical gen AI scenarios and use cases, they would be better served with smaller models.

In the case of telcos, for example, some of the common use cases are AI assistants in contact centers, personalized offers in service delivery and AI-powered chatbots for enhanced customer experience. Use cases that help telcos improve the performance of their network, increase spectral efficiency in 5G networks or help them determine specific bottlenecks in their network are best served by the enterprise’s own data (as opposed to a public LLM).

That brings us to the notion that smaller is better. There are now Small Language Models (SLMs) that are “smaller” in size compared to LLMs. SLMs are trained on tens of billions of parameters, while LLMs are trained on hundreds of billions of parameters. More importantly, SLMs are trained on data pertaining to a specific domain. They might not have broad contextual information, but they perform very well in their chosen domain.

Because of their smaller size, these models can be hosted in an enterprise’s data center instead of the cloud. SLMs might even run on a single GPU chip at scale, saving thousands of dollars in annual computing costs. However, the delineation between what can only be run in a cloud or in an enterprise data center becomes less clear with advancements in chip design.

Whether it is because of cost, data privacy or data sovereignty, enterprises might want to run these SLMs in their data centers. Most enterprises do not like sending their data to the cloud. Another key reason is performance. Gen AI at the edge performs the computation and inferencing as close to the data as possible, making it faster and more secure than through a cloud provider.

It is worth noting that SLMs require less computational power and are ideal for deployment in resource-constrained environments and even on mobile devices.

An on-premises example might be an IBM Cloud® Satellite location, which has a secure high-speed connection to IBM Cloud hosting the LLMs. Telcos could host these SLMs at their base stations and offer this option to their clients as well. It is all a matter of optimizing the use of GPUs, as the distance that data must travel is decreased, resulting in improved bandwidth.

How small can you go?

Back to the original question of being able to run these models on a mobile device. The mobile device might be a high-end phone, an automobile or even a robot. Device manufacturers have discovered that significant bandwidth is required to run LLMs. Tiny LLMs are smaller-size models that can be run locally on mobile phones and medical devices.

Developers use techniques like low-rank adaptation to create these models. They enable users to fine-tune the models to unique requirements while keeping the number of trainable parameters relatively low. In fact, there is even a TinyLlama project on GitHub.  
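
The core idea of low-rank adaptation can be shown with toy numbers: instead of updating a full d x d weight matrix, you train two thin matrices B (d x r) and A (r x d) with r much smaller than d, and add their product to the frozen weights. The sketch below is a minimal illustration with made-up values, not a training procedure.

// LoRA idea in miniature: W stays frozen; only B (d x r) and A (r x d) are
// trained, so trainable parameters drop from d*d to 2*d*r.
function matmul(X, Y) {
  return X.map((row) =>
    Y[0].map((_, j) => row.reduce((sum, x, k) => sum + x * Y[k][j], 0))
  );
}

const d = 4, r = 1;                                          // full dim vs. low rank
const W = Array.from({ length: d }, () => Array(d).fill(1)); // frozen weights
const B = [[0.1], [0.2], [0.3], [0.4]];                      // d x r, trainable
const A = [[1, 0, -1, 0]];                                   // r x d, trainable

const delta = matmul(B, A);                                  // low-rank update
const adapted = W.map((row, i) => row.map((w, j) => w + delta[i][j]));
console.log(adapted); // W + B*A, at a fraction of the trainable parameters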

Chip manufacturers are developing chips that can run a trimmed-down version of LLMs through image diffusion and knowledge distillation. Systems-on-chip (SoCs) and neural processing units (NPUs) assist edge devices in running gen AI tasks.

While some of these concepts are not yet in production, solution architects should consider what is possible today. SLMs working and collaborating with LLMs may be a viable solution. Enterprises can decide to use existing smaller specialized AI models for their industry or create their own to provide a personalized customer experience.

Is hybrid AI the answer?

While running SLMs on-premises seems practical and tiny LLMs on mobile edge devices are enticing, what if the model requires a larger corpus of data to respond to some prompts? 

Hybrid cloud computing offers the best of both worlds. Might the same be applied to AI models?

When smaller models fall short, a hybrid AI model could provide the option to access an LLM in the public cloud. This would allow enterprises to keep their data secure within their premises by using domain-specific SLMs, while accessing LLMs in the public cloud when needed. As mobile devices with SoCs become more capable, this seems like an efficient way to distribute generative AI workloads.
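
A minimal sketch of that routing decision follows, under stated assumptions: askLocalSlm and askCloudLlm are hypothetical stand-ins for an on-premises domain SLM and a public-cloud LLM, and the confidence threshold is arbitrary.

// Hypothetical hybrid router: try the on-prem domain SLM first (data stays
// local); fall back to the public-cloud LLM when confidence is too low.
async function askLocalSlm(prompt) {
  const inDomain = prompt.toLowerCase().includes('network'); // toy heuristic
  return { answer: inDomain ? 'Cell 42 is congested.' : null,
           confidence: inDomain ? 0.9 : 0.2 };
}

async function askCloudLlm(prompt) {
  return { answer: `General answer to: ${prompt}`, confidence: 0.7 };
}

async function hybridAnswer(prompt, threshold = 0.8) {
  const local = await askLocalSlm(prompt);
  if (local.confidence >= threshold) return local.answer;
  return (await askCloudLlm(prompt)).answer;
}

hybridAnswer('Why is network cell 42 slow?').then(console.log);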

IBM® recently announced the availability of the open source Mistral AI model on its watsonx™ platform. This compact LLM requires fewer resources to run, but it is just as effective and has better performance compared to traditional LLMs. IBM also released a Granite 7B model as part of its highly curated, trustworthy family of foundation models.

It is our contention that enterprises should focus on building small, domain-specific models with internal enterprise data to differentiate their core competency and use insights from their data (rather than venturing to build their own generic LLMs, which they can easily access from multiple providers).

Bigger is not always better

Telcos are a prime example of an enterprise that would benefit from adopting this hybrid AI model. They have a unique role, as they can be both consumers and providers. Similar scenarios may be applicable to healthcare, oil rigs, logistics companies and other industries. Are the telcos prepared to make good use of gen AI? We know they have a lot of data, but do they have a time-series model that fits the data?

When it comes to AI models, IBM has a multimodel strategy to accommodate each unique use case. Bigger is not always better, as specialized models outperform general-purpose models with lower infrastructure requirements. 

Create nimble, domain-specific language models
Learn more about generative AI with IBM

The post Bigger isn’t always better: How hybrid AI pattern enables smaller language models appeared first on IBM Blog.


auth0

Be a DevOps Hero! Automate Your Identity Infrastructure with Auth0 CLI and Ansible

Learn how to automate your identity infrastructure with the help of Auth0 CLI and Ansible.

SC Media - Identity and Access

Phishing attack compromises LA County Health Services data

Individuals receiving healthcare across Los Angeles had their personal and health data compromised following a successful phishing attack against Los Angeles County Department of Health Services, which is the second largest U.S. public healthcare system, in February, according to BleepingComputer.


Microsoft credentials targeted by phishing campaign using Autodesk Drive

Hacked email accounts have been used by threat actors to facilitate a phishing campaign that involves the use of Autodesk Drive-hosted PDF documents to compromise Microsoft account credentials, SecurityWeek reports.


Elliptic

Tracking illicit actors through bridges, DEXs, and swaps

Detecting illicit activities when looking at crypto movements has always been complex, but as assets and blockchains become increasingly interconnected, this problem has become even more difficult to solve. However, with the release of our new Holistic upgrade, Elliptic users can trace through asset swaps with ease.


Innopay

INNOPAY to present during Dutch insurance event


The Dutch Association of Insurers, together with Plug and Play and INNOPAY, is organising an event called ‘Open Insurance & Innovation: An Evolution in the Dutch Insurance Sector’ on 11 June. Besides an interesting line-up from the field of regulators, innovators and business, Maarten Bakker, INNOPAY Vice President, will be speaking about the upcoming EU Financial Data Access Regulation (FIDA) and the necessity for scheme building.

Click here to sign up for the event (13:00-18:00 on 11 June, Amsterdam).


IBM Blockchain

VeloxCon 2024: Innovation in data management

VeloxCon 2024 brought together industry leaders, engineers, and enthusiasts to explore the future of data management. The post VeloxCon 2024: Innovation in data management appeared first on IBM Blog.

VeloxCon 2024, the premier developer conference that is dedicated to the Velox open-source project, brought together industry leaders, engineers, and enthusiasts to explore the latest advancements and collaborative efforts shaping the future of data management. Hosted by IBM® in partnership with Meta, VeloxCon showcased the latest innovation in Velox including project roadmap, Prestissimo (Presto-on-Velox), Gluten (Spark-on-Velox), hardware acceleration, and much more.

An overview of Velox

Velox is a unified execution engine that is built and open-sourced by Meta, aimed at accelerating data management systems and streamlining their development. One of the biggest benefits of Velox is that it consolidates and unifies data management systems so you don’t need to keep rewriting the engine. Today Velox is in various stages of integration with several data systems including Presto (Prestissimo), Spark (Gluten), PyTorch (TorchArrow), and Apache Arrow. You can read more about why Velox was built in Meta’s engineering blog.

Velox at IBM

Presto is the engine for watsonx.data, IBM’s open data lakehouse platform. Over the last year, we’ve been working hard on advancing Velox for Presto – Prestissimo – at IBM. Presto Java workers are being replaced by a C++ process based on Velox. We now have several committers to the Prestissimo project and continue to partner closely with Meta as we work on building Presto 2.0.

Some of the key benefits of Prestissimo include:

- Huge performance boost: query processing can be done with much smaller clusters
- No performance cliffs: no Java processes, JVM, or garbage collection, as memory arbitration improves efficiency
- Easier to build and operate at scale: Velox gives you reusable and extensible primitives across data engines (like Spark)
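
Because Prestissimo swaps out the workers behind the coordinator while the client-facing REST protocol stays the same, the change is invisible to client code. A minimal sketch, assuming the standard Presto REST protocol and a reachable coordinator (the hostname, user, and query below are placeholders):

```typescript
// Submit a query via POST /v1/statement, then follow nextUri pages until done.
type PrestoResponse = {
  nextUri?: string;
  columns?: { name: string }[];
  data?: unknown[][];
};

async function runQuery(sql: string): Promise<unknown[][]> {
  const rows: unknown[][] = [];
  let res = await fetch("http://presto-coordinator:8080/v1/statement", {
    method: "POST",
    headers: { "X-Presto-User": "analyst" }, // user header required by the protocol
    body: sql,
  });
  let page = (await res.json()) as PrestoResponse;
  while (true) {
    if (page.data) rows.push(...page.data); // early pages may carry no data yet
    if (!page.nextUri) break;               // no nextUri means the query finished
    res = await fetch(page.nextUri);        // poll the next result page
    page = (await res.json()) as PrestoResponse;
  }
  return rows;
}

runQuery("SELECT 1").then((r) => console.log(r));
```

Nothing in this loop depends on whether the workers executing the query are Java processes or Velox-based C++ processes.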

This year, we plan to do even more with Prestissimo including:

- The Iceberg reader
- Production readiness (metrics collection with Prometheus)
- New Velox system implementation
- TPC-DS benchmark runs

VeloxCon 2024

We worked closely with Meta to organize VeloxCon 2024, and it was a fantastic community event. We heard speakers from Meta, IBM, Pinterest, Intel, Microsoft, and others share what they’re working on and their vision for Velox over two dynamic days.

Day 1 highlights

The conference kicked off with sessions from Meta including Amit Purohit reaffirming Meta’s commitment to open source and community collaboration. Pedro Pedreira, alongside Manos Karpathiotakis and Deblina Gupta, delved into the concept of composability in data management, showcasing Velox’s versatility and its alignment with Arrow.

Amit Dutta of Meta explored Prestissimo’s batch efficiency at Meta, shedding light on the advancements made in optimizing data processing workflows. Remus Lazar, VP Data & AI Software at IBM presented Velox’s journey within IBM and vision for its future. Aditi Pandit of IBM followed with insights into Prestissimo’s integration at IBM, highlighting feature enhancements and future plans.

The afternoon sessions were equally insightful: Jimmy Lu of Meta unveiled the latest optimizations and features in Velox, while Binwei Yang of Intel discussed the integration of Velox with the Apache Gluten project, emphasizing its global impact. Engineers from Pinterest and Microsoft shared their experiences of unlocking data query performance by using Velox and Gluten, showcasing tangible performance gains.

The day concluded with sessions from Meta on Velox’s memory management by Xiaoxuan Meng and a glimpse into the new simple aggregation function interface that was presented by Wei He.

Day 2 highlights

The second day began with a keynote from Orri Erling, co-creator of Velox. He shared insights into Velox Wave and Accelerators, showcasing its potential for acceleration. Krishna Maheshwari from NeuroBlade highlighted their collaboration with the Velox community, introducing NeuroBlade’s SPU (SQL Processing Unit) and its transformative impact on Velox’s computational speed and efficiency.

Sergei Lewis from Rivos explored the potential of offloading work to accelerators to enhance Velox’s pipeline performance. William Malpica and Amin Aramoon from Voltron Data introduced Theseus, a composable, scalable, distributed data analytics engine, using Velox as a CPU backend.

Yoav Helfman from Meta unveiled Nimble, a cutting-edge columnar file format that is designed to enhance data storage and retrieval. Pedro Pedreira and Sridhar Anumandla from Meta elaborated on Velox’s new technical governance model, emphasizing its importance in guiding the project’s sustainable development.

The day also featured sessions on Velox’s I/O optimizations by Deepak Majeti from IBM, strategies for safeguarding against Out-Of-Memory (OOM) kills by Vikram Joshi from ComputeAI, and a hands-on demo on debugging Velox applications by Deepak Majeti.

What’s next with Velox

VeloxCon 2024 was a testament to the vibrant ecosystem surrounding the Velox project, showcasing groundbreaking innovations and fostering collaboration among industry leaders and developers alike. The conference provided attendees with valuable insights, practical knowledge, and networking opportunities, solidifying Velox’s position as a leading open source project in the data management ecosystem.

If you’re interested in learning more and joining the Velox community, here are some resources to get started:

- Join the Presto Native Worker working group
- Prestissimo GitHub
- Velox GitHub
- Session recordings from the conference
- A recap video created by Velox community member Emmanuel Francis

Stay tuned for more updates and developments from the Velox community, as we continue to push the boundaries of data management and accelerate innovation together.

Try Presto with a free trial of watsonx.data

The post VeloxCon 2024: Innovation in data management appeared first on IBM Blog.

Thursday, 25. April 2024

Entrust

Why Zero Trust is a Must for Strong Corporate Governance

Gone are the days of delegating technology and cybersecurity concerns to be solved solely by the IT department. With artificial intelligence (AI), post-quantum (PQ), and an intensifying threat landscape, senior leadership teams and boards must make the right investments and provide strategic guidance to help keep the organization, employees, customers, and other key stakeholders safe. If that’s not enough incentive, federal agencies are continuing efforts to accelerate breach disclosures and hold executives liable for security and data privacy incidents. This is why pursuing an enterprise-wide Zero Trust strategy is critical for strong corporate governance and increasingly a board-level priority.

Reinforcing this strategic link between Zero Trust and governance is NIST’s recently released Cybersecurity Framework (CSF) 2.0. The renewed CSF provides guidance and examples for adopting Zero Trust and adds “Govern” to the other five key critical framework functions of Identify, Protect, Detect, Respond, and Recover. While governance was implied in earlier CSF iterations, it is now codified to ensure an organization’s strategy is directly linked to cybersecurity roles and responsibilities, informing the business what it needs to do to address the other five functions. NIST’s focus on governance reinforces that the entire leadership team is in this together and really calls out the fiduciary responsibilities of the board.

All this focus on governance is key to minimizing business risk and protecting shareholder value, but also puts tremendous pressure on leadership teams to effectively communicate cyber risks to their board and meet regulatory requirements. This is where Zero Trust comes in.

Zero Trust is not a product to buy or a box to check. As an executive officer or director, you should understand it’s a strategic approach. Zero Trust improves cyber resilience and can also serve to increase an organization’s agility, reduce cost of compliance, decrease IT complexity and total cost of ownership, and of course strengthen corporate governance. CISA’s Zero Trust Maturity Model 2.0 provides a roadmap to pursue a Zero Trust strategy with updated guidelines around the five key pillars of Identity, Devices, Networks, Data, and Applications and Workloads. Like the CSF 2.0, governance is front and center in this latest version. CISA’s updated guidelines reinforce that governance of cybersecurity policies, procedures, and processes within and across the five pillars is essential to improving cyber resilience and maintaining regulatory compliance.

So, there you have it. While long considered a cybersecurity best practice, pursuing a Zero Trust strategy is now also an express requirement from both NIST and CISA for strong corporate governance.

The post Why Zero Trust is a Must for Strong Corporate Governance appeared first on Entrust Blog.


1Kosmos BlockID

Elevating Government Digital Transformation with an Advanced Credential Service Provider – 1Kosmos

As a leading provider of identity management solutions, 1Kosmos is excited to announce our new capability as a Credential Service Provider (CSP) for government agencies. This development represents a significant step forward in our mission to revolutionize the way organizations of any type can manage digital identities and secure access to critical resources.

What is a Credential Service Provider (CSP)?

A CSP is a trusted entity, in this case 1Kosmos, responsible for ID verification, onboarding and delivery of strong phishing resistant credentials. 1Kosmos acts as a managed service, verifying and authenticating citizens and residents accessing government data and services. From initial registration to ongoing authentication, the 1Kosmos CSP plays a vital role in ensuring secure access.

There are five main considerations a full-service CSP can deliver:

- Security: A CSP must have robust security measures in place to protect against unauthorized access and cyber threats. This includes advanced encryption protocols, secure storage of credentials, and continuous monitoring for suspicious activity.
- Compliance: Meeting stringent regulations and standards – some industry, some governmental – is a fundamental requirement for any CSP.
- User Experience: While security is paramount, a seamless user experience is crucial for user adoption and operational efficiency. A user-centric approach ensures that the authentication process is intuitive and easy to navigate.
- Privacy: Privacy and security of citizen biometrics and other personally identifiable information (PII) is critical to comply with the 230+ privacy regulations around the world, including the California Consumer Privacy Act (CCPA). It is also important to give residents the assurance that their information is not accessible without their explicit consent.
- Access for All: Providing equal access to all citizens and residents is complicated. While much of the population may have access to mobile devices and desktops, that will not be the case for some, so alternate use cases will need to be considered and addressed.

Why is a CSP Needed?

Demands by residents for digital government services have resulted in rampant identity fraud, costing taxpayers millions. Stopping fraud means blocking synthetic and stolen identities during the application process and securing resident accounts from phishing and social engineering attacks aimed at account takeover. Delivering these services is no easy task. Investment in hardware, software, and management resources is high, with little return. Additionally, each agency would need to invest in delivering these services, creating a drain on taxpayers’ dollars while providing an experience that differs across all agencies.

Government agencies face unique challenges in implementing and maintaining effective identity management solutions. From the complexity of integrating with existing IT systems to stringent compliance requirements, these challenges can often hinder the adoption of robust identity management solutions. This is where a CSP solves these unique problems.

Elevating Government Identity Management with 1Kosmos

As a leading provider of identity management solutions, 1Kosmos is uniquely positioned to elevate the way government agencies approach credential service and identity management. By leveraging the capabilities available in the 1Kosmos Credential Service Provider (CSP) solution, government agencies can unlock a range of benefits that set 1Kosmos apart in the market.

Robust Identity Proofing and Credential Issuance

1Kosmos’ CSP solution enables government agencies to perform Identity Assurance Level 2 (IAL2) identity verification and issue Authenticator Assurance Level 2 (AAL2) credentials that are certified to NIST 800-63-3 standards. This ensures a high level of confidence in the identity of citizens and residents accessing services and resources. 

Secure and Decentralized Identity Management

The 1Kosmos CSP leverages a “privacy by design” approach, empowering residents with complete control over their personal information. By utilizing a private and permissioned distributed ledger, the solution decentralizes data storage and eliminates the risk of a centralized “honeypot” of personally identifiable information (PII).

Combating Fraud and Phishing

1Kosmos’ CSP incorporates advanced security measures to combat phishing and fraudulent activities targeting citizens and residents. The solution offers a streamlined, self-service identity verification process and leverages phishing-resistant authentication methods, such as FIDO passkeys and biometric authentication, to enhance the overall security posture, safeguarding citizen and resident accounts.

Seamless Integration and Interoperability

1Kosmos’ CSP is designed to integrate seamlessly with existing government IT systems and infrastructure, reducing the complexity and time required for implementation. Additionally, the solution’s adherence to open standards ensures interoperability across different government agencies and systems, enabling a unified approach to identity management.

Scalability and Cost-Effectiveness

The 1Kosmos CSP is highly scalable, allowing government agencies to accommodate growing user and transaction volumes without incurring significant additional costs. By automating identity enrollment and authentication processes, the solution also helps to reduce IT management overhead and operational expenses.

Non-Biased Decisioning and Access for All

1Kosmos adopts an innovative approach to identity proofing and authentication based on deterministic verification of an individual to truly identify the user behind a device rather than assuming identity based on how closely they resemble a static, unverified biometric. 1Kosmos utilizes government-issued identification documents and live biometrics to identify and authenticate citizens and residents. At no time is anyone compared to others in a database. Their real, live biometrics and government issued IDs are used to verify their identities digitally, just as they would be verified in person, but without human error.

Privacy-Preserving Identity Management

1Kosmos’ CSP is designed with a strong focus on privacy protection. The solution employs advanced cryptographic techniques, including cryptographically paired public-private key architectures and zero-knowledge proofs, to enable citizens to selectively disclose only the necessary information required for authentication, without revealing their full identity details. This ensures that personal data is kept secure and minimizes the risk of unauthorized access or misuse.

By choosing 1Kosmos as their credential service provider, government agencies can leverage these key benefits to enhance the security, efficiency, and user experience of their identity management initiatives. As agencies strive to modernize their identity management capabilities and comply with evolving regulatory requirements, 1Kosmos’ CSP delivers a complete full-service solution that addresses the unique challenges faced by the government sector.

If you would like to learn more about our CSP, click here.

Read the press release here.

Get a demo of the 1Kosmos CSP.

The post Elevating Government Digital Transformation with an Advanced Credential Service Provider – 1Kosmos appeared first on 1Kosmos.


Anonym

4 Pieces of Advice our CTO Has for Companies Developing Identity Products

For companies developing products in the booming identity and privacy space, our CTO, Dr Paul Ashley, has this simple but powerful advice:  

1. “Have a way into your app that’s free, then have tiers starting cheaply; for example, 99 cents. Users tend to go up sub levels as they embrace and rely on the app.”
2. Focus on use cases more than features: “Use cases are interesting to users.”
3. Listen to users through support: “Get stuff out to real users; get it in production so you can learn what’s right and wrong.”
4. Look for processes where you’re “using paper or have tricky integration challenges” and apply future-focused decentralized identity technology to find a solution.

Dr Ashley should know: not only is he in demand on the decentralized identity speaking circuit, he has been at the helm of privacy pioneer, Anonyome Labs, alongside co-CEO JD Mumford for years. 

Dr Ashley gave the advice on a recent episode of the Future of Identity podcast with host Riley Hughes. 

In the episode, Dr Ashley takes the listener through the history and success of Anonyome Labs’ all-in-one privacy app MySudo, as well as our enterprise platform, Sudo Platform (for which MySudo is an exemplar application), and explains how our products link to and enable decentralized identity—the biggest privacy breakthrough of the next decade.  

As Riley says, “This is a conversation that will interest anyone who has a passion for privacy and safety online and will be very insightful for anyone building a consumer identity product.” 

Listen to the episode now

Apart from product development advice, the episode covers 6 topics: 

What’s the WHY for MySudo? 

Dr Ashley explains how the erosion of privacy online and the rise of data brokers and surveillance capitalism led Anonyome Labs to create our innovative ID tech product, MySudo.  

“You could see the user really had very little hope of protecting themselves. We wanted to offer tech for everyday users that provides them greater privacy, greater safety and security and our mission was to deliver product in that space,” Dr Ashley says. 

Now highly regarded as the world’s only all-in-one privacy app, MySudo allows users to create secure digital profiles or personas called Sudos, with unique disposable phone numbers, emails, credit cards, handles, and private browsers to use across the internet. 

As host Riley points out, with hundreds of thousands of users, “MySudo has defied conventional wisdom that consumers won’t pay for identity services, making it among a handful of successful, sustainable ID tech businesses.”  

Discover MySudo

What’s the history of MySudo? 

Dr Ashley says Anonyome Labs realized early the privacy problem was an identity problem. 

“The identity problem was that when you went out into the world and did different things, it was always you with the same identifiers and it made it very simple for the trackers to follow you around. That was because users were using their single, real identity to do everything. Then, if there was a sale or theft of that info, the user was in trouble,” he explained. 

In response, Anonyome Labs came up with the idea of a Sudo, based on the concept of a pseudonym, and then applied the concept of compartmentalization. MySudo leverages compartmentalization by allowing the user to have multiple Sudo digital identities or personas—one digital identity for shopping, one for dating, one for selling on classifieds, one close to the user’s legal identity for booking airline tickets, and so on. See how a MySudo user applies the app to their busy life.  

The MySudo we know today is actually a combination of two previous app iterations: Sudo app for communications and Sudo Pay which introduced Anonyome Labs’ virtual cards to meet the need for limitless virtual payment card options.  

As Dr Ashley told Riley Hughes, “Users can go out with a whole bunch of different personas to do different things. Each persona has its own attributes plus can have a VPN profile, virtual credit card and so on—all the things you need to be different in situation A and situation B.”  

“MySudo has been out there for three to four years now [as] a consumer ID product for creating personas for people to use throughout their life online,” Dr Ashley summarized. 

How does Anonyome Labs market MySudo? 

In the podcast episode, Dr Ashley breaks down how Anonyome Labs has succeeded by taking a practical product approach and by talking about use cases more than features.

The most important use case for MySudo is the user logging in with a separate Sudo email and phone number rather than their personal ones. Most MySudo users apply that capability to online shopping, selling, dating, and travel. Users can also do 2FA with MySudo, Dr Ashley says. 

How does decentralized identity fit into our product roadmap? 

Dr Ashley outlines some of the opportunities and challenges in the “nascent space” of decentralized identity (DI), including navigating the complex technology landscape, how to find good problems for decentralized identity to solve first, and how Anonyome Labs’ experience building what some would call a “web2 identity product” (MySudo) is informing the way we tackle the UX of verifiable credentials.

Dr Ashley says DI is a natural accompaniment to Sudos: “We came at MySudo from the privacy direction—a toolkit of privacy tools. Then we saw decentralized identity come along and we said straight away it’s a tech that’s 100% designed for privacy and safety,” Dr Ashley explains. 

Anonyome Labs has been active in the DI space ever since: helping to run three DI blockchains, actively contributing to DI standards, contributing source code back to the DI open source community, and building DI capabilities into Sudo Platform for enterprises to leverage, including identity wallets.  

“DI is the future-leaning part of the business and we’re 100% behind it,” Dr Ashley says.  

How will Anonyome Labs take the DI concepts to the people? 

Dr Ashley says in the past four years DI has boomed, including around standards and what he calls “the killer feature of DI, verifiable credentials”, which has led to the need for an identity wallet: “There’s no question that having that stack is correct,” Dr Ashley says. 

Dr Ashley then runs through many use cases for DI, including the big one: selective disclosure around digital proof of identity, where “we’re already seeing lots of great government uses such as licenses,” and everyday applications such as gym memberships.  

“We have got to stop people having hundreds of different passwords in a password manager.  

“Even just managing the log-in problem will be a marvelous thing for the world,” Dr Ashley says.  

Read our whitepaper: Realizing the Business Value of Decentralized Identity.  

Dr Ashley told Riley Hughes that Sudos are a bridge between the old and new worlds of identity management: “This technology is absolutely perfect for Anonyome Labs and fits into the Sudo perfectly.” 

And interest from enterprise is booming: “All of a sudden there are a lot of organizations wanting to do projects in this area. We have a lot of expertise over four-plus years, plus two years of project experience. A lot of enterprises are saying this tech looks good. The timing is great; the wave is cresting.” 

“But we’re not looking at boiling the ocean. It’s about making processes more efficient— replacing paper-based systems [with] simpler, cheaper [processes] using verifiable credentials. 

“Enterprises should look for processes where [they’re] using paper or have tricky integration challenges. They should also listen to users through support. Get stuff out to real users; get it in production so you can learn what’s right and wrong,” Dr Ashley advises.  

Get more advice on getting started with DI here

What does the future of identity look like to Dr Ashley?  

The podcast episode ends with this hope from Dr Ashley: “We have got to put the user back in control and swing the pendulum back. It’s not easy because of the trillion dollar companies like Meta and Google not wanting that. But enterprises are going back to respecting user data and only using what they need.  

“In the next 10 years, there’ll be a tussle but hopefully the user will win and get their privacy back, and hopefully the big data companies will have just faded away,” he says. 

Download MySudo as an exemplar of what’s possible with Sudo Platform  

For more on Sudo Platform, look on our website and contact us to talk about how we can help you quickly get identity and privacy products to market. 

The post 4 Pieces of Advice our CTO Has for Companies Developing Identity Products appeared first on Anonyome Labs.


auth0

Fine-Grained Authorization in your Next.js application using Okta FGA

In this post, we will focus on adding fine-grained authorization to your Next.js application using Okta FGA.
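
For a flavor of what such a check looks like, here is a minimal sketch using the @openfga/sdk JavaScript client (Okta FGA exposes an OpenFGA-compatible API). The environment URL, store and model IDs, and the viewer/document relation names are assumptions, and credential configuration is omitted for brevity:

```typescript
import { OpenFgaClient } from "@openfga/sdk";

// Hypothetical configuration; real values come from your Okta FGA dashboard.
const fga = new OpenFgaClient({
  apiUrl: "https://api.us1.fga.dev",
  storeId: process.env.FGA_STORE_ID!,
  authorizationModelId: process.env.FGA_MODEL_ID!,
});

// E.g. inside a Next.js route handler: allow the request only if the
// user has the "viewer" relation on the requested document.
export async function canView(userId: string, docId: string): Promise<boolean> {
  const { allowed } = await fga.check({
    user: `user:${userId}`,
    relation: "viewer",
    object: `document:${docId}`,
  });
  return allowed ?? false;
}
```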

Microsoft Entra (Azure AD) Blog

Collaborate across M365 tenants with Entra ID multi-tenant organization

Hi everyone,

I’m excited to announce the general availability of Microsoft Entra ID multi-tenant organization platform capabilities!

As your organization evolves, you may need to integrate multiple tenants to facilitate collaboration; for example, your organization may have recently acquired a new company, merged with another company, or restructured with newly formed business units. With disparate identity management systems, it can be costly and complex for admins to manage multiple tenants while ensuring users across tenants have access to the resources they need to collaborate.

To enable application sharing across tenants, many of you have already deployed features like B2B collaboration to grant application access across tenants in your organization, cross-tenant access settings to allow for granular access controls, and cross-tenant synchronization to manage the lifecycle of users across tenants.

To further improve employee collaboration across your tenants, many of you have asked for unified chat across tenants in Microsoft Teams and seamless cross-tenant employee engagement and community experiences in Viva Engage.

You can now use Entra ID multi-tenant organizations to improve the cross-tenant collaboration experience in Microsoft Teams and Viva Engage. The capabilities are now generally available with Microsoft Entra ID P1 in the M365 commercial cloud.

Multi-tenant organization capabilities:

Get started with Entra ID multi-tenant organizations:

In this example, we’ll follow Contoso EMEA and Contoso APAC, two divisions of Contoso Conglomerate. Employees from Contoso EMEA and APAC are already using Microsoft Entra to share apps across tenants. Now they need to communicate across tenants using Microsoft Teams and Viva Engage. Leaders in the organization need to share announcements and storylines across the organization and employees need to chat using Teams. Let’s look at how the admins at Contoso configure their multi-tenant organization to meet these needs.

Step 1 – Form a multi-tenant organization

The tenant administrators of Contoso EMEA and Contoso APAC agree to form an Entra ID multi-tenant organization, facilitated by an invite-and-accept flow between them. The Contoso EMEA admin navigates to the M365 admin center and initiates the process, while the Contoso APAC admin confirms. This results in a mutually agreed upon multi-tenant organization of two tenants in both directories.

Microsoft Teams and Viva Engage applications in Contoso EMEA (APAC) will now interpret any external member users of identity provider Contoso APAC (EMEA) as employees of the multi-tenant organization, with a correspondingly improved collaboration experience.

[Caption] A multi-tenant organization of two tenants in the M365 admin center.

Step 2 – Provision external member users at scale

Microsoft Teams’ improved collaboration experience relies on reciprocal provisioning of collaborating users. So, Alice of Contoso EMEA should be provisioned as an external member user into Contoso APAC, while Bob of Contoso APAC should be provisioned as an external member user into Contoso EMEA.

Viva Engage’s improved employee engagement experiences rely on centralized provisioning of employees into a central tenant, say Contoso EMEA. As such, Bob of Contoso APAC should be provisioned as an external member user into Contoso EMEA.

Cross-tenant synchronization is the ideal tool to accomplish this at scale, via the Entra admin center for complex identity landscapes, or via the M365 admin center for simplified setups. If you already have your own at-scale user provisioning engine, you can continue using it.

Step 3 – Complete requirements by Microsoft Teams or Viva Engage

Any Microsoft Teams requirements, such as using the new Teams clients, can be found under M365 multi-tenant collaboration, while any Viva Engage configuration requirements can be found under Viva Engage for multi-tenant organizations.

Once your requirements for Microsoft Teams and/or Viva Engage have been completed, your employees will be able to collaborate seamlessly across your organization of multiple tenants, with unified chat experiences in Microsoft Teams and seamless conversations in Viva Engage communities.

How does multi-tenant organization licensing work?

Entra ID multi-tenant organization license requirement – Your employees can enjoy the new multi-tenant organization platform benefits with Microsoft Entra ID P1 licenses. Only one Microsoft Entra ID P1 license is required per employee per multi-tenant organization. For Microsoft Teams and Viva Engage license requirements, please review the M365 multi-tenant organization announcement.

I’m very excited about this milestone, which helps your multi-tenant organization achieve better collaboration and communication experiences for your employees! Go and plan your multi-tenant organization rollout today. We love hearing from you and look forward to your feedback on the Azure forum.

Joseph Dadzie, Partner Director of Product Management

Linkedin: @joedadzie

Twitter: @joe_dadzie

Learn more about Microsoft Entra:

Related articles:
- What is a multi-tenant organization in Microsoft Entra ID?
- What is cross-tenant synchronization in Microsoft Entra ID?
- Properties of a B2B guest user – Microsoft Entra External ID
- See recent Microsoft Entra blogs
- Dive into Microsoft Entra technical documentation
- Join the conversation on the Microsoft Entra discussion space and Twitter
- Learn more about Microsoft Security

SC Media - Identity and Access

FTC sends $5.6M in refunds to Ring users impacted by unwanted access, hacks

Amazon's home security subsidiary Ring will have $5.6 million worth of refunds distributed by the Federal Trade Commission to users whose video feeds were subjected to unauthorized access or whose accounts were compromised, BleepingComputer reports.


Samourai cryptomixer founders indicted for money laundering

BleepingComputer reports that the U.S. Department of Justice has filed charges against cryptocurrency mixer service Samourai founders Keonne Rodriguez and William Lonergan Hill for engaging in a far-reaching money laundering scheme.


Almost a billion users' keystrokes possibly leaked by Chinese keyboard apps

Eight of nine major Chinese keyboard apps were found to have vulnerabilities that could be leveraged to expose nearly a billion users' keystrokes, The Hacker News reports.


IBM Blockchain

How generative AI will revolutionize supply chain 

From addressing queries to resolving alerts, learn how the combination of data and AI will transform businesses from reactive to proactive. The post How generative AI will revolutionize supply chain  appeared first on IBM Blog.

Unlocking the full potential of supply chain management has long been a goal for businesses that seek efficiency, resilience and sustainability. In the age of digital transformation, the integration of advanced technologies like generative artificial intelligence brings a new era of innovation and optimization. AI tools help users address queries and resolve alerts by using supply chain data, and natural language processing helps analysts access inventory, order and shipment data for decision-making. 

A recent IBM Institute for Business Value study, The CEO’s guide to generative AI: Supply chain, explains how the powerful combination of data and AI will transform businesses from reactive to proactive. Generative AI, with its ability to autonomously generate solutions to complex problems, will revolutionize every aspect of the supply chain landscape. From demand forecasting to route optimization, inventory management and risk mitigation, the applications of generative AI are limitless. 

Here are some ways generative AI is transforming supply chain management: 

Sustainability

Generative AI helps to optimize companies’ supply chains for sustainability by identifying opportunities to reduce carbon emissions, minimize waste and promote ethical sourcing practices through scenario analysis and optimization algorithms. For example, combining generative AI with technologies such as blockchain helps to keep data about the material-to-product transformation unchangeable across different entities, providing clear visibility into products’ origin and carbon footprint. This gives companies proof of sustainability to drive customer loyalty and comply with regulations. 

Inventory management

Generative AI models can continuously generate optimized replenishment plans based on real-time demand signals, supplier lead times and inventory levels. This helps maintain optimal stock levels that minimize carrying costs and can improve customer satisfaction through accurate available-to-promise (ATP) calculations and AI-driven fulfillment optimization. 

Supplier relationship management

Generative AI can analyze supplier performance data and market conditions to identify potential risks and opportunities, recommend alternative suppliers and negotiate favorable terms, enhancing supplier relationship management. 

Risk management

Generative AI models can simulate various risk scenarios, such as supplier disruptions, natural disasters, weather events or even geopolitical events, allowing companies to proactively identify vulnerabilities or react to disruptions with agility. AI-supported what-if modeling helps develop contingency plans such as inventory, supplier or distribution center reallocation. 

Route optimization

Generative AI algorithms can dynamically optimize transportation routes based on factors like traffic conditions, weather forecasts and delivery deadlines, reducing transportation costs and improving delivery efficiency. 

Demand forecasting

Generative AI can analyze historical data and market trends to generate accurate demand forecasts, which helps companies optimize inventory levels and minimize stockouts or overstock situations. Users can predict outcomes by quickly analyzing large-scale, fine-grain data for what-if scenarios in real time, allowing companies to pivot quickly. 
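
To make the what-if framing concrete, the sketch below shows a deliberately simple forecasting baseline (plain exponential smoothing over invented weekly numbers); a generative AI layer would sit on top of a forecast like this, explaining it in natural language and re-running it under alternative scenarios:

```typescript
// Simple exponential smoothing: blend each new observation with the running level.
function forecastNext(demand: number[], alpha = 0.3): number {
  let level = demand[0];
  for (let i = 1; i < demand.length; i++) {
    level = alpha * demand[i] + (1 - alpha) * level;
  }
  return level; // one-step-ahead forecast
}

const weeklyUnits = [120, 135, 128, 150, 160, 155]; // made-up demand history
console.log(forecastNext(weeklyUnits).toFixed(1));  // ≈145.2
```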

The integration of generative AI in supply chain management holds immense promise for businesses seeking to transform their operations. By using generative AI, companies can enhance efficiency, resilience and sustainability while staying ahead in today’s dynamic marketplace.  

Learn more about IBM supply chain AI-infused solutions

The post How generative AI will revolutionize supply chain  appeared first on IBM Blog.


SC Media - Identity and Access

iSharing app vulnerabilities put users' locations at risk

TechCrunch reports that popular phone tracking app iSharing had the exact location details of its more than 35 million users exposed due to vulnerabilities that prevented the app's servers from conducting proper checks of user data access.


Trinsic Podcast: Future of ID

Adrian Field - OneID’s Approach to Driving BankID Adoption in the UK

In this episode we talk with Adrian Field, the Director of Market Development at OneID, which is a bank-based identity verification product focused on the UK market.

We cover a range of topics, including:

- How they apply a revenue-share model to incentivize banks to participate in their ecosystem
- The main use cases they’re focusing on in their go-to-market and the drivers that qualify those as good use cases
- How Adrian sees the user experience evolving with emerging standards like verifiable credentials

Adrian has a very good grasp of the digital ID ecosystem and is generous in sharing his insights after four years working in this space.

You can learn more about OneID on their website: https://oneid.uk/.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast.

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


Ontology

Securing Our Digital Selves

Decentralized Identity in the Age of Wearable Technology

The rise of wearable technology, as detailed by Luis Quintero in his insightful article on The Conversation, presents an exciting yet daunting evolution in how we interact with digital devices. Wearables now extend beyond fitness trackers to include devices capable of monitoring a broad spectrum of physiological data, from heart rates to brain activity. While these devices promise enhanced personal health monitoring and more immersive digital experiences, they also raise significant privacy concerns. This piece aims to explore how decentralized identity (DID) can provide robust solutions to these concerns.

Wearable devices are now becoming a more significant element in this discussion due to their ability to collect continuous data, without the wearer necessarily being aware of it. — Read the full article by Luis Quintero on The Conversation
Continuous and Invasive Data Collection

Quintero adeptly highlights the dual-edged nature of wearable technologies: while they offer personalized health insights, they also pose risks due to the continuous and often non-consensual collection of personal data. This data collection can become invasive, extending into areas we might prefer remained private.

Decentralized Identity Response:
Decentralized identity systems empower users by ensuring they maintain control over their personal data. Through DIDs, users can effectively manage who has access to their data and under what conditions. For instance, they could grant a fitness app access to their workout data without exposing other sensitive health information. This selective sharing mechanism, enforced through blockchain technology, ensures data privacy and security by design.
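
A minimal sketch of that selective-sharing idea, independent of any particular DID library (the DID strings, scope names, and data below are illustrative only):

```typescript
// A holder keeps claims keyed by scope and issues narrow, revocable,
// time-boxed grants per verifier; a verifier sees only what it was granted.
type Scope = "workout" | "heartRate" | "sleep";

interface Grant {
  verifierDid: string; // e.g. "did:example:fitness-app" (hypothetical)
  scopes: Scope[];
  expiresAt: number;   // epoch milliseconds
  revoked: boolean;
}

class HolderWallet {
  private claims = new Map<Scope, unknown>();
  private grants: Grant[] = [];

  setClaim(scope: Scope, value: unknown) {
    this.claims.set(scope, value);
  }

  grant(verifierDid: string, scopes: Scope[], ttlMs: number): Grant {
    const g: Grant = { verifierDid, scopes, expiresAt: Date.now() + ttlMs, revoked: false };
    this.grants.push(g);
    return g;
  }

  revoke(g: Grant) {
    g.revoked = true;
  }

  // Disclose only the scopes covered by a live, unrevoked grant.
  disclose(verifierDid: string): Partial<Record<Scope, unknown>> {
    const out: Partial<Record<Scope, unknown>> = {};
    for (const g of this.grants) {
      if (g.verifierDid !== verifierDid || g.revoked || g.expiresAt < Date.now()) continue;
      for (const s of g.scopes) {
        if (this.claims.has(s)) out[s] = this.claims.get(s);
      }
    }
    return out;
  }
}

// Usage: share workout data with a fitness app while heart-rate data stays private.
const wallet = new HolderWallet();
wallet.setClaim("workout", { steps: 9000 });
wallet.setClaim("heartRate", { resting: 58 });
const g = wallet.grant("did:example:fitness-app", ["workout"], 24 * 60 * 60 * 1000);
console.log(wallet.disclose("did:example:fitness-app")); // { workout: { steps: 9000 } }
wallet.revoke(g); // the user can withdraw access at any time
```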

The Exploitation of Sensitive Data

The potential for exploiting personal data collected by wearables for commercial gain is a pressing issue. Without stringent controls, companies could misuse this data, affecting user privacy and autonomy.

Decentralized Identity Response:
Implementing DIDs can safeguard against such exploitation. By using encryption and blockchain, each user’s data remains securely under their control, accessible only through permissions that they can grant or revoke at any time. This approach not only secures data against unauthorized access but also provides a transparent record of who accesses the data and for what purpose.

Enhanced AI Capabilities and Privacy Risks

As AI integrates more deeply with wearable technologies, the scope for analyzing this data expands, leading to enhanced capabilities but also increased privacy risks.

Decentralized Identity Response:
DIDs can mitigate these risks by enabling the creation of anonymized datasets that AI algorithms can process without accessing directly identifiable information. This allows users to benefit from advanced AI applications in their devices while their identity and personal data remain protected.

Addressing Emerging Technologies

With wearable technologies becoming capable of more deeply intrusive monitoring — such as tracking brain activity or emotional states — the need for robust privacy safeguards becomes even more critical.

Decentralized Identity Response:
The flexibility of DIDs is key here. They allow users to set specific, context-based permissions for data access, which is essential for technologies that monitor highly sensitive physiological and mental data. Users can ensure that their most personal data is shared only when absolutely necessary and only with entities they trust explicitly.

Conclusion: Empowering Users Through Decentralized Identity

The integration of wearable technology into our daily lives must be approached with a strong emphasis on maintaining and enhancing user privacy. Decentralized identity offers a powerful tool to achieve this by putting control back in the hands of users, thus enabling a future where technology serves humanity without compromising individual privacy.

As we move forward, it is crucial for policymakers, technology developers, and consumers to come together to support the adoption of decentralized identity solutions. By fostering an environment where privacy is valued and protected, we can ensure that the advancements in wearable technology will enhance our lives without endangering our personal information.

Join Us in Shaping the Future with Decentralized Identity

Interested in solving the complex privacy challenges such as those posed by wearable technology? We invite you to join Ontology’s $10 million initiative aimed at fostering innovation in decentralized identity. Help us empower users to take control of their data in an increasingly connected world.

Apply to the Fund: If you have ideas or projects that advance decentralized identity solutions, we want to hear from you. Learn more and submit your proposal here.

Securing Our Digital Selves was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Zero Trust Network Access (ZTNA)

by Alejandro Leal

The concept of Zero Trust is based on the assumption that any network is always hostile, and thus, any IT system, application, or user is constantly exposed to potential external and internal threats. This Zero Trust philosophy has become increasingly relevant as organizations grapple with the proliferation of remote work, cloud adoption, and the growing sophistication of cyber threats. Within Zero Trust, the concept of ZTNA (Zero Trust Network Access) plays a central role.

May 29, 2024: Road to EIC: Deepfakes and Misinformation vs Decentralized Identity

Combating deepfakes and misinformation is commonly framed as an arms race, constantly one-upping each other for more realistic attacks and more sophisticated detection. But is this a game that really needs to be played? Rather than escalating competition, is it possible to disarm deepfakes? Decentralized identity is all about establishing a chain of trust for transactions, building the foundation for proving identity and content authenticity. Is it possible that decentralized systems can fundamentally negate the risk that deepfakes pose?

Tokeny Solutions

Globacap and Tokeny Join Forces to Enhance Tokenized Private Asset Distribution

Luxembourg, 25 April 2024 – Capital markets technology firms Tokeny, a leader in tokenization technology for capital markets assets, and Globacap, a world leader in the automation of private markets operational workflow, have partnered to expand the DINO Network and transform the distribution landscape for tokenized private assets.

Private capital markets have witnessed robust growth over the past decade, surpassing global public markets’ expansion by 1.7 times. Tokenization has emerged as a key enabler of accessibility, efficiency, transparency, and liquidity for the private market by providing a unified and programmable infrastructure.

One of the key challenges in tokenized private assets distribution is enforcing post-issuance compliance and ensuring interoperability with distribution platforms. ERC-3643, the token standard for tokenized securities, addresses these issues by restricting token interactions to eligible users while maintaining compatibility with applications supporting ERC-20 tokens.
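
To illustrate that eligibility gate, here is a sketch using ethers.js, under the assumption that, per the ERC-3643 (T-REX) interfaces, the token exposes identityRegistry() and the registry exposes isVerified(address); the RPC URL and addresses are placeholders:

```typescript
import { ethers } from "ethers";

// Minimal human-readable ABIs for the two calls we need.
const TOKEN_ABI = ["function identityRegistry() view returns (address)"];
const REGISTRY_ABI = ["function isVerified(address _userAddress) view returns (bool)"];

async function isEligible(tokenAddr: string, investor: string): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.org"); // placeholder RPC
  const token = new ethers.Contract(tokenAddr, TOKEN_ABI, provider);
  const registryAddr: string = await token.identityRegistry();
  const registry = new ethers.Contract(registryAddr, REGISTRY_ABI, provider);
  // false => transfers to this wallet are blocked by the token's compliance checks
  return registry.isVerified(investor);
}
```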

Globacap is now part of the DINO Network initiative, an interoperable distribution network for digital securities leveraging the ERC-3643 token standard, to expand the reach of its marketplace. Tokeny acts as a connector provider between Globacap and the DINO Network. Combined with Globacap’s workflow automation software, this partnership aims to bring public market-like efficiency to private markets, enabling greater execution capabilities in secondary markets, streamlining workflows, and ensuring robust record integrity.

Globacap digitizes the workflow and execution across the entire private markets journey, from primary raises through vehicle and investor management and the execution and settlement of secondary transactions. Its technology has been used to host over 150 primary placements, digitize over $20bn in investment vehicles, and execute and settle over $600m in secondary transactions of private assets.

Tokeny, with six years of experience in tokenization and €28 billion in assets tokenized, is the initial creator of the ERC-3643 standard, advancing market standardization in tokenization. After having built a robust tokenization engine for financial institutions, Tokeny is now helping the ecosystem to build blockchain-based distribution rails.

“The DINO Network leverages ERC-3643 to enhance liquidity by ensuring compliance and interoperability across platforms. Together with leading platforms like Globacap, we are revolutionizing private markets distribution, making it efficient, transparent, and liquid.” – Luc Falempin, CEO of Tokeny

“Despite the size and importance of private markets, which have over $13 trillion AUM, for years they have lacked the infrastructure and transparency necessary for efficient transactions. Globacap provides rails that accelerate transaction capability in private markets while significantly reducing operational overheads. The combination of our offering with Tokeny is immense and will help to drive private markets innovation and growth forward.” – Myles Milston, Co-founder and CEO of Globacap

Contact

Globacap

Nick Murray-Leslie/Michael Deeny

globacap@chatsworthcommunications.com

Tokeny

Shurong Li

shurong@tokeny.com

About Tokeny

Tokeny provides a compliance infrastructure for digital assets. It allows financial actors operating in private markets to compliantly and seamlessly issue, transfer, and manage securities using distributed ledger technology. By applying trust, compliance, and control on a hyper-efficient infrastructure, Tokeny enables market participants to unlock significant advancements in the management and liquidity of financial instruments. 

About Globacap 

Globacap is a leading capital markets technology firm that digitizes and automates the world’s private capital markets.

It delivers a white-label SaaS solution that brings public markets-like efficiency to private markets. The software’s digital workflows enable financial institutions including securities exchanges, securities firms, private banks, and asset managers to accelerate their private market commercial activity while also driving down operating costs. 

One platform. Next-generation technology. Powerful placement and liquidity management.

The post Globacap and Tokeny Join Forces to Enhance Tokenized Private Asset Distribution appeared first on Tokeny.


Ayan Works

How Does Combining AI and Blockchain Redefine Our Future?

In the rapidly evolving landscape of technology, two groundbreaking technologies are set to reshape our future — blockchain and artificial intelligence (AI). The convergence of these technologies holds immense potential, offering unique possibilities for innovation and transformation across various industries. In this blog, we delve into the transformative partnership of AI and Blockchain, exploring how their collaboration enhances security, transparency, and efficiency across diverse domains.

The Forces of Change: Blockchain and Artificial Intelligence

Blockchain and AI are leading the way in modern-day innovations with their deep impact on different aspects of human life. AI, with its ability to simulate human intelligence, facilitates automation, predictive analysis, and personalized experiences. On the other hand, Blockchain, originally designed for secure and transparent transactions, provides a decentralized and tamper-proof ledger.

The collaboration between AI and Blockchain addresses critical issues such as data security, transparency, and efficiency. The combined market size of these technologies is projected to exceed USD 980.70 million by 2030, reflecting a remarkable CAGR of 24.06% from 2021 to 2030. Businesses can strategically leverage this integration to enhance the security and transparency of AI applications, paving the way for advanced models while ensuring data integrity.

Transforming the AI Ecosystem with Blockchain:

Conversely, Blockchain transforms the AI ecosystem in several ways:

1. Streamlined Transactions: Blockchain ensures immutable, real-time recording of app data, customer details, and financial transactions, fostering faster, secure, and fair transactions. For instance, consider a scenario where a financial institution utilizes Blockchain to record and verify transactions, reducing processing time and enhancing security.

2. Enhanced Data Quality: Blockchain provides decentralized, high-quality data accessible to AI, overcoming challenges related to limited data access and data authentication. A practical example is a healthcare system leveraging Blockchain to securely share patient data across authorized entities, ensuring accurate and reliable information for AI applications.

3. Decentralized Intelligence: Blockchain enables frictionless access to shared, authenticated information, overcoming data centralization issues and enhancing AI system accuracy. An illustration would be a supply chain management system that utilizes Blockchain to ensure real-time sharing of production and logistics data, improving the reliability of AI-driven predictions and optimizations.

4. Enhanced Transparency: Blockchain’s transparency features enhance the accountability of AI systems, allowing businesses to inspect decision-making processes for continuous improvement. For example, a financial institution implementing Blockchain for auditing purposes can enhance transparency in financial transactions, building trust among stakeholders.

5. Trust Establishment: Blockchain establishes a publicly accessible, immutable registry, enhancing trust in AI systems by providing verified real-time information. Imagine a scenario where an e-commerce platform utilizes Blockchain to verify product authenticity, ensuring customers can trust the AI-driven recommendations and purchase decisions.

Benefits of combining AI and blockchain

Combining AI and Blockchain unlocks numerous benefits, including:

1. Improved Security: AI algorithms identify irregularities and prevent fraudulent activities, while Blockchain ensures that data stays unchanged and safe through cryptography.

2. Higher Trustability: Blockchain maintains an unchangeable record of every step in the decision-making process. Once information is added to the blockchain, it cannot be altered or tampered with, boosting public trust in AI systems.

3. AI for Smart Contracts: Smart contracts gain intelligence and flexibility through AI, which enables smart contracts to examine data, recognize patterns, and predict outcomes, leading to enhanced efficiency and better decision-making.

4. Decentralized Decision-Making: Integrating AI algorithms with blockchain enables decentralized decision-making processes, which means that algorithms can access and analyze data from various sources without relying on a central authority. In agriculture, AI models can make more precise predictions by considering diverse weather data stored on a blockchain.

5. Decentralized AI Computing: AI consumes large amounts of computing power, traditionally concentrated in centralized data centers. Blockchain makes it possible to decentralize AI processing, distributing the computational workload across the network so the system can take on more tasks efficiently.
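
To make the immutability point above concrete, here is a minimal, illustrative Python sketch of a hash-chained ledger, the core mechanism behind blockchain's tamper evidence. It is a toy in-memory structure, not a real blockchain, and all names are invented for the example.

```python
# Toy hash-chained ledger: each block commits to the previous block's
# hash, so editing any earlier record breaks verification.
import hashlib
import json


class SimpleLedger:
    """An append-only, hash-chained record list (illustrative only)."""

    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> None:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.blocks.append({"record": record, "prev": prev_hash, "hash": block_hash})

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier block breaks the chain."""
        prev_hash = "0" * 64
        for block in self.blocks:
            payload = json.dumps(block["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if block["prev"] != prev_hash or block["hash"] != expected:
                return False
            prev_hash = block["hash"]
        return True


ledger = SimpleLedger()
ledger.append({"tx": "payment", "amount": 100})
ledger.append({"tx": "payment", "amount": 250})
print(ledger.verify())                        # True
ledger.blocks[0]["record"]["amount"] = 999    # tamper with history
print(ledger.verify())                        # False: the change is detected
```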

The transformative potential of AI and blockchain extends across industries:

1. Healthcare: Blockchain-based AI enhances patient care by securely exchanging and storing medical information, enabling efficient trend analysis and personalized treatments.

2. Supply Chain: Blockchain-based AI improves supply chain management, providing real-time insights that enhance efficiency, reduce counterfeiting, and improve traceability. By pairing smart contracts with predictive analytics, companies can analyze historical data and forecast demand with AI, while smart contracts on the blockchain automatically adjust inventory levels, trigger reorders, and optimize distribution.

3. Banking and Finance: The collaboration of blockchain and AI is transforming financial services, bringing efficiency, security, and transparency to transactions. Blockchain enables trustworthy smart contracts, while AI automates tasks such as sentiment analysis and behavioral prediction that once required human judgment, improving automation and performance.

4. Life Sciences: Blockchain accelerates drug development by securely tracking medications, while AI algorithms analyze data to expedite research, enhance clinical trials, and ensure drug safety.

Example: Secure Medical Data Exchange with Blockchain and AI

Imagine a scenario where a patient undergoes treatment at a hospital and generates a vast amount of medical data, including test results, treatment history, and diagnostic images. Traditionally, accessing and sharing this sensitive information across healthcare providers, researchers, and insurance companies posed significant challenges due to privacy concerns and data security risks.

Blockchain-enabled Data Sharing: Patient records are stored securely on a blockchain, ensuring tamper-proof and transparent access. Each transaction, like adding a new record or granting access, is recorded for traceability (a simplified sketch of this consent-and-audit flow follows this example).

AI-driven Data Analysis: AI algorithms analyze this data to detect patterns and support clinical decisions. For instance, AI can predict health outcomes or personalize treatment plans based on patient history.

Enhanced Patient Care: This integration allows for personalized care delivery. For example, AI flags potential health risks, enabling proactive interventions and improving treatment outcomes.

Research Facilitation: Researchers access anonymized data on the blockchain, accelerating medical research and innovation.
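
As a rough illustration of how such consent-gated sharing could look in code, the following Python sketch logs every grant and read in an append-only audit trail. The function names and record fields are hypothetical; a production system would anchor the log on an actual ledger (like the one sketched earlier) and encrypt the records.

```python
# Hypothetical consent registry with an append-only audit trail.
import hashlib
from datetime import datetime, timezone

access_log = []   # append-only audit trail
grants = set()    # (patient_id, requester_id) pairs with consent on record

def grant_access(patient_id: str, requester_id: str) -> None:
    grants.add((patient_id, requester_id))
    access_log.append({
        "event": "grant",
        "patient": patient_id,
        "requester": requester_id,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def read_record(patient_id: str, requester_id: str, record: bytes) -> str:
    if (patient_id, requester_id) not in grants:
        raise PermissionError("no consent on record")
    access_log.append({
        "event": "read",
        "patient": patient_id,
        "requester": requester_id,
        # store only a fingerprint of what was read, not the data itself
        "record_sha256": hashlib.sha256(record).hexdigest(),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return record.decode()

grant_access("patient-42", "research-lab-7")
print(read_record("patient-42", "research-lab-7", b"hba1c: 5.6%"))
```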

In Conclusion:

The collaboration between blockchain and artificial intelligence (AI) is reshaping our future. This partnership enhances security, transparency, and efficiency across industries. The streamlined transactions, decentralized intelligence, and improved data quality provided by blockchain transform the AI ecosystem. This integration not only benefits sectors like healthcare, supply chain, and finance but also introduces novel concepts like decentralized decision-making and AI for smart contracts.

As a leading blockchain development company, we support businesses on their transformative journey. Our team provides customized solutions that match your unique needs. Together, let's explore tomorrow's technology, where possibilities are boundless and positive change is inevitable.

For further updates and inquiries, reach out to AYANWORKS and stay tuned for the latest advancements in this exciting field.

FAQ:

1. Why is the combination of AI and Blockchain beneficial for businesses?

The integration of AI and Blockchain offers several advantages for businesses, enhancing security, transparency, and efficiency across industries. The streamlined transactions, decentralized intelligence, and improved data quality provided by blockchain transform the AI ecosystem.

2. How does Blockchain improve data security in AI applications?

Blockchain ensures the secure, real-time recording of data, thereby enhancing AI security. The data remains immutable, as blockchain guarantees its integrity through cryptographic methods.

3. How does Blockchain improve trust in AI systems?

Blockchain establishes a secure, immutable record of data, which builds trust in AI systems. It ensures that the data used by AI is transparent and unchangeable, allowing for open verification.


Ocean Protocol

DF86 Completes and DF87 Launches

Predictoor DF86 rewards available. Passive DF & Volume DF will be retired; airdrop pending. DF87 runs Apr 25 — May 2, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions via Predictoor.

Ocean Protocol is joining with Fetch and SingularityNET to form the Superintelligence Alliance, with a unified token $ASI. This Mar 27, 2024 article describes the key mechanisms. This merge was pending a “yes” vote from the Fetch and SingularityNET communities. As of Apr 16, 2024: it was a “yes” from both; therefore the merge is happening.
The merge has important implications for veOCEAN and Data Farming. veOCEAN will be retired. Passive DF & Volume DF rewards have stopped, and will be retired.

Each address holding veOCEAN will be airdropped OCEAN in the amount of: (1.25^years_til_unlock - 1) * num_OCEAN_locked. This airdrop will happen within weeks after the “yes” vote. The value num_OCEAN_locked is a snapshot of OCEAN locked & veOCEAN balances as of 00:00 am UTC Wed Mar 27 (Ethereum block 19522003). The article “Superintelligence Alliance Updates to Data Farming and veOCEAN” elaborates.
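
For readers who want to sanity-check the airdrop formula quoted above, here is a small Python helper that evaluates it; the inputs in the example are made up for illustration.

```python
# Airdrop per the quoted formula:
#   airdrop = (1.25 ** years_til_unlock - 1) * num_OCEAN_locked
def airdrop_amount(years_til_unlock: float, num_ocean_locked: float) -> float:
    return (1.25 ** years_til_unlock - 1) * num_ocean_locked

# e.g. a hypothetical 10,000 OCEAN locked for 2 more years:
print(airdrop_amount(2.0, 10_000))   # 5625.0 OCEAN
```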

Data Farming Round 86 (DF86) has completed. Passive DF & Volume DF rewards are stopped, and will be retired. Predictoor DF claims run continuously.

DF87 is live today, April 25. It concludes on May 2. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF87 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF87

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF86 Completes and DF87 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

BlackRock’s Influence and the Future of MMFs

April 2024

In the world of finance, innovation acceleration often requires the endorsement of industry giants. BlackRock’s embrace of Tokenized Money Market Funds (MMFs) represents a monumental milestone towards the widespread adoption of tokenized securities. This drives financial institutions to kick off the tokenization of real use cases, fueled by a touch of FOMO (Fear of Missing Out).

By leveraging public blockchains, BlackRock not only demonstrates the viability of blockchain technology in finance but also sets the stage for a transformative shift towards decentralized and open financial solutions. This instills greater confidence in institutions to embrace public blockchains.

Furthermore, BlackRock’s BUIDL fund successfully attracted $245 million in its first week of operations, underscoring the robust appetite from the buy side. This success also indicates the appeal of the 24/7/365 availability, a compelling feature for tokenized forms of highly liquid assets like MMFs. For instance, Ondo Finance’s OUSG (Ondo Short-Term US Government Treasuries) token, previously limited to traditional market hours with a T+2 subscription and redemption time, now allows instant settlement by moving $95 million of assets to BlackRock’s BUIDL.

In addition, prominent players in the web3 space are starting to create solutions to support tokenized funds. For example, Circle’s latest smart contract feature allows BUIDL holders to exchange shares for its stablecoin USDC, enabling effortless 24/7 transfers on the secondary market.

Nevertheless, BlackRock initially utilized only one specific marketplace for tokenized MMFs distribution, whereas the future vision for tokenized MMFs and other securities extends far beyond singular centralized platforms. The next frontier is broader distribution across diverse trading platforms and DeFi protocols. As a result, tokenized MMFs can be distributed to any distributor platform and serve as collateral for lending smart contracts or liquidity pool deposits within automated market makers, unlocking accessibility, utility, and ultimately liquidity.

Enabling this expansion requires advanced smart contracts and robust token standards such as ERC-3643 to ensure compliance at the token level. Excitingly, the ERC-3643 standard has gained significant traction through the push of the community-formed non-profit association. For several years at Tokeny, and now with the association, we’ve had the privilege of presenting this standard to several regulators, including the SEC (US), CSSF (Luxembourg), BaFin (Germany), DFSA (Dubai), FSRA (Abu Dhabi), and MAS (Singapore). The framework’s ability to uphold existing securities laws is increasingly recognized globally.

With the market readiness and industry-wide recognition of the standard, top-tier institutions are now approaching us for assistance in tokenizing MMFs. Last month, we announced our partnership with Moreliquid to tokenize the HSBC Euro Liquidity Fund using ERC-3643. This is just the beginning, and we’re excited to share major announcements with the market very soon. Stay tuned for more updates!

Tokeny Spotlight

PARTNERSHIP

We integrated Telos, greatly enhancing our EVM multi-chain capabilities.


FEATURE

CEO Luc Falempin was recently featured in The Big Whale report.


TALENT

Introducing Fedor Bolotnov, our QA Engineer, who shares his story.


EVENT

We went to Australia to speak at an event co-hosted with SILC.


PARTNERSHIP

The SILC Group Partners with Tokeny to Pilot Alternative Assets Through Tokenization.


PRODUCT NEWSLETTER

Introducing Leandexer: Simplifying Blockchain Data Interaction.

Tokeny Events

AIM Congress 
May 7th–9th, 2024 | 🇦🇪 Dubai


Digital Assets Week California
May 21st–22nd, 2024 | 🇺🇸 USA


Consensus
May 29th–31st, 2024 | 🇺🇸 USA

ERC3643 Association Recap

Webinar: Diamonds on the Blockchain

In this insightful webinar, we delved into the world of diamond fund tokenization, exploring its benefits and the underlying technology, including public blockchains and the ERC-3643 standard.

Watch Here

Feature: Fund Tokenization Report

ERC-3643 is highlighted in the Fund Tokenization Report published by The Investment Association in collaboration with the Financial Conduct Authority and HM Treasury.

Read the Report

Coinbase covered ERC-3643 Use Case

Diamonds Arrive on a Blockchain With New Tokenized Fund on Avalanche Network, using ERC-3643.

Read More

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.



The post BlackRock’s Influence and the Future of MMFs appeared first on Tokeny.


PingTalk

Authorized Push Payment and Social Engineering: How to Fight Back | Ping Identity


When fraud occurs as a result of scams and social engineering, organizations can struggle to stop it. This is because when legitimate customers fall prey to online imposter scams—for instance, in the case of authorized push payment fraud (APP)—the impact from losses can snowball, affecting not just the customer, but the organization at which the fraud took place.

In fact, according to the FTC, American consumers reported losing over $2.3B to imposter scams in 2021. Meanwhile, across the pond, UK Finance reported that losses due to authorized push payment fraud rose by 71% in the first half of 2021 in the UK; the same report states that the amount of money stolen through this type of scam even overtook card fraud losses.

Ultimately, financial institutions need to find ways to effectively combat these massive losses that can come from APP fraud before they’re left footing the bill.

Let's start with a couple of definitions.

Wednesday, 24. April 2024

HYPR

Best Practices to Strengthen VPN Security


Virtual private networks (VPNs) form a staple of the modern work environment. VPNs provide an essential layer of protection for employees working remotely or across multiple office locations, encrypting data traffic to stop hackers from intercepting and stealing information. Usage of VPNs skyrocketed in the wake of the COVID-19 pandemic and remains high — 77% of employees use VPN for their work nearly every day, according to the 2023 VPN Risk Report by Zscaler.

Their widespread popularity has put VPNs squarely in the crosshairs of malicious actors. The recent ransomware attack on UnitedHealth Group, which disrupted payments to U.S. doctors and healthcare facilities nationwide for a month, has now been linked to compromised credentials on a remote system access application. This follows on the heels of a large-scale brute force attack campaign against multiple remote access VPN services, reported by Cisco Talos.

Unfortunately, these attacks are not an anomaly. The VPN Risk Report found that 45% of organizations experienced at least one attack that exploited VPN vulnerabilities in the last 12 months. So what can organizations do to protect this vulnerable gateway? Here we’ll cover the top VPN security best practices every organization should follow.

How Does a VPN Work?

A VPN creates an encrypted connection between a user’s device and the organization’s network via the internet. Using a VPN, companies can grant remote employees access to internal applications and systems, or establish a unified network across multiple office sites or environments.

An employee typically initiates a VPN connection through a client application installed on their device, connecting to a VPN server hosted within the organization. This connection creates a secure "tunnel" that encrypts all data transmitted between the employee's device and the corporate network. With the VPN connection established, the user's device is virtually part of the organization's internal network and the employee can access internal applications, databases, file shares, and other resources that are typically only accessible within the corporate network. Authentication between VPN clients and servers occurs through the exchange of digital certificates and credentials, with multi-factor authentication (MFA) a means to provide an additional layer of security.

While VPNs provide some measure of remote access security, they also make a soft target for attackers. Moreover, VPN attacks pose an outsized risk: once attackers gain entry through a VPN, they often get direct access to a broad swath of an organization's networks and data.

Common VPN Attack Vectors

Before we explore VPN security best practices, it’s important to understand how attackers exploit system vulnerabilities to gain access.

Authentication-Related Attacks

Attacks on VPNs often revolve around authentication and credential-related weaknesses. These include:

Credential Theft and Brute Force Attacks: Attackers target VPN credentials through phishing, keylogging malware, or brute force techniques to gain unauthorized access.

Session Hijacking: Hijacking active VPN sessions by stealing session cookies or exploiting session management vulnerabilities allows attackers to impersonate users and access VPN-protected resources.

Man-in-the-Middle (MitM) Attacks: Exploiting weak authentication or compromised certificates, attackers intercept and manipulate VPN traffic to eavesdrop or modify data.

Vulnerability Exploits

Security flaws in VPN solutions themselves are another common route of attack. A recent analysis by Securin showed that the number of vulnerabilities discovered in VPN products increased 875% between 2020 and 2024. Hackers exploit vulnerabilities in VPN client software or server-side VPN components to gain unauthorized access to VPN endpoints. This can lead to complete compromise of the endpoint or enable attackers to intercept VPN traffic. In fact, the Cybersecurity and Infrastructure Security Agency (CISA) itself was recently breached by hackers exploiting vulnerabilities in the agency’s VPN systems.

Five VPN Security Best Practices

With the growing assault on VPNs, organizations must adopt proactive security strategies to protect this major point of vulnerability. The following measures are recommended best practices to strengthen your VPN security posture.

Regularly Update VPN Software and Components

Patch management is crucial for maintaining a secure VPN infrastructure. Regularly update VPN client software, servers, gateways, and routers with the latest security patches and firmware to mitigate vulnerabilities and defend against emerging threats. Establish procedures for emergency patching to promptly address critical vulnerabilities and ensure the ongoing security of your VPN environment.

Deploy Multi-Factor Authentication

Since authentication is one of the primary avenues of attack on VPNs, strong authentication protocols are critical. The remote access application that was breached in the UHG attack lacked multi-factor authentication controls. Massive leaks of stolen credentials, and crude but effective techniques such as password spraying and credential stuffing, make it trivial for attackers to gain entry to a VPN when only a username and password stand in the way. Organizations should deploy, at the very least, multi-factor authentication (MFA). MFA challenges users to provide something they own (OTP, device, security key) or something they are (face scan, fingerprint) in addition to or instead of something they know (password, PIN). A minimal sketch of a one-time-passcode check follows.
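
As a concrete illustration of the OTP factor, the sketch below implements a standard time-based one-time password (TOTP, RFC 6238) check using only the Python standard library. It is a minimal demonstration, not HYPR's product or a production-ready verifier, and the demo secret is invented.

```python
# Minimal RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to a 6-digit code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

secret = base64.b32encode(b"shared-demo-key!").decode()   # demo secret only
print(totp(secret))   # the 6-digit code a user's authenticator app would show
```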

Make It Phishing-Resistant MFA

VPN security is vastly improved by using passwordless authentication methods that completely remove shared secrets. This makes it impossible for attackers to guess or steal authentication factors and much harder to spoof identity. Specifically, passwordless authentication based on FIDO standards provides robust defense against phishing, man-in-the-middle (MitM) attacks and hacking attempts by eliminating insecure methods like SMS or OTPs. Moreover, since it’s based on public-key cryptography, it ensures there are no server-side shared secrets vulnerable to theft in case of a breach.
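
The following Python sketch illustrates the challenge/response idea that underpins FIDO-style passwordless login: the server stores only a public key, so there is no shared secret to phish or leak. It uses the third-party cryptography package and raw Ed25519 signatures as a simplified stand-in for a full WebAuthn flow, which a real deployment would use instead.

```python
# FIDO-style challenge/response, heavily simplified.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair; only the public
# key is stored server-side, so there is no server-side shared secret.
device_key = Ed25519PrivateKey.generate()
server_stored_pubkey = device_key.public_key()

# Login: the server sends a random challenge, the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    server_stored_pubkey.verify(signature, challenge)   # raises if forged
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```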

Implement Access Control and Least Privilege

Apply granular access control policies to restrict VPN access based on user roles, groups, or individual permissions. Ensure that users have access only to the resources necessary for their job functions (principle of least privilege), reducing the impact of potential insider threats or compromised credentials.

Regularly Monitor and Audit VPN Traffic

Enable logging and monitoring of VPN traffic to detect suspicious activities, anomalies, or potential security incidents. Regularly review VPN logs and conduct security audits to identify unauthorized access attempts, unusual patterns, or compliance deviations. This proactive approach helps maintain visibility into VPN usage and ensures prompt response to security incidents.

Leverage known indicators of compromise (IOCs) shared with the community by other vendors. VPN-oriented IOCs usually contain source IPs and hosting providers, which you can block.

Monitor logs for changes in employee login behavior, such as logins from locations outside the norm for your business, login attempts outside regular business hours, and attempts with invalid username and password combinations. A simple sketch of such checks follows.
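
A minimal sketch of such log monitoring, assuming a simple JSON-like event format; the field names, allowed countries, and business-hours window here are invented for illustration, not any particular SIEM's schema.

```python
# Flag VPN logins from unusual countries, outside business hours,
# or with failed credentials.
from datetime import datetime

ALLOWED_COUNTRIES = {"US", "GB"}      # hypothetical baseline for this business
BUSINESS_HOURS = range(7, 20)         # 07:00-19:59 local

def flag_vpn_login(event: dict) -> list[str]:
    alerts = []
    if event["country"] not in ALLOWED_COUNTRIES:
        alerts.append(f"login from unusual location: {event['country']}")
    hour = datetime.fromisoformat(event["timestamp"]).hour
    if hour not in BUSINESS_HOURS:
        alerts.append(f"login outside business hours: {hour:02d}:00")
    if not event.get("auth_success", True):
        alerts.append("failed credential attempt")
    return alerts

print(flag_vpn_login({
    "user": "jdoe",
    "country": "RO",
    "timestamp": "2024-04-24T03:12:45",
    "auth_success": False,
}))
```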

Strengthen Your VPN Security With HYPR

Despite the security concerns, VPNs are not going away any time soon. Adhering to VPN security best practices mitigates the technology’s vulnerabilities to safeguard your employees, systems and data. The most essential defense step is to deploy strong authentication systems. And the most robust systems completely remove passwords and all shared secrets from their VPN authentication.

HYPR’s leading passwordless MFA solution allows your workers to securely log into remote access systems, including VPNs, with a friction-free user experience. To find out how HYPR helps secure your networks and users against attacks targeting your VPN, get in touch with our team.


KuppingerCole

Data Governance Is So Much More Than Good Housekeeping


We hear much about AI these days, but AI would be of little use without data to power it. Without good data input, the results from AI will disappoint. All businesses generate valuable data from day-to-day activities, much of it unstructured, meaning it resides inside office documents, PDFs, email inboxes and social media. This type of data already outstrips the traditional structured data found in database applications, but most organizations have little idea of what this data contains or where it lies. Structured data tends to be already ordered and listed, but even here organizations can benefit from governance to stratify it further. Data Governance platforms exist to do this and to make sense of unstructured data too. The best Data Governance platforms will not just make sense of data but reveal whether privacy laws are being violated or data has passed the point of being useful. But the real value comes when you apply Data Governance to your business goals.

Paul Fisher is a Lead Analyst at KuppingerCole and author of the Leadership Compass report on Data Governance platforms. In this webinar he will explain the growing importance of data governance and how it will assist both business AI and identity tools in the future. Paul will show that this market is characterized by continuous technological advancements and evolving regulatory landscapes. He will take you through some of the key findings of the Leadership Compass, how you can use it to find a Data Governance solution that fits your business needs, and how a Data Governance Architecture Blueprint can help maintain data integrity and create business value from any Data Governance platform.




IBM Blockchain

Data privacy examples

Discover the data privacy principles, regulations and risks that may impact your organization. The post Data privacy examples appeared first on IBM Blog.

An online retailer always gets users’ explicit consent before sharing customer data with its partners. A navigation app anonymizes activity data before analyzing it for travel trends. A school asks parents to verify their identities before giving out student information.

These are just some examples of how organizations support data privacy, the principle that people should have control of their personal data, including who can see it, who can collect it, and how it can be used.

One cannot overstate the importance of data privacy for businesses today. Far-reaching regulations like Europe’s GDPR levy steep fines on organizations that fail to safeguard sensitive information. Privacy breaches, whether caused by malicious hackers or employee negligence, can destroy a company’s reputation and revenues. Meanwhile, businesses that prioritize information privacy can build trust with consumers and gain an edge over less privacy-conscious competitors. 

Yet many organizations struggle with privacy protections despite the best intentions. Data privacy is more of an art than a science, a matter of balancing legal obligations, user rights, and cybersecurity requirements without stymying the business’s ability to get value from the data it collects. 

An example of data privacy in action

Consider a budgeting app that people use to track spending and other sensitive financial information. When a user signs up, the app displays a privacy notice that clearly explains the data it collects and how it uses that data. The user can accept or reject each use of their data individually. 

For example, they can decline to have their data shared with third parties while allowing the app to generate personalized offers. 

The app heavily encrypts all user financial data. Only administrators can access customer data on the backend. Even then, the admins can only use the data to help customers troubleshoot account issues, and only with the user’s explicit permission.

This example illustrates three core components of common data privacy frameworks:

Complying with regulatory requirements: By letting users granularly control how their data is processed, the app complies with consent rules imposed by laws like the California Consumer Privacy Act (CCPA).

Deploying privacy protections: The app uses encryption to protect data from cybercriminals and other prying eyes. Even if the data is stolen in a cyberattack, hackers can’t use it.

Mitigating privacy risks: The app limits data access to trusted employees who need it for their roles, and employees can access data only when they have a legitimate reason to. These access controls reduce the chances that the data is used for unauthorized or illegal purposes.

Learn how organizations can use IBM Guardium® Data Protection software to monitor data wherever it is and enforce security policies in near real time.

Examples of data privacy laws

Compliance with relevant regulations is the foundation of many data privacy efforts. While data protection laws vary, they generally define the responsibilities of organizations that collect personal data and the rights of the data subjects who own that data.

Learn how IBM OpenPages Data Privacy Management can improve compliance accuracy and reduce audit time.

The General Data Protection Regulation (GDPR)

The GDPR is a European Union privacy regulation that governs how organizations in and outside of Europe handle the personal data of EU residents. In addition to being perhaps the most comprehensive privacy law, it is among the strictest. Penalties for noncompliance can reach up to EUR 20,000,000 or 4% of the organization’s worldwide revenue in the previous year, whichever is higher.

The UK Data Protection Act 2018

The Data Protection Act 2018 is, essentially, the UK’s version of the GDPR. It replaces an earlier data protection law and implements many of the same rights, requirements, and penalties as its EU counterpart. 

The Personal Information Protection and Electronic Documents Act (PIPEDA)

Canada’s PIPEDA governs how private-sector businesses collect and use consumer data. PIPEDA grants data subjects a significant amount of control over their data, but it applies only to data used for commercial purposes. Data used for other purposes, like journalism or research, is exempt.

US data protection laws

Many individual US states have their own data privacy laws. The most prominent of these is the California Consumer Privacy Act (CCPA), which applies to virtually any organization with a website because of the way it defines the act of “doing business in California.” 

The CCPA empowers Californians to prevent the sale of their data and have it deleted at their request, among other rights. Organizations face fines of up to USD 7,500 per violation. The price tag can add up quickly. If a business were to sell user data without consent, each record it sells would count as one violation. 
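
As a toy calculation of how that price tag adds up, assuming the statutory maximum were applied to every record sold; the record count below is invented for illustration.

```python
# Per-record exposure under the CCPA's per-violation cap.
CCPA_MAX_FINE_PER_VIOLATION = 7_500   # USD
records_sold_without_consent = 10_000
print(records_sold_without_consent * CCPA_MAX_FINE_PER_VIOLATION)  # 75,000,000 USD
```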

The US has no broad data privacy regulations at a national level, but it does have some more targeted laws. 

Under the Children’s Online Privacy Protection Act (COPPA), organizations must obtain a parent’s permission before collecting and processing data from anyone under 13. Rules for handling children’s data might become even stricter if the Kids Online Safety Act (KOSA), currently under consideration in the US Senate, becomes law. KOSA would require online services to default to the highest privacy settings for users under 18.

The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that deals with how healthcare providers, insurance companies, and other businesses safeguard personal health information. 

The Payment Card Industry Data Security Standard (PCI DSS)

The Payment Card Industry Data Security Standard (PCI DSS) is not a law, but a set of standards developed by a consortium of credit card companies, including Visa and American Express. These standards outline how businesses must protect customers’ payment card data.

While the PCI DSS isn’t a legal requirement, credit card companies and financial institutions can fine businesses that fail to comply or even prohibit them from processing payment cards.

Examples of data privacy principles and practices

Privacy compliance is only the beginning. While following the law can help avoid penalties, it may not be enough to fully protect personally identifiable information (PII) and other sensitive data from hackers, misuse, and other privacy threats.

Some common principles and practices organizations use to bolster data privacy include:

Data visibility

For effective data governance, an organization needs to know the types of data it has, where the data resides, and how it is used. 

Some kinds of data, like biometrics and social security numbers, require stronger protections than others. Knowing how data moves through the network helps track usage, detect suspicious activity, and put security measures in the right places. 

Finally, full data visibility makes it easier to comply with data subjects’ requests to access, update, or delete their information. If the organization doesn’t have a complete inventory of data, it might unintentionally leave some user records behind after a deletion request. 

Example

A digital retailer catalogs all the different kinds of customer data it holds, like names, email addresses, and saved payment information. It maps how each type of data moves between systems and devices, who has access to it (including employees and third parties), and how it is used. Finally, the retailer classifies data based on sensitivity levels and applies appropriate controls to each type. The company conducts regular audits to keep the data inventory up to date.

User control

Organizations can limit privacy risks by granting users as much control over data collection and processing as possible. If a business always gets a user’s consent before doing anything with their data, it’s hard for the company to violate anyone’s privacy.

That said, organizations must sometimes process someone’s data without their consent. In those instances, the company should make sure that it has a valid legal reason to do so, like a newspaper reporting on crimes that perpetrators would rather conceal.

Example

A social media site creates a self-service data management portal. Users can download all the data they share with the site, update or delete their data, and decide how the site can process their information.

Data limitation

It can be tempting to cast a wide net, but the more personal data a company collects, the more exposed it is to privacy risks. Instead, organizations can adopt the principle of limitation: identify a specific purpose for data collection and collect the minimum amount of data needed to fulfill that purpose. 

Retention policies should also be limited. The organization should dispose of data as soon as its specific purpose is fulfilled.

Example

A public health agency is investigating the spread of an illness in a particular neighborhood. The agency does not collect any PII from the households it surveys. It records only whether anyone is sick. When the survey is complete and infection rates determined, the agency deletes the data. 

Transparency

Organizations should keep users updated about everything they do with their data, including anything their third-party partners do.

Example

A bank sends annual privacy notices to all of its customers. These notices outline all the data that the bank collects from account holders, how it uses that data for things like regulatory compliance and credit decisions, and how long it retains the data. The bank also alerts account holders to any changes to its privacy policy as soon as they are made.

Access control

Strict access control measures can help prevent unauthorized access and use. Only people who need the data for legitimate reasons should have access to it. Organizations should use multi-factor authentication (MFA) or other strong measures to verify users’ identities before granting access to data. Identity and access management (IAM) solutions can help enforce granular access control policies across the organization.

Example

A technology company uses role-based access control policies to assign access privileges based on employees’ roles. People can access only the data that they need to carry out core job responsibilities, and they can only use it in approved ways. For example, the head of HR can see employee records, but they can’t see customer records. Customer service representatives can see customer accounts, but they can’t see customers’ saved payment data. 
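
A minimal sketch of the role-based checks in this example; the roles, resources, and permission table are invented for illustration, and a real organization would enforce this through an IAM product rather than a hard-coded table.

```python
# Toy role-based access control: each role maps to the resources it may read.
ROLE_PERMISSIONS = {
    "hr_lead":       {"employee_records"},
    "support_agent": {"customer_accounts"},                  # note: no payment data
    "billing_admin": {"customer_accounts", "payment_data"},
}

def can_access(role: str, resource: str) -> bool:
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("hr_lead", "employee_records"))    # True
print(can_access("hr_lead", "customer_accounts"))   # False
print(can_access("support_agent", "payment_data"))  # False
```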

Data security measures

Organizations must use a combination of tools and tactics to protect data at rest, in transit, and in use. 

Example

A healthcare provider encrypts patient data storage and uses an intrusion detection system to monitor all traffic to the database. It uses a data loss prevention (DLP) tool to track how data moves and how it is used. If it detects illicit activity, like an employee account moving patient data to an unknown device, the DLP raises an alarm and cuts the connection.

Privacy impact assessments

Privacy impact assessments (PIAs) determine how much risk a particular activity poses to user privacy. PIAs identify how data processing might harm user privacy and how to prevent or mitigate those privacy concerns.

Example

A marketing firm always conducts a PIA before every new market research project. The firm uses this opportunity to clearly define processing activities and close any data security gaps. This way, the data is only used for a specific purpose and protected at every step. If the firm identifies serious risks it can’t reasonably mitigate, it retools or cancels the research project. 

Data privacy by design and by default

Data privacy by design and by default is the philosophy that privacy should be a core component of everything the organization does—every product it builds and every process it follows. The default setting for any system should be the most privacy-friendly one.

Example

When users sign up for a fitness app, the app’s privacy settings automatically default to “don’t share my data with third parties.” Users must change their settings manually to allow the organization to sell their data. 

Examples of data privacy violations and risks

Complying with data protection laws and adopting privacy practices can help organizations avoid many of the biggest privacy risks. Still, it is worth surveying some of the most common causes and contributing factors of privacy violations so that companies know what to look out for.

Lack of network visibility

When organizations don’t have complete visibility of their networks, privacy violations can flourish in the gaps. Employees might move sensitive data to unprotected shadow IT assets. They might regularly use personal data without the subject’s permission because supervisors lack the oversight to spot and correct the behavior. Cybercriminals can sneak around the network undetected.

As corporate networks grow more complex—mixing on-premises assets, remote workers, and cloud services—it becomes harder to track data throughout the IT ecosystem. Organizations can use tools like attack surface management solutions and data protection platforms to help streamline the process and secure data wherever it resides.

Learn how IBM data privacy solutions implement key privacy principles like user consent management and comprehensive data governance.

AI and automation

Some regulations set special rules for automated processing. For example, the GDPR gives people the right to contest decisions made through automated data processing.

The rise of generative artificial intelligence can pose even thornier privacy problems. Organizations cannot necessarily control what these platforms do with the data they put in. Feeding customer data to a platform like ChatGPT might help garner audience insights, but the AI may incorporate that data into its training models. If data subjects didn’t consent to have their PII used to train an AI, this constitutes a privacy violation. 

Organizations should clearly explain to users how they process their data, including any AI processing, and obtain subjects’ consent. However, even the organization may not know everything the AI does with its data. For that reason, businesses should consider working with AI apps that let them retain the most control over their data. 

Overprovisioned accounts

Stolen accounts are a prime vector for data breaches, according to the IBM Cost of a Data Breach report. Organizations tempt fate when they give users more privileges than they need. The more access permissions that a user has, the more damage a hacker can do by hijacking their account.

Organizations should follow the principle of least privilege. Users should have only the minimum amount of privilege they need to do their jobs. 

Human error

Employees can accidentally violate user privacy if they are unaware of the organization’s policies and compliance requirements. They can also put the company at risk by failing to practice good privacy habits in their personal lives. 

For example, if employees overshare on their personal social media accounts, cybercriminals can use this information to craft convincing spear phishing and business email compromise attacks.

Data sharing

Sharing user data with third parties isn’t automatically a privacy violation, but it can increase the risk. The more people who have access to data, the more avenues there are for hackers, insider threats, or even employee negligence to cause problems.

Moreover, unscrupulous third parties might use a company’s data for their own unauthorized purposes, processing data without subject consent. 

Organizations should ensure that all data-sharing arrangements are governed by legally binding contracts that hold all parties responsible for the proper protection and use of customer data. 

Malicious hackers 

PII is a major target for cybercriminals, who can use it to commit identity theft, steal money, or sell it on the black market. Data security measures like encryption and DLP tools are as much about safeguarding user privacy as they are about protecting the company’s network.

Data privacy fundamentals

Privacy regulations are tightening worldwide, the average organization’s attack surface is expanding, and rapid advancements in AI are changing the way data is consumed and shared. In this environment, an organization’s data privacy strategy can be a preeminent differentiator that strengthens its security posture and sets it apart from the competition.

Take, for instance, technology like encryption and identity and access management (IAM) tools. These solutions can help lessen the financial blow of a successful data breach, saving organizations upwards of USD 572,000 according to the Cost of a Data Breach report. Beyond that, sound data privacy practices can foster trust with consumers and even build brand loyalty.

As data protection becomes ever more vital to business security and success, organizations must count data privacy principles, regulations, and risk mitigation among their top priorities.

Explore Guardium Data Protection

The post Data privacy examples appeared first on IBM Blog.


SC Media - Identity and Access

CoralRaider leverages CDN cache domains in new infostealer campaign

A new CryptBot variant targets password managers and authentication apps in the new campaign.



IBM Blockchain

Business process reengineering (BPR) examples

Explore some key use cases and customer stories in this blog about business process reengineering (BPR) examples. The post Business process reengineering (BPR) examples appeared first on IBM Blog.

Business process reengineering (BPR) is the radical redesign of core business processes to achieve dramatic improvements in performance, efficiency and effectiveness. BPR examples are not one-time projects, but rather examples of a continuous journey of innovation and change focused on optimizing end-to-end processes and eliminating redundancies. The purpose of BPR is to streamline workflows, eliminate unnecessary steps and improve resource utilization.

BPR involves business process redesign that challenges norms and methods within an organization. It typically focuses on achieving dramatic, transformative changes to existing processes. It should not be confused with business process management (BPM), a more incremental approach to optimizing processes, or business process improvement (BPI), a broader term that encompasses any systematic effort to improve current processes. This blog outlines some BPR examples that benefit from a BPM methodology.

Background of business process reengineering

BPR emerged in the early 1990s as a management approach aimed at radically redesigning business operations to achieve business transformation. The methodology gained prominence with the publication of a 1990 article in the Harvard Business Review, “Reengineering Work: Don’t Automate, Obliterate,” by Michael Hammer, and the 1993 book by Hammer and James Champy, Reengineering the Corporation. An early case study of BPR was Ford Motor Company, which successfully implemented reengineering efforts in the 1990s to streamline its manufacturing processes and improve competitiveness.

Organizations of all sizes and industries implement business process reengineering. Step 1 is to define the goals of BPR, and subsequent steps include assessing the current state, identifying gaps and opportunities, and process mapping.

Successful implementation of BPR requires strong leadership, effective change management and a commitment to continuous improvement. Leaders, senior management, team members and stakeholders must champion the BPR initiative and provide the necessary resources, support and direction to enable new processes and meaningful change.

BPR examples: Use cases

Streamlining supply chain management

Using BPR for supply chain optimization involves a meticulous reassessment and redesign of every step, including logistics, inventory management and procurement. A comprehensive supply chain overhaul might involve rethinking procurement strategies, implementing just-in-time inventory systems, optimizing production schedules or redesigning transportation and distribution networks. Technologies such as supply chain management software (SCM), enterprise resource planning (ERP) systems, and advanced analytics tools can be used to automate and optimize processes. For example, predictive analytics can be used to forecast demand and optimize inventory levels, while blockchain technology can enhance transparency and traceability in the supply chain.

Benefits:

Improved efficiency
Reduced cost
Enhanced transparency

Customer relationship management (CRM)

BPR is a pivotal strategy for organizations that want to overhaul their customer relationship management (CRM) processes. Steps of business process reengineering for CRM include integrating customer data from disparate sources, using advanced analytics for insights, and optimizing service workflows to provide personalized experiences and shorter wait times.

BPR use cases for CRM might include:

Implementing integrated CRM software to centralize customer data and enable real-time insights
Adopting omnichannel communication strategies to provide seamless and consistent experiences across touchpoints
Empowering frontline staff with training and resources to deliver exceptional service

Using BPR, companies can establish a comprehensive view of each customer, enabling anticipation of their needs, personalization of interactions and prompt issue resolution.

Benefits:

360-degree customer view
Increased sales and retention
Faster problem resolution

Digitizing administrative processes

Organizations are increasingly turning to BPR to digitize and automate administrative processes to reduce human errors. This transformation entails replacing manual, paper-based workflows with digital systems that use technologies like Robotic Process Automation (RPA) for routine tasks.

This might include streamlining payroll processes, digitizing HR operations or automating invoicing procedures. This can lead to significant improvements in efficiency, accuracy and scalability and enable the organization to operate more effectively.

Benefits:

Reduced processing times
Reduced errors
Increased adaptability

Improving product development processes

BPR plays a crucial role in optimizing product development processes, from ideation to market launch. This comprehensive overhaul involves evaluating and redesigning workflows, fostering cross-functional collaboration and innovating by using advanced technologies. This can involve implementing cross-functional teams to encourage communication and knowledge sharing, adopting agile methodologies to promote iterative development and rapid prototyping, and using technology such as product lifecycle management (PLM) software to streamline documentation and version control.

BPR initiatives such as these enable organizations to reduce product development cycle times, respond more quickly to market demands, and deliver innovative products that meet customer needs.

Benefits:

Faster time-to-market
Enhanced innovation
Higher product quality

Updating technology infrastructure

In an era of rapid technological advancement, BPR serves as a vital strategy for organizations that need to update and modernize their technology infrastructure. This transformation involves migrating to cloud-based solutions, adopting emerging technologies like artificial intelligence (AI) and machine learning (ML), and integrating disparate systems for improved data management and analysis, which enables more informed decision-making. Embracing new technologies helps organizations improve performance, cybersecurity and scalability, and position themselves for long-term success.

Benefits:

Enhanced performance
Improved security
Increased innovation

Reducing staff redundancy

In response to changing market dynamics and organizational needs, many companies turn to BPR to restructure their workforce and reduce redundancy. These strategic initiatives can involve streamlining organizational hierarchies, consolidating departments and outsourcing non-core functions. Optimizing workforce allocation and eliminating redundant roles allows organizations to reduce costs, enhance operational efficiency and focus resources on key priorities.

Benefits:

Cost savings
Increased efficiency
Focus on core competencies

Cutting costs across operations

BPR is a powerful tool to systematically identify inefficiencies, redundancies and waste within business operations. This enables organizations to streamline processes and cut costs.

BPR focuses on redesigning processes to eliminate non-value-added activities, optimize resource allocation, and enhance operational efficiency. This might entail automating repetitive tasks, reorganizing workflows to minimize bottlenecks, renegotiating contracts with suppliers to secure better terms, or using technology to improve collaboration and communication. This can enable significant cost savings and improve profitability.

Benefits:

Improved efficiency
Lower costs
Enhanced competitiveness

Improving output quality

BPR can enhance the quality of output across various business processes, from manufacturing to service delivery. BPR initiatives generally boost key performance indicators (KPIs).

Steps for improving output quality involve implementing quality control measures, fostering a culture of continuous improvement, and using customer feedback and other metrics to drive innovation.

Technology can also be used to automate processes. When employees are freed from distracting processes, they can increase their focus on consistently delivering high-quality products and services. This builds customer trust and loyalty and supports the organization’s long-term success.

Benefits:

Higher customer satisfaction
Reduced errors
Enhanced brand image

Human resource (HR) process optimization

BPR is crucial for optimizing human resources (HR) processes. Initiatives might include automating the onboarding process with easy-to-use portals, streamlining workflows, creating self-service portals and apps, using AI for talent acquisition, and implementing a data-driven approach to performance management.

Fostering employee engagement can also help attract, develop and retain top talent. Aligning HR processes with organizational goals and values can enhance workforce productivity, satisfaction and business performance.

Benefits:

Faster recruitment cycles
Improved employee engagement
Strategic talent allocation

BPR examples: Case studies

The following case study examples demonstrate a mix of BPR methodologies and use cases working together to yield client benefits.

Bouygues becomes the AI standard bearer in French telecom

Bouygues Telecom, a leading French communications service provider, was plagued by legacy systems that struggled to keep up with an enormous volume of support calls. The result? Frustrated customers were left stranded in call lines, and Bouygues was at risk of being replaced by its competitors. Thankfully, Bouygues had partnered with IBM previously in one of our first pre-IBM watsonx™ AI deployments. This phase 1 engagement laid the groundwork perfectly for AI’s injection into the telecom’s call center during phase 2.

Today, Bouygues greets over 800,000 calls a month with IBM watsonx Assistant™, and IBM watsonx Orchestrate™ helps alleviate the repetitive tasks that agents previously had to handle manually, freeing them for higher-value work. In all, agents’ pre-and-post-call workloads were reduced by 30%.1 In addition, 8 million customer-agent conversations—which were, in the past, only partially analyzed—have now been summarized with consistent accuracy for the creation of actionable insights.

Taken together, these technologies have made Bouygues a disruptor in the world of customer care, yielding a USD 5 million projected reduction in yearly operational costs and placing them at the forefront of AI technology.1

Finance of America promotes lifetime loyalty via customer-centric transformation

By co-creating with IBM, mortgage lender Finance of America was able to recenter their operations around their customers, driving value for both them and the prospective home buyers they serve.

To accomplish this goal, FOA iterated quickly on both new strategies and features that would prioritize customer service and retention. From IBM-facilitated design thinking workshops came roadmaps for a consistent brand experience across channels, simplifying the work of their agents and streamlining the application process for their customers.

As a result of this transformation, FOA is projected to double their customer base in just three years. In the same time frame, they aim to increase revenue by over 50% and income by over 80%. Now, Finance of America is primed to deliver enhanced services—such as debt advisory—that will help promote lifetime customer loyalty.2

BPR examples and IBM

Business process reengineering (BPR) with IBM takes a critical look at core processes to spot and redesign areas that need improvement. By stepping back, strategists can analyze areas like supply chain, customer experience and finance operations. BPR services experts can embed emerging technologies and overhaul existing processes to improve the business holistically. They can help you build new processes with intelligent workflows that drive profitability, weed out redundancies, and prioritize cost saving.

Explore IBM Business Process Reengineering services

Subscribe to newsletter updates

1. IBM Wow Story: Bouygues Becomes the AI Standard-Bearer in French Telecom. Last updated 10 November 2023.

2. IBM Wow Story: Finance of America Promotes Lifetime Loyalty via Customer-Centric Transformation. Last updated 23 February 2024.

The post Business process reengineering (BPR) examples appeared first on IBM Blog.


Global ID

FUTURE PROOF EP. 23 — Every society is built on trust

FUTURE PROOF EP. 23 — Every society is built on trust

GlobaliD has been flying under the radar this last year, but there’s been a ton of hard work going on behind the scenes, and the app is more feature-rich than ever.

In our latest episode of the FUTURE PROOF podcast, GlobaliD co-founder and CEO Mitja Simcic gives us an overview of how we’re rethinking trust in the 21st century while also catching us up on some of the most exciting recent updates.

Download the GlobaliD app
GlobaliD on X
Mitja on X

FUTURE PROOF EP. 23 — Every society is built on trust was originally published in GlobaliD on Medium, where people are continuing the conversation by highlighting and responding to this story.


SC Media - Identity and Access

Proposed FTC commercial surveillance rules expected soon

New proposed commercial surveillance regulations are poised to be unveiled by the Federal Trade Commission in the next few months amid concerns of misuse and data security gaps, reports The Record, a news site by cybersecurity firm Recorded Future.



Shyft Network

Guide to FATF Travel Rule Compliance in Indonesia

The minimum threshold for the FATF Travel Rule in Indonesia is set at USD 1,000, but transactions below this amount still require collecting basic information about the sender and recipient. Crypto firms in Indonesia must undergo a regulatory sandbox evaluation starting next year, and those failing to comply will be deemed illegal operators. Indonesia is transitioning its crypto industry regulation from Bappebti to OJK by 2025, aiming to align with international standards and improve consumer protection and education.

Indonesia, the world’s fourth-most populous nation, is also one of the largest cryptocurrency markets globally. In February 2024, it recorded IDR 30 trillion ($1.92 billion) in crypto transactions, and the number of registered crypto investors hit 19 million, according to Bappebti.

Considering crypto’s growing popularity, the Indonesian government has been taking active steps over the past few years towards crypto regulation, including adopting the FATF Travel Rule.

‍Background of Crypto Travel Rule in Indonesia

‍In 2021, Indonesia adopted international FATF standards to enhance the prevention and eradication of money laundering and terrorism financing in the crypto sector.

Then, in April 2023, the FATF assessment of the country’s request for FATF membership found that Indonesia has a robust legal framework to combat money laundering and terrorist financing.

However, it noted that more needs to be done to improve asset recovery, risk-based supervision, and proportionate and dissuasive sanctions.

The report further noted that virtual asset service providers (VASPs) have taken steps to implement their obligations but are still in the early stages of implementing AML/CFT requirements.

‍Key Features of Crypto Travel Rule

‍Crypto transactions above a certain threshold on exchanges registered with Bappebti must comply with the rules requiring the obtaining and sharing of specific sender and recipient information.

Under Indonesia’s APU and PPT (Anti-Money Laundering and Prevention of Terrorism Financing) programs, a crypto business must meet certain requirements, such as:

Appointing a money laundering reporting officer (MLRO)
Developing and implementing internal AML policies
Conducting regular risk assessments.

Moreover, a crypto business must:

Conduct Customer Due Diligence (CDD), which involves collecting and verifying information (customer’s name, address, and other personal data)
Assess associated risks
Conduct Simplified Due Diligence (SDD) and Enhanced Due Diligence (EDD), where applicable.

In addition, crypto businesses are also required to monitor transactions, conduct sanctions screening, report suspicious activity and transactions, and keep records.

‍Compliance Requirements

In accordance with international standards, Indonesia applies a minimum threshold of USD 1,000 (approximately IDR 16,215,400) for the FATF Travel Rule. However, transactions worth less than USD 1,000 are not entirely excluded, and certain information must still be collected:

Name of both the sender and recipient
The wallet address of both the sender and recipient

For transactions of USD 1,000 or more, the information to be collected is:

Name
Residential address
Wallet address
Identification document

Indonesian citizens must provide identity cards, while foreign nationals must provide passports and identity cards issued by their country of origin, or a Limited Stay Permit Card (KITAS) or Permanent Stay Permit Card (KITAP) in the case of Crypto Asset Customers.

The recipients, on the other hand, must provide:

Name
Residential address
Wallet address
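To make the threshold logic concrete, here is a minimal sketch in Java of how a VASP might decide which originator fields to collect for a given transfer amount. The class and field names are hypothetical illustrations, not an official Bappebti or Veriscope schema:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: field collection around Indonesia's USD 1,000 threshold.
public class TravelRuleFields {
    static final BigDecimal THRESHOLD_USD = new BigDecimal("1000");

    // Returns the originator fields a VASP would need to collect.
    static List<String> requiredSenderFields(BigDecimal amountUsd) {
        // Below the threshold, basic information is still required.
        List<String> fields = new ArrayList<>(List.of("name", "walletAddress"));
        if (amountUsd.compareTo(THRESHOLD_USD) >= 0) {
            // At or above the threshold, fuller originator details apply.
            fields.add("residentialAddress");
            fields.add("identificationDocument"); // ID card, passport, KITAS/KITAP
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(requiredSenderFields(new BigDecimal("250")));  // below threshold
        System.out.println(requiredSenderFields(new BigDecimal("1500"))); // above threshold
    }
}
```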

Global Context

In its report earlier this month, the FATF noted that nearly 70% of its member jurisdictions globally have adopted the FATF Travel Rule. It said that the likes of the US, Austria, France, Germany, Singapore, Japan, and Canada have fully embraced the Crypto Travel Rule with proper checks and systems in place.

Meanwhile, Indonesia, along with Mexico, Malaysia, Brazil, Colombia, and Argentina, is still working towards fully adhering to the FATF recommendations.

‍Concluding Thoughts

‍Crypto regulation in Indonesia is rapidly evolving, with authorities updating the regulatory framework to clarify rules and incorporate a sandbox approach for testing products.

As crypto adoption grows in Indonesia, these regulatory changes, including adherence to the Crypto Travel Rule, aim to manage the expanding market while maintaining compliance with international standards.

However, all stakeholders, including the government and Virtual Asset Service Providers (VASPs), must ensure that these regulations have minimal impact on end users.

FAQs

Q1: What is the minimum transaction threshold for the FATF Travel Rule in Indonesia?

The minimum transaction threshold for the FATF Travel Rule in Indonesia is USD 1,000. However, certain sender and recipient information still needs to be collected for transactions below this amount.

For transactions exceeding the USD 1,000 threshold, Indonesian citizens must provide their identity cards, while foreign nationals are required to present passports and identity cards issued by their country of origin, or a Limited Stay Permit Card (KITAS) or Permanent Stay Permit Card (KITAP) in the case of Crypto Asset Customers.

‍About Veriscope

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Guide to FATF Travel Rule Compliance in Indonesia was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


auth0

What is a Mobile Driver's License (mDL) and How to Start Using Them

Nowadays you can replace your physical driver's license with a digital, cryptographically verifiable one. Let's learn what mobile driver's licenses are and how to start using them.

SC Media - Identity and Access

South Korean defense firms subjected to North Korean APT attacks

North Korean state-sponsored advanced persistent threat operations Lazarus Group, Kimsuky, and Andariel were noted by South Korea's National Police Agency to have targeted several South Korean defense industry entities since late 2022 in a bid to obtain intelligence regarding defense technologies, reports Security Affairs.



Ontology

Ontology Weekly Report (April 16th — 22nd, 2024)

Ontology Weekly Report (April 16th — 22nd, 2024)

Welcome to another edition of our Ontology Weekly Report. This week has been filled with exciting developments, continued progress on our technical fronts, and dynamic community engagement. Here’s the rundown of our activities and updates:

🎉 Highlights

Lovely Wallet Giveaway: Don’t miss out on our ongoing giveaway with Lovely Wallet! Great prizes are still up for grabs.

Latest Developments

Web3 Wonderings Success: Last week’s Web3 Wonderings session was a major hit! Thank you to everyone who joined and contributed to the engaging discussion.
Ontology on Guarda Wallet: We are thrilled by the continued support of Guarda Wallet, making it easier for users to manage their assets.
Blockchain Reporter Feature: Our initiative for the 10M DID fund has been featured by Blockchain Reporter, spotlighting our efforts to enhance digital identity solutions.

Development Progress

Ontology EVM Trace Trading Function: Now at 87%, we continue to make substantial progress in enhancing our trading capabilities within the EVM framework.
ONT to ONTD Conversion Contract: Development has advanced to 52%, streamlining the conversion process for our users.
ONT Leverage Staking Design: We’ve made further progress, now at 37%, developing innovative staking mechanisms to benefit our community.

Product Development

AMA with Kita Foundation: Be sure to tune into our upcoming AMA session with the Kita Foundation, where we’ll dive into future collaborations and developments.

On-Chain Activity

Steady dApp Count: Our network consistently supports 177 dApps on MainNet, reflecting a stable and robust ecosystem.
Transaction Activity: This week, we observed an increase of 1,100 dApp-related transactions and a significant uptick of 15,430 in total MainNet transactions, indicating active and growing network utilization.

Community Growth

Engaging Community Discussions: Our platforms on Twitter and Telegram are continuously abuzz with discussions on the latest developments and community interactions. Your insights and participation are what make our community thrive.
Telegram Discussion on Privacy: Led by Ontology Loyal Members, this week’s focus was on “Empowering Privacy with Anonymous Credentials,” exploring advanced solutions for enhancing user privacy.

Stay Connected

Stay engaged and updated with Ontology through our various channels. We value your continuous support and are excited to grow together in this journey of blockchain innovation.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Weekly Report (April 16th — 22nd, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Embark on the Ontonauts Odyssey

Unite, Innovate, and Propel Ontology Forward

Hello, Ontonauts!

We’re excited to share something cool with you — the Ontonauts Odyssey! This is a bunch of quests we’ve put together for our community. It’s all about getting involved, coming up with new ideas, and helping Ontology grow.

What is the Ontonauts Odyssey?

Think of the Ontonauts Odyssey as a series of fun tasks. Each one is designed to get you active, give you rewards, and make you feel more a part of the Ontology community. By taking part, you’re helping make Ontology even better.

What’s Waiting for You?

Starting with sharing our message on social media, inviting friends to join us, and moving on to coming up with new ideas, and finding partners for Ontology, every task you complete helps us all move forward. Here’s a quick look at what you can do:

First Task: Share our news on your social media.
Second Task: Bring new friends into our community.
Tasks Three to Seven: From coming up with new ideas to finding partners and expanding our network, each step you take helps us grow.
Why Should You Join?

By joining in, you’re not just helping us; you’re making Ontology better and stronger. We’ve got rewards to thank you for your hard work and ideas. This is your chance to make a difference in our community and the wider world of the web.

We’d Love to Hear from You!

Your thoughts and feedback are important to us. They help us make things better for everyone. You can send us your ideas and suggestions through a form, email, or on our community forums. Together, we can make this experience great for everyone.

Ready to Start?

If you’re ready to get going, here’s what you need to do:

Ontonauts Odyssey #1: Social Media Shoutout

MISSION
🌟 What to Do: Like and retweet our big news tweet.
Why It Matters: Your support spreads the word and brings more attention to our cause.

REWARDS
🏆 Gain: 500 XP for taking action.

SUBMISSION
How It Works: No need to send anything in. This quest finishes by itself once you do the task!

Ontonauts Odyssey #2: Grow Our Crew

MISSION
What to Do: Bring three friends (or more!) into our Zealy community. They’ve got to finish a quest too, for it to count.
Why It Matters: More friends mean more fun and more ideas. Let’s grow together.

GUIDE
How to Do It: Head to your profile and click “invite friends.” Send your link to friends so they can join us on Zealy and start their own quest journey.
Tracking: You can see how many friends have joined thanks to you in your profile.

SUBMISSION
How It Works: This quest checks itself off when you get a friend to complete their first quest.

REWARDS
🏆 Gain: 300 XP for each friend who joins and completes a quest.

Ontonauts Odyssey #3: Genius Ideas Wanted

MISSION
🚀 What to Do: Got a brilliant idea for making Ontology even better? We want to hear it. No common ideas, please. We’re looking for Einstein-level thoughts!

GUIDE
📚 Criteria: It should be unique, doable, and not too expensive. Also, it shouldn’t be something we’re already working on or that someone else has suggested.

SUBMISSION
📜 How to Share: Send in your idea, and our team will take a look.

REWARDS

🏆 Gain: 300 XP for each idea that meets our criteria.

REQUIREMENTS
Must be at least level 4 and have completed Odyssey #2.

Ontonauts Odyssey #4: Share Your Story

MISSION
🚀 What to Do: Write about your experience with Ontology and your hopes for Web3.
Why It Matters: Your stories inspire us and others. Let’s share our visions.

GUIDE
📚 How to Share: Make sure to tag @OntologyNetwork and use #ontonauts in your tweets.

SUBMISSION
📜 How to Share: Just link us to your thread.

REWARDS
🏆 Gain: 300 XP for sharing your story.

REQUIREMENTS
Finish Ontonauts Odyssey #3 first.

Ontonauts Odyssey #5: Create Connections

MISSION
🎯 What to Do: Get us featured in newsletters, blogs, podcasts, events, AMAs, or social groups. Aim for quality audiences.

GUIDE
📚 Focus: Quality means engaged and real followers. The collaboration could be an article, a mention, or another cool idea you have!

SUBMISSION
📝 How to Share: Put proof and details in a public Google Drive folder.

REWARDS
🏆 Gain: 300 XP for successful collaborations.

Ontonauts Odyssey #6: Get Us Listed

MISSION
🎯 What to Do: Add our project to a Web3 listing website.

GUIDE
📚 Details: Most information is on ont.io. Ask the team if you need more.

SUBMISSION
📝 How to Share: Only submit the listing you’ve made. Double-check to avoid mistakes.

Ontonauts Odyssey #7: Bring New Partners

MISSION
🎯 What to Do: Introduce a new partner to Ontology from your contacts.

GUIDE
📚 Who to Look For: Anyone interested in working with us, like another protocol or media partner.

SUBMISSION
📝 How to Share: Use a public Google Docs link to share the contact’s name, email, and any useful info.

REWARDS
🏆 Gain: 500 XP for each new partner introduced.

Your involvement makes all the difference. Each quest you complete brings new energy and ideas into our community. Let’s make Ontology stronger, together!

Embark on the Ontonauts Odyssey was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Lockstep

Talking Digital ID with NAB


I was delighted to appear on the latest NAB Digital Next podcast in conversation with Alysia Abeyratne, NAB Senior Manager for Digital Policy. We drilled into the history of verifiable credentials and the recent awareness that identification doesn’t need so much identity.
Hence NAB’s terminology has switched from “digital identity” to Digital ID — something that’s much more familiar and concrete.

NAB realises that identification processes need better data — IDs and credentials from the real world packaged in improved digital formats that make them more reliable online than plaintext IDs, and less vulnerable to theft.

The Australian federal government’s Digital ID Bill embodies this paradigm shift.

Individuals don’t need new numbers or any new “digital identity”; they need better ways to handle their existing IDs online. And it’s exactly the same with businesses; the best digital technologies conserve the rules and relationships that are working well in the analogue world.

After reviewing the historical language of “digital identity” and the state of the art in verifiable credentials, I went on to discuss how better verification of all important data is an urgent need, in the broader context of AI and the wicked problems of Deep Fakes.

Here are some edited extracts from the podcast.

Some History

At the dawn of ecommerce, in 1995, Australia was leading in e-signatures, e-authentication and PKI (public key infrastructure), even before we were buying much online. Around then, Australia passed its technology-neutral electronic signature law.

PKI was dominated by Defence thinking, thanks to national security perspectives. This led to onerous operational standards, some of which are still with us today in the TDIF.

Trying to help people think about new digital concepts, we had naive metaphors for identity, such as “passports”, which we hoped would let us freely go around cyberspace and prove who we are. It turned out to be really hard to have a general-purpose proof of identity.

About 15 years ago, the digital industry got a little more focused, by looking at specific assertions, attributes and claims. These boil down to what you need to know about somebody, from application to application.

And what do you need to know about a credential?

Verifiable Credentials and What Do You Really Need to Know?

Sophisticated verifiable credentials today let you know where a credential has come from, reference its terms and conditions, and can even convey how a credential has been carried (so we can tell the difference, for example, between device-bound Passkeys and synced Passkeys).

Instead of identity, we can ask better design questions, about the specifics that enable us to transact with others. When it’s important and you can’t rely on trust, then you need to know where a counterparty’s credentials have come from.

Provenance matters for devices too, and for data in general. The subjects of verifiable credentials can be non-humans, or indeed intangible items such as data records.

In almost all cases, we need to ask: Where does a subject come from? How do you know that a subject is fit for purpose? And where will you get these quality signals?

The same design thinking pattern recurs throughout digital credentials, the Internet of Things, software supply chains, and artificial intelligence. We need data in everything we do, and we need to know the story behind the data.

The importance of language

We habitually talk about “identity” but what do you really need to know about somebody?

When you put it like that, we all know intuitively that the less you know about me, the better!

Technically that’s called data minimisation. In privacy law, it’s sometimes called purpose specification; in security it’s the good old need-to-know principle.

What do you really need to know about me? It’s almost never my identity, as we saw at the first NAB roundtable (PDF).

So, if identity is not necessarily our objective, we should not call this thing “digital identity”. It’s as simple as that.

Digital identity makes uneven progress

The Digital Identity field is infamously slow-moving. The latest Australian legislation is the third iteration in four years, and the government’s “Trusted Digital Identity Framework” (TDIF) dates back to 2016 (PDF).

We’ve made Digital Identity hard by using bad metaphors – especially “identity” itself. There is wider appreciation now that the typical things we need to know online about people (and many other subjects) are not “identity” but credentials: specific properties, facts and figures.

But meanwhile we have made great progress on verifiable credentials standards and solutions. White label verifiable credentials are emerging; the data structures can be customised to an enterprise’s needs, issued in bulk from a cloud service, and loaded into different digital wallets and devices.

Enterprises will be able to convert their employee IDs from analogue to digital; colleges and training organisations will do the same for student IDs and qualifications. The result will be better security and privacy as users become able to prove exactly what they need to know about each other in specific contexts.

Governance of Digital ID and beyond

A major potential game changer is happening at home in Australia. The Digital ID Bill and the resulting Australian Government Digital ID System (AGDIS) make the problem simpler by making the objective smaller. Instead of any new and unfamiliar “digital identity”, AGDIS conserves the IDs we are used to, and introduces a governance regime for digitising them.

The IDs we are familiar with are just database indexes. And we should conserve that. The Australian Digital ID Bill recognises that ID ecosystems exist and we should be able to govern the digitising of IDs in a more secure way. So, the AGDIS is a more careful response to the notorious data breaches in recent years.

The plaintext problem

The real problem exposed by data breaches is the way we all use plaintext data.

Consider my driver licence number. That ID comprises six numbers and two letters, normally conveyed on a specially printed card, and codifies the fact I am licensed to drive by the state of New South Wales. My status as a driver in turn is a proxy for my good standing in official government systems, so it has become a token of my existence in the community. Along with a few other common “ID documents” the driver licence has become part of a quasi-standard grammar of identification.

Historically, IDs are presented in person; the photo on a licence card proves the credential belongs to the person presenting it. Relative to the core ID, the photo is a type of metadata; it provides an extra layer of evidence that associates the ID with the holder.

When we moved identification online, we maintained the grammar but we lost the supporting metadata. Online, businesses ask for a driver’s licence number but have none of the traditional signals about the quality of the ID. Simply knowing and quoting an ID doesn’t prove anything; it’s what geeks call a “shared secret”, and after a big data breach, it’s not much of a secret anymore.

Yet our only response to data breaches is to change the IDs and reissue everybody’s driver’s licences. The new plaintext is just as vulnerable as it was before. It’s ridiculous.

But let’s look carefully at the problem.

The driver licence as a proxy for one’s standing is still valid; the licence does provide good evidence that a certain human being physically exists. But knowing the ID number is meaningless. We need to move away from plaintext presentation of IDs — as Lockstep submitted to the government in the 2023 consultations on Digital ID legislation.

Crucially, some 15 years ago, banks did just that. The banks transitioned from magnetic stripe credit cards, which encode cardholder data as plaintext, to chip cards.

The chip card is actually a verifiable credential, albeit a special purpose one, dedicated to conveying account details. In a chip card, the credit card number is digitally signed by the issuing bank, and furthermore, every time you dip your card or tap it on a merchant terminal, the purchase details are countersigned by the chip.

Alternatively, when you use a digital wallet, a special secure chip in your mobile phone does the same thing: it countersigns the purchase to prove that the real cardholder was in control.
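As a rough illustration of that countersigning pattern, the following Java sketch uses a locally generated key pair to stand in for the card or phone’s secure chip. Real cards and wallets use certified hardware keys and EMV protocols, so this is only a schematic of the idea:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch: a device-bound key countersigns each transaction, so the relying
// party verifies control of the credential instead of trusting plaintext.
public class CountersignDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair deviceKey = kpg.generateKeyPair(); // stands in for the secure chip

        byte[] purchase = "merchant=corner-store;amount=42.50;nonce=8731"
                .getBytes(StandardCharsets.UTF_8);

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(deviceKey.getPrivate());
        signer.update(purchase);
        byte[] countersignature = signer.sign();

        // The terminal (or online relying party) checks the countersignature
        // against the public key bound to the cardholder's credential.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(deviceKey.getPublic());
        verifier.update(purchase);
        System.out.println("Cardholder was in control: " + verifier.verify(countersignature));
    }
}
```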

Mimicking modern credit card security for Digital IDs

That’s the pattern we now need in order to pivot from plaintext IDs to verifiable IDs.

The Australian Competition and Consumer Commission (ACCC) has the role of Digital ID regulator. As it did with another important digital regime, the Consumer Data Right (CDR), the ACCC is expected now to convene technical working groups to develop detailed rules and adopt standards for governing Digital ID.

If the rules adopt hardware-based digital wallets and verifiable credentials, then the presentation of any ID can be as secure, private and simple as a modern payment card. That will be a true game changer.

 

The post Talking Digital ID with NAB appeared first on Lockstep.

Tuesday, 23. April 2024

Indicio

Decentralized identity — driving digital transformation in banking and finance

From managing deepfakes to creating reusable KYC, decentralized identity’s ability to easily implement verifiable identity and data without direct integration provides a powerful path for improved efficiency, better fraud protection, and a new level of personalized account service and data privacy.

By Tim Spring

Over the next few weeks Indicio will look at how decentralized identity and verifiable credential technology can transform banking and finance. The unique way this technology handles authentication — such as the identity of an account holder — is a powerful solution to the challenges of identity fraud, while also being a better way to manage customer experience (no more passwords, no need for multi-factor authentication).

But it doesn’t stop there — we can authoritatively know who we are talking to online, and verify the integrity of their data, leading to seamless operational processes and providing a starting point for creating better products and services.

Here’s a taste of what we’ll be looking at.

Re-use costly KYC

To open a checking account at most banks, you need to provide government-issued identification with your photo, your Social Security card or Taxpayer Identification Number, and proof of your address. Gathering this information can be difficult, time consuming, and frustrating. KYC can take anywhere from 24 hours to three weeks, and costs the average bank $60 million per year.

How many times should you need to do this? Once, with a verifiable credential. And once you’ve done it, the information can easily be shared both internally and with partners, resulting in smoother workflows and more opportunities for customers.
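As a rough sketch of the idea, a reusable KYC credential might carry fields along these lines. The shape below is hypothetical, for illustration only, and is not Indicio’s actual credential schema:

```java
import java.time.Instant;

// Hypothetical shape of a reusable KYC credential: the issuing bank verifies
// the customer once, signs the claims, and the customer can present the
// signed credential to partners without repeating the whole KYC process.
record KycCredential(
        String subjectId,       // identifier for the customer (e.g. a DID)
        String issuerId,        // the bank that performed the original KYC
        String fullName,
        String documentType,    // e.g. "government-issued photo ID"
        Instant issuedAt,
        byte[] issuerSignature  // covers all of the fields above
) {}
```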

Fraud

In 2023, 145,206 cases of bank fraud were reported in the US alone. But the headline loss of money isn’t the only problem here: For every $1 lost to fraud, $4.36 is lost in related expenses, such as legal fees and recovery. This means that the estimated $1.6 billion lost to fraudulent payments in 2022 cost almost $7 billion.

Decentralized identity provides a better way to tackle this — and it doesn’t require banks to embark on building a massive new IAM system.

Phishing

Phishing happens when you think the email or SMS message you just received is from your bank and you absolutely must log in via the link provided or face disaster. It works: 22% of all data breaches involve phishing.

We’ll explain how verifiable credentials provide a way for you to always know — and know for certain — that you are talking to your bank and, if you are the bank, that you’re talking to a real customer.

Frictionless processes keep customers coming back

44% of consumers face medium to high friction when engaging with their digital banking platform. This means that almost half of people trying to access online banking have a hard time, and with friction being attributed as the cause for 70% of abandoned digital journeys, customers are very likely to give up and leave if they face frustration.

We’ll explain how verifiable credentials save customers (and you) from the costs of friction.

Passwordless Login

No one likes passwords. They are the universal pain point of digital experience. And that pain can be costly: 30% of users have experienced a data breach due to weak passwords. Verifiable credentials make all this go away and enable seamless, passwordless login. Imagine never having to remember or recreate a password again, or follow up with multi-factor authentication.

We’ll also look at improving financial inclusion and crystal ball the near future — simplified payments, countering buyback fraud and verifiable credentials for credit cards.

For those not familiar with verifiable credentials, it might help to prepare with our Beginner’s Guide to Decentralized Identity or watch one of our demonstrations.

And, of course, if you have questions or would like to discuss specific use cases or issues your organization is facing please get in touch with our team.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Decentralized identity — driving digital transformation in banking and finance appeared first on Indicio.


auth0

Proof Key for Code Exchange (PKCE) in Web Applications with Spring Security

Implementing OpenID Connect authentication in Java Web Applications with Okta Spring Boot Starter and Spring Security support for Authorization Code Flow with PKCE
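For a feel of what the configuration looks like, here is a minimal sketch of enabling PKCE for the authorization code flow in a Spring Security web application. It assumes Spring Security 6-style configuration, where OAuth2AuthorizationRequestCustomizers.withPkce() is available, and that the Okta Spring Boot Starter supplies the client registration from application properties:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
import org.springframework.security.oauth2.client.web.DefaultOAuth2AuthorizationRequestResolver;
import org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestCustomizers;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http,
                                    ClientRegistrationRepository clients) throws Exception {
        // Attach a PKCE code_challenge to every outgoing authorization request,
        // even though this is a confidential (server-side) client.
        var resolver = new DefaultOAuth2AuthorizationRequestResolver(
                clients, "/oauth2/authorization");
        resolver.setAuthorizationRequestCustomizer(
                OAuth2AuthorizationRequestCustomizers.withPkce());

        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .oauth2Login(login -> login.authorizationEndpoint(
                    endpoint -> endpoint.authorizationRequestResolver(resolver)));
        return http.build();
    }
}
```

With this in place, every authorization request carries a code_challenge, and Spring Security sends the matching code_verifier during the token exchange.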

Infocert

Electronic Signature Software: What is it and What is it for?

What is and what is the purpose of digital signature software

Electronic signature software is a tool for digitally signing documents and contracts with full legal validity, managing all signing processes, and affixing time stamps to multiple files and folders.

 

By using digital signature software, individuals, professionals and companies can manage signature processes using the latest cryptographic technologies, which guarantee the authenticity and integrity of the document and ensure that it is not subsequently altered by unauthorized modifications. Its use is now widespread due to its ability to facilitate operations and processes that would otherwise require more time and resources. In fact, digital signature applications make it possible to digitize, automate and speed up processes, avoiding the use of paper and decreasing CO2 emissions.

 

Using e-signature software it is possible to sign agreements, transactions, contracts and business documents by managing approval and signing processes more efficiently. These digital solutions also streamline and optimize workflows, eliminating costs related to document printing, mailing and paper filing.

How electronic signature software works

These IT solutions integrate advanced encryption and authentication technologies that ensure the security and legal validity of signatures affixed to documents. Initially, it is necessary to choose a signature certificate (simple, advanced, or qualified), which guarantees the identity of the signer and the authenticity of his or her signature.

 

After completing the personal identity recognition required to obtain the digital certificate, and after installing the electronic signature software, the user follows an authentication procedure, often based on a combination of username, password, and sometimes additional identifying factors. From this point, e-signature software can be used on one’s device: the user loads a document into the software and signs it using the chosen digital certificate.

 

These IT solutions allow documents to be signed in electronic formats (PDF, P7M or XML) and can vary depending on the operating system or the specific needs of the user. In addition, a time stamp can be affixed, providing proof of the exact moment the document was signed, offering an additional level of security and reliability.
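In essence, the signing step hashes the document and signs the digest with the private key behind the signer’s certificate. The Java sketch below is a simplified illustration; real products use certificate chains issued after identity verification and a qualified Time Stamping Authority, rather than the freshly generated key and local clock used here:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;
import java.time.Instant;
import java.util.HexFormat;

// Simplified sketch of the core signing step in e-signature software.
public class DocumentSigningDemo {
    public static void main(String[] args) throws Exception {
        byte[] document = "Contract v1: parties agree to ...".getBytes(StandardCharsets.UTF_8);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(document);

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair signerKey = kpg.generateKeyPair(); // stands in for the certificate's key

        Signature signature = Signature.getInstance("SHA256withRSA");
        signature.initSign(signerKey.getPrivate());
        signature.update(digest);
        byte[] value = signature.sign();

        // A real time stamp would come from a trusted TSA, not the local clock.
        System.out.println("Signed at " + Instant.now()
                + ", signature: " + HexFormat.of().formatHex(value, 0, 16) + "...");
    }
}
```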

 

Cutting-edge e-signature software includes features dedicated to document organization, integration with enterprise cloud storage and ERP services, and management of deadlines, urgencies and notification modes. One example is InfoCert’s GoSign, the advanced, easy-to-use electronic signature platform that enables the digitization of transactions and approval processes while ensuring the full legal value of signed documents.

The post Electronic Signature Software: What is it and What is it for? appeared first on infocert.digital.


IDnow

Consolidate, Share, Sustain—What’s propelling the mobility industry?

In Fluctuo’s recent European Shared Mobility Annual Review, which IDnow sponsored and contributed to, three significant trends emerge as to what is driving mobility, as seen through city and operator movements, based on data collected from 115 European cities.

One could say 2023 was the year of unexpected surprises within the mobility industry, after the Paris e-scooter ban turned many heads and required not only cities across Europe but also operators to reconsider their services and plans. The Paris ban kicked off a tightening of regulations across Europe within the mobility sector, causing cities such as Rome, Berlin and Brussels to significantly reduce the number of operators and e-scooters.

However, before these changes started taking effect, e-scooters were the favorite among shared mobility services. Between 2019 and 2022, Fluctuo reported, e-scooters led the market, overshadowing the use of bikes. But now, the tables, or should we say direction, have turned.

Seeing the need to change direction within shared and micromobility services, both users and operators headed toward the next-best, and perhaps healthier, mode of transport—bicycles.

European Shared Mobility Index 2023 – by Fluctuo. Download to discover insights into the future of shared mobility, including a country-specific break-down of mobility trends and the increasing importance of identity verification technology. Get your copy

Are bikes the new e-scooters?

With the need to enter new markets, operators shifted gears and put more time and effort into new offerings, specifically dockless bikes. And their efforts were not in vain: 2023 saw dockless bike fleets up 50% and ridership up 54% compared to previous years in which e-scooters dominated the market. And it wasn’t only dockless bikes that saw an increase in usage, but station-based bikes as well.

The after-effects from Paris made scooter operators realize that city authorities prefer shared bikes rather than e-scooters. This was clearly seen as the city of Paris topped the list at 45 million for station-based bike ridership and came in second after London for dockless bikes. Though it may seem that the two services should complement one another rather than compete, it would appear that dockless bikes are the preferred choice. Despite this, both bike services are expected to grow in 2024, with station-based bikes growing more steadily perhaps due to more affordable end-user pricing.

Even though bike sharing is picking up in Northern Europe, that does not mean scooters have been kicked to the curb. On the contrary, the popularity of scooters remains and grows in Eastern Europe.

I feel the need, the need for… reduction.

Okay, it may not have been what you were thinking but unfortunately speed is not the answer here. After Paris decided to go forward with banning e-scooters, many did not know how it would affect other major cities. Most probably thought that it would create a domino effect and other cities would follow suit, banning e-scooters left and right. But this did not come to pass.

Instead, other cities decided to cut scooter fleet sizes rather than banning them completely. This, however, was felt on the operator side, where companies went into survival mode. Seeing the need to make smart economic decisions in order to stay in the game, mobility operators had to reduce costs, exit markets (i.e., scooters) and in some cases merge with another operator, as seen with Tier and Dott. Consolidation became the name of the game.

Now, with the limited number of spots available in cities for scooter operators, companies must compete to stay active or risk losing the ability to operate in that location any longer.

But despite what sounds like grim news, the scooter fleets that have been reduced in major cities due to these new regulations are being moved to smaller cities and other cities without a number cap, resulting in fleet sizes remaining stable. Even better is the fact that fleets have grown 33% in Eastern Europe with Poland being an exceptionally large market for scooters.

Sharing is caring.

Bikes and scooters were not the only shared services that saw changes last year. Mopeds, for example, faced challenges due to cases of vandalism and theft in Eastern Europe. Safety concerns also arose: the Netherlands now requires users to wear a helmet on mopeds capped at 25 km/h. Nevertheless, the moped market remained stable.

One sharing service that did perform well last year, and seems to continue to do so, is free-floating car sharing. After a 39% increase in rentals last year, car sharing is seeing growing popularity in short-term rentals (2-3 hours) compared to rentals for an entire day. The cities leading the way are mostly German, including Berlin, Hamburg and Munich.

As cities and remaining operators start accepting regulations and gaining financial stability within the market, shared mobility services will continue to develop, providing cities and their inhabitants with greater benefits than before.

Going green.

As car sharing services gain greater popularity after continual success, this mobility option is one that breathes life into the growing e-mobility movement. With some car sharing operators already providing e-cars, these services not only decrease the volume of vehicles on the road, since there is less need for personal vehicles, but also allow for reallocating space in urban areas for public benefit.

Benefitting further from this movement is the integration of car sharing services with other sustainable transport options such as public transport, walking, biking, etc. By combining all options, this creates a more ecological way of living and a more convenient and flexible way for people to travel. But in order for this initiative to be successful, operators and cities must work together and invest in the necessary infrastructure.

IDV—the key to your transport services.

IDnow jumps on the train here as an important key in this necessary infrastructure. As regulations increase within major cities, safety requirements are implemented and theft rises, operators realize the importance of identifying their customers before allowing them to use their services. From age and driver’s license verification to digital signatures, our automated identity verification solutions allow operators to verify their users within seconds.

We drive trust, not frustration, with our services, providing a safe and secure experience for mobility operators and their customers. With fast, 24/7 remote onboarding, transport services can offer their users a frictionless and convenient way to travel, while operators can rest assured that they are meeting regulatory needs and fighting fraud upfront with our use of biometrics.

Thanks to our wide range of document coverage (types of documents and origin) with up to 99% acceptance rate globally as well as a choice of automated or even expert-led verification services, operators can scale with confidence.

Tap into document and biometric verification for seamless mobility experiences.

Want to know more about the future of mobility? Discover the major trends in the mobility industry, the innovative models and solutions available to you to design a seamless user experience. Get your free copy now

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


Verida

Verida Announces 200M VDA Token Long-Term Reward Program

Verida Announces 200M VDA Token Long-Term Reward Program

The Verida Foundation is preparing for its upcoming TGE and its related multi-tiered airdrop program. This program is intended to reward early adopters to the network and encourage newcomers to discover Verida’s features and capabilities. Both short and long-term incentives are on offer for participants. Read the previous announcements regarding these programs on March 28th and April 18th.

As part of this campaign, Verida is sharing more information on its planned long-term rewards and incentive programs, including details on dedicated funding for those programs. The central element of Verida’s longer-term growth reward programs is a dedicated pool of 200 million VDA tokens, representing 20% of the overall VDA supply, that will be distributed over a multi-year period through several dedicated programs.

Network Growth Rewards Explained

As described in the Verida Whitepaper, Verida’s token economics specifies that 20% of the overall token supply (200M tokens) will be allocated to Network Growth Rewards.

The Verida Network Growth token pool will be distributed to end users and application developers to incentivize long-term network growth. These token distributions will focus on the following key areas:

Connect your data: Earn tokens by pulling your personal data from web3 applications into your Verida identity
Connect a node: Earn tokens by operating infrastructure on the Verida Network
Connect a friend: Earn tokens by referring friends to join the Verida Network
Connect an app: Earn tokens by building a web3 application that leverages the Verida Network

This Network Growth Rewards pool unlocks monthly over a multi-year period and will allow the foundation to maintain several long-term reward programs backed by more than three million monthly tokens. At that rate, the 200 million token pool sustains roughly 66 months, or about five and a half years, of distributions.

The Network Growth Rewards pool will support ongoing programs including referral rewards, incentives for members to import additional datasets into Verida, and incentives to connect with dApps built on Verida. Additional reward programs will continue to be developed, and are anticipated to be presented to the Verida community in the months following the token launch.

VDA Launch-Related Airdrop and Incentive Programs

In addition to the long-term reward allocations from the Network Growth Reward pool, the Verida Foundation has developed a series of targeted near-term airdrops and reward programs coinciding with the launch of the network and the listing of the VDA Storage Credit Token on several centralized and decentralized exchanges.

This multi-stage airdrop campaign will distribute a minimum of 5 million VDA tokens across a series of targeted reward programs. Although each individual airdrop event within the larger campaign is planned to reward specific activities within the network, it is also expected that many Verida supporters and early adopters will qualify for rewards from several, and in some cases potentially all, of the planned airdrops.

The Verida Foundation’s strategy of multiple smaller, targeted airdrops (including its inaugural airdrop announced on March 28th) is a deliberate effort to address the shortcomings that often impact hastily conceived airdrop programs, where an excessive portion of the dedicated token reward pool too often finds its way into the hands of casual users and airdrop farmers. Another announcement, from April 21, described the second installment of Verida’s planned airdrop campaign, also a targeted program focused on a specific group of ecosystem followers.

By undertaking a series of carefully targeted rewards, Verida believes it can increase the percentage of rewards distributed to its many enthusiastic supporters. This increases the value of airdrop rewards for actual Verida users, and multiplies the support and incentives for active users.

Verida looks forward to sharing further announcements related to ongoing and planned community recognition and network growth rewards programs as final details of those programs are settled.

Stay tuned for more news on our TGE and listing process! For all questions regarding Verida airdrops, please see our community Airdrops FAQ.

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. With cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for a wide range of industries. With a thriving community and a commitment to transparency and security, Verida is leading the charge towards a more decentralized and user-centric digital future.
Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Verida Announces 200M VDA Token Long-Term Reward Program was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


This week in identity

E50 - BeyondTrust and Entitle / Cisco Duo breach and Hypershield launch / CSPM+NHI / SecureAuth new CEO


This week, hosts Simon and David review a range of topical news events in the global identity and access management space. First up, BeyondTrust have announced a definitive agreement with Entitle, combining PAM and IGA. Cisco appear twice: once regarding a breach of the Duo MFA service, and again regarding the launch of their new solution, Hypershield. A discussion on definitions follows, before a quick comment on the new CEO at SecureAuth.


YeshID

Access Management Made Easy


Editor’s note: Thanks to our Customer Success Engineer, Thilina, for authoring this week’s post on the woes (and the solution!) for access management.

I used to sit next to the IT SysAdmin of a small but rapidly expanding organization. I love to people-watch, and one of the things I would see them do, always accompanied by grumbling (I used to people-listen, too), was handling access requests.

One day after a particularly loud and animated grumble, I asked:

“An access request again hey? What is it this time?”

“Oi! Can’t get enough of my work, eh mate??” (They were British, so they said “Oi” not “Oy.”)

“But yes... it’s another access request for [they mentioned a sensitive system], and it’s the fifth one today – I swear if they ask again…”

Eventually, the profanity stopped, and I understood why it was so upsetting.

The company had a list of applications that required access to be granted (or revoked) in a recorded and auditable way. Auditable is key here. My Admin friend was the admin of all the applications because managing them required tech skills. But the admin was not always the “owner” or “approver,” the key decision maker who is supposed to vet requests. As a result, when someone wanted access, the admin couldn’t just grant it. They had to pass the request (via email or chat message) to the approver. And then wait. And sometimes, wait. And then wait some more. And nag the approver. And get nagged by the user. And when the approval finally came back, they needed to record it to make sure the spreadsheets were up to date for that quarterly compliance nonsense. No fun!

It is the third decade of the 21st century, and people are still doing this. There’s got to be a better way.

And with YeshID – there is!

1. Enter Your Applications & Their Owners

With YeshID you can add your applications and specify the application administrators – the owners or approvers I talked about earlier.

When someone wants access or is onboarded or offboarded, or there’s any other activity that concerns the owner’s applications, YeshID notifies them. This means less shoulder tapping on the admin and notifications going to the right place at the right time. And there’s an audit trail for compliance.

To get started quickly with your applications, YeshID provides two ways to add the admin (and login URL):

If you have a lot of apps that you’d like to get imported into YeshID, you can use a CSV file that has your list of apps and their owners.

And upload them to YeshID to quickly import your applications.

Or you can enter them one by one or edit them this way:

2. Update the Access Grid for your Apps

Once your applications are added, you can check out the Access Grid to see the current record of app-to-user memberships.

From here, you can go in and quickly check off boxes to mark which users already have access to which apps.

An even quicker way to update an app’s access, especially if you have many users, is to import a CSV of users per app. 

When you click into an app, you can import a CSV of email addresses and Yesh will take care of the rest.

YeshID will finish by showing you the differences so you can review the changes being made.

3. Let your Users and App Owners take care of their own Access Requests.

Now, since you’ve already done the hard work of:

Letting YeshID know of your Apps; and
Updating the access for your Apps

You and your users are now able to do the following:

My-Applications

Since YeshID is integrated into your Google Workspace, any of your users can navigate to app.yeshid.com/my-applications where they will see a grid of applications they already have access to. (No more wondering: “Wait, which URL was it again?”)

Request Access

Now, when one of your users requires access to one of your organization’s apps, they can navigate to “All Managed Apps” and Request Access to their app of choice. 

They can fill in details to provide reasons for their request.

After they submit the request, YeshID will notify the Application Owner about a pending request.

If you’re an Application Owner, you’ll be notified with a link to a page where you can see the request and choose to either Confirm or Reject.

If you confirm, YeshID will generate a task for the admin, and once access is granted, the user will see the newly granted application the next time they open their My-Applications grid.

And just like that, a world of shoulder tapping, lost conversations, and requests falling off the side of a desk is avoided through the use of smart technology and engineering by your friends at YeshID.

4. Use Yesh to Ace your Access Audits

With YeshID ingrained into your employee lifecycle, audits and Quarterly Access Reviews (QARs) become a breeze.

Simply go to your Access Grid and click on “Download Quarterly Report,” which will produce a spreadsheet created for access audits. 

Review the details (there’s a sheet per app!), fill in any additional comments, and just like that – your Quarterly Access Review is done.

Conclusion

Ready to reclaim your sanity? By automating access requests and approvals, YeshID empowers admins and users. Users gain self-service access requests, and admins are freed from the time-consuming manual process of nagging app owners and updating spreadsheets.

Sign up for a free YeshID trial today and see how easy access management can be. 

The post Access Management Made Easy appeared first on YeshID.


TBD

Beyond Blockchain: How Web5 Enables Fully Decentralized Apps

Why Web5 enables the decentralized apps blockchain made you dream - or have nightmares - about

When blockchain technology was first introduced a decade and a half ago, it made the possibility of a decentralized web seem viable to a mass audience. Since then, blockchains have proven themselves to be a poor solution to the majority of data storage problems outside of bearer assets - i.e. Bitcoin - because blockchains grow continuously without any ability to delete data, which just doesn’t make sense for things like personal data. But ‘blockchain’ isn’t synonymous with decentralization, and just because a blockchain isn’t the best solution for personal data storage doesn’t mean your app can’t be decentralized.

There are two common problems when trying to port a traditional centralized application over to the blockchain: data efficiency and data custody. In this post we’ll discuss why those two problems are blockers for most applications, and how Web5 components allow developers to build performant decentralized applications in a way that wasn't possible before.

Where Blockchain Doesn’t Fit: Data Storage & Efficiency

It’s possible to make the argument that data such as GIS coordinates, historical data, or widely used and immutable data are good candidates to be stored on the blockchain, because there’s no need for deletion and there is a requirement for complete history. While blockchain isn’t the only storage method that meets the needs of these systems, those systems are the ones best suited to utilize blockchain. The fact that such data are good candidates for blockchain, however, does not mean that blockchain is the best data storage solution for them.

However, blockchains are continuous ledgers of data that provide the benefit of immutability at the cost of storage inefficiency, which means they aren’t great for storing large amounts of data, or for data sets with large subsections of trivial or ignorable data. Their read times are slower than traditional data storage because the entire blockchain needs to be traversed for a read, and write times depend on settlement onto the blockchain, which is out of an individual’s hands. As a result, the cost per transaction will likely be significantly higher on a blockchain than with traditional data storage, and that cost can grow over time.

Imagine an on-chain photo storage app that allows for users to encrypt their data. While this framework would technically allow for the replacement of large-scale cloud-based photo services that are run by centralized corporations, the end user experience would be tarnished by much slower speeds than what users have come to expect. That’s because blockchains are designed to store an immutable ledger of transactions (in this case uploads of photos), and knowing what photos belong to who would involve traversing the entirety of the blockchain. Queries to this kind of data storage are slow and expensive, as would be the proof-of-work or other verification method needed to add new blocks to the chain. As a result, while a photo sharing app, or any app that requires storage of large files, is theoretically possible on-chain, it isn’t very scalable or performant.

Data Custody

A key tenet of the decentralized web is the idea of custody, or data ownership. Custody, the ability to hold an asset to which only you have full access, breaks down into a handful of concerns including:

Physical storage ownership
Deletion prevention
Censorship resistance
Speed of use
Data interoperability

A truly decentralized web makes it easy for each user to custody their own data, regardless of their reasoning for doing so. No matter a customer’s “why,” however, custody isn’t simply a matter of personal beliefs - it’s a feature that enables better user experience.

With blockchain technology, wallets don’t actually store your data; instead, they point to your data on the blockchain via the public/private keys they hold and represent. This means, for example, that a crypto wallet doesn’t actually “store” your Bitcoin; it holds an address that is used as the identifier on transactions recorded on-chain, which can then be used to determine your wallet’s Bitcoin holdings.

To take custody of your on-chain data, then, you would need to run your own node on the blockchain network in question, which may not be a good solution for the average consumer. Additionally, for a wallet to be available on all of your devices, it will likely be hosted on a centralized third party’s servers.

It’s important to note that while the issues of wallet storage and running nodes aren’t unsolvable problems, they are inherently outside of the definitions of what a typical blockchain wallet does today.

Building With Web5

Bearing in mind the limitations of custody and efficiency, it becomes clear that not a lot of data makes sense to be stored on-chain. However, just because you might not want to store important, large data like photos, or small, unimportant data like favicon history on-chain doesn’t mean that it isn’t possible to store that data in a decentralized way that offers custody and efficiency.

Key Web5 technologies like Decentralized Identifiers (DIDs), Decentralized Web Nodes (DWNs), and Verifiable Credentials (VCs), used as a supplement to or in place of blockchain, provide a framework for an internet that is both decentralized AND solves the issues we’ve discussed that blockchain alone can’t. While integrating these technologies with existing blockchain ecosystems isn’t unheard of, they can solve the blockchain problems discussed above on their own. Cumulatively, they offer a way to efficiently address data, store and replicate data in a decentralized manner, and maintain identity.

Custody

A purely blockchain-focused wallet, as discussed previously, is simply a key pairing with an address that references ownership over transactions on the blockchain; it doesn’t actually manage local or cloud storage. This means that any on-chain data, while decentralized, isn’t user-custodied unless the user sets up a node on the network in which they’re participating, something that in a blockchain context happens in a separate app from the wallet.

In Web5, however, wallets are addressed via DIDs and are able to store data via DWeb Nodes - which can live on device and replicate storage remotely - rather than relying on storing everything on-chain or on the wallet developer’s cloud storage option. Because Web5 wallets are more robust than simply a wallet that connects to a blockchain, your Web5 wallet could theoretically manage running a blockchain node for you in addition to performing more standard “wallet” tasks like you might expect.

Additionally, because DWNs are able to replicate data across instances on different devices, they make self-custodying your data across devices much easier than with a blockchain wallet. Whereas blockchain wallets require re-creating the private key pairings on another device, which can pose a security risk, DIDs offer ways to easily and securely port your identity between devices to replicate data and maintain a consistent user experience. You can imagine how great this is in the case of a photo storage app backed by a DWN that replicates data across your devices and makes sign-in on those devices easy!

Blockchain wallets and Web5 wallets aren’t mutually exclusive (Web5 wallets can very well interact with blockchains, and blockchain wallets may use concepts like DIDs and VCs). What matters is that when users take advantage of wallet apps using DIDs, DWNs, and VCs, they can self-custody all of their off-chain data, and even their on-chain data should they be able to run a node locally.

File Storage

Ledgers like the blockchain don’t make for performant databases in a lot of use cases, which is why DWNs are a breakthrough in combining decentralized web technologies with centralized web performance. While blockchains require ledger consensus and redundant storage to hold any type of data, DWNs are replicable nodes that can run any of the traditionally performant database technologies you’re used to - think SQL, MongoDB, etc. - without being tied to a centralized server.

As mentioned in our discussion about custody, you could run a DWN on your laptop, your phone, on your own server, or on rented server space from a cloud provider, and all of them can be synchronized to provide the kind of redundancy that we love about blockchain. As a result, DWNs are able to solve the problems of large file storage and off-chain storage.
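To make this concrete, here is a minimal sketch using the @web5/api SDK to write a record to a local DWN and sync it to the user's remote nodes. The API surface reflects the SDK at the time of writing; check the Web5 docs for the current shape:

```typescript
import { Web5 } from "@web5/api";

// Connect creates (or loads) a DID and a local DWN for the user.
const { web5, did: userDid } = await Web5.connect();

// Write a record to the local DWN -- no blockchain, no central server.
const { record } = await web5.dwn.records.create({
  data: "My first decentralized note",
  message: { dataFormat: "text/plain" },
});

// Replicate the record to the user's remote DWN instances.
if (record) {
  await record.send(userDid);
  console.log("Stored and synced record:", record.id, await record.data.text());
}
```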

Final Thoughts

A performant decentralized web that offers users custody of their own data has been a tantalizing dream for almost two decades, but in that same timeframe blockchain technology has proven not to be the way to make that vision a reality. If you find yourself or your customers valuing decentralization, privacy, flexibility, and self-custodial apps, then Web5 provides the framework to achieve exactly those goals. Your Web5 app may very well leverage blockchain technology: a participating financial institution on the tbDEX protocol we’re developing is a great example of an app that uses Web5 tech to connect to blockchains, but there are lots of ways to build dApps with Web5.

If you want to try building your own decentralized applications, go ahead and check out our Web5 SDKs, join our Discord community, and connect with us on GitHub!

Monday, 22. April 2024

1Kosmos BlockID

The Recent Change Healthcare Ransomware Attack: Lessons Learned and How to Prevent Similar Breaches

The recent ransomware attack on Change Healthcare, a major healthcare technology company, has once again highlighted the critical importance of robust identity verification and authentication measures in safeguarding sensitive data and systems. While the details of the attack are still unfolding, the preliminary investigation has revealed that the root cause was the absence of multi-factor …

The recent ransomware attack on Change Healthcare, a major healthcare technology company, has once again highlighted the critical importance of robust identity verification and authentication measures in safeguarding sensitive data and systems. While the details of the attack are still unfolding, the preliminary investigation has revealed that the root cause was the absence of multi-factor authentication on a remote access application used by Change Healthcare’s staff.

This lapse in security best practices allowed cybercriminals to compromise employee credentials and gain unauthorized access to the company’s networks. The attackers then spent nine days lurking within the systems before launching the ransomware attack, which disrupted critical healthcare services across the United States. To be fair, it is hard to say whether properly implemented MFA would even have helped, as bad actors are getting increasingly good at attacks like SIM swapping, social engineering, and stealing OTPs, and these attacks have seen a dramatic increase in the last few years.

The Change Healthcare incident is a sobering reminder that even large, well-established organizations can fall victim to cyber threats when they fail to implement the necessary identity and access controls. In an era where remote work and cloud-based services have become the norm, the attack surface for malicious actors has expanded significantly, making it crucial for organizations to re-evaluate their security posture and adopt robust identity management solutions.

This is where the 1Kosmos platform can play a pivotal role in preventing similar breaches. Our solution offers a comprehensive approach to identity verification and authentication, addressing the key vulnerabilities that were exploited in the Change Healthcare attack.

Robust Identity Proofing

The 1Kosmos platform provides a secure and frictionless way to verify user identities, ensuring that only legitimate individuals can access sensitive systems and data. Our identity proofing capabilities, which are certified to NIST 800-63-3 and other industry standards, can detect and prevent the use of stolen or synthetic identities, a common tactic employed by cybercriminals.

Passwordless Multi-Factor Authentication

The absence of multi-factor authentication was a critical factor in the Change Healthcare breach. The 1Kosmos platform offers a range of passwordless authentication methods, including biometrics, push notifications, and hardware tokens, to provide a robust and user-friendly way to verify user identities and prevent unauthorized access.

Decentralized Identity Management

Unlike traditional identity management systems, the 1Kosmos platform leverages a private, permissioned blockchain to store and manage user identities. This decentralized approach eliminates the risk of a centralized “honeypot” of information that can be targeted by hackers, as seen in the Change Healthcare incident.

Audit Trail and Compliance

The 1Kosmos platform maintains a detailed, immutable audit trail of all identity-related events, including login attempts, access requests, and data sharing activities. This level of visibility and transparency not only helps organizations detect and respond to security incidents but also ensures compliance with industry regulations and standards.

Benefits of a Single Authentication Platform

The lessons learned from the Change Healthcare ransomware attack serve as a stark reminder that even the most well-established organizations are vulnerable to cyber threats when they fail to prioritize identity and access management. By adopting a comprehensive identity management solution like the 1Kosmos platform, organizations can significantly reduce the risk of similar breaches and ensure the security and integrity of their sensitive data and systems.

The post The Recent Change Healthcare Ransomware Attack: Lessons Learned and How to Prevent Similar Breaches appeared first on 1Kosmos.


liminal (was OWI)

Rethinking Identity Management: Solutions for a Secure Digital Future

In this episode of State of Identity, host Cameron D’Ambrosi welcomes Alex Bovee, co-founder and CEO of ConductorOne, to explore the evolving challenges and solutions in the digital identity space. Learn what’s driving the rise of identity-based security risks and how ConductorOne is tackling these issues through centralized identity governance and access controls. The discussion […]

In this episode of State of Identity, host Cameron D’Ambrosi welcomes Alex Bovee, co-founder and CEO of ConductorOne, to explore the evolving challenges and solutions in the digital identity space. Learn what’s driving the rise of identity-based security risks and how ConductorOne is tackling these issues through centralized identity governance and access controls. The discussion explores various aspects of identity management, such as access control, multifactor authentication, and the challenge of balancing security with productivity. It provides perspectives on how businesses can manage identity-related risks and improve user experience.

The post Rethinking Identity Management: Solutions for a Secure Digital Future appeared first on Liminal.co.


Northern Block

Problems Worth Solving in SSI Land (with Daniel Hardman)

Daniel Hardman challenges traditional separation of personal & organizational identity. Explore managing roles, relationships & trust in SSI systems. The post Problems Worth Solving in SSI Land (with Daniel Hardman) appeared first on Northern Block | Self Sovereign Identity Solution Provider.

🎥 Watch this Episode on YouTube 🎥
🎧   Listen to this Episode On Spotify   🎧
🎧   Listen to this Episode On Apple Podcasts   🎧

About Podcast Episode

Is there truly a clear separation between personal and organizational identity? This fundamental question lies at the heart of our most recent conversation on The SSI Orbit podcast between host Mathieu Glaude and identity expert Daniel Hardman.

In this conversation, you’ll learn:

Why the traditional separation of personal and organizational identity is a flawed mental model that limits our understanding of identity
The importance of recognizing the intertwined nature of individual and organizational identities in the enterprise context
Strategies for managing the complexities of roles, relationships, and identity facets within organizations
Insights into empowering individuals and enabling trust through effective identity management approaches
Perspectives on key challenges like managing identifiers, versioning, and building trust within self-sovereign identity systems

Don’t miss out on this opportunity to gain valuable insights and expand your knowledge. Tune in now and start exploring the possibilities!

Key Insights:

The limitations of the term “governance” in identity systems and the need for a more empowering, user-centric approach
The inextricable link between personal and organizational identity, and the importance of understanding roles, relationships, and context
The challenge of managing the proliferation of identifiers and the need for software-driven solutions to help users navigate them
The critical role of versioning and historical record-keeping in identity management, especially when analyzing trust and accountability

Strategies:

Leveraging the “who, role, and context” framework to better manage identities and their associated aliases
Exploring the use of versioning and metadata to track the evolution of identities over time
Developing software that helps users understand and manage their identifiers, rather than relying solely on credentials or wallets

Chapters:

00:00 Introduction and Learnings in SSI
03:01 Reframing Governance as Empowerment
08:42 The Intertangled Nature of Organizational and Individual Identity
15:30 Managing Relationships and Roles in Organizational Identity
25:19 Versioning and Trust in Organizational Identity

Additional resources:

Episode Transcript
Big Desks and Little People
KERI – Key Event Receipt Infrastructure
DIDComm Messaging v2.1 Editor’s Draft

About Guest

Daniel Hardman is the CTO and CISO at Provenant and a Hyperledger Global Ambassador. With an M.A. in computational linguistics, an M.B.A., and a cybersecurity specialization, he brings multidisciplinary expertise to the identity space. Hardman has worked in research at the intersection of cybersecurity and machine learning, led development teams in enterprise software, and is a prominent contributor to several key specifications driving self-sovereign identity, including the Hyperledger Aries RFCs, W3C’s Verifiable Credentials, and Decentralized Identifiers. His diverse background and deep involvement in shaping industry standards offer unique perspectives on the complexities of identity management, especially within organizational contexts.

LinkedIn: linkedin.com/in/danielhardman/

  The post Problems Worth Solving in SSI Land (with Daniel Hardman) appeared first on Northern Block | Self Sovereign Identity Solution Provider.



OWI - State of Identity

Rethinking Identity Management: Solutions for a Secure Digital Future

In this episode of State of Identity, host Cameron D’Ambrosi welcomes Alex Bovee, co-founder and CEO of ConductorOne to explore the evolving challenges and solutions in the digital identity space. Learn what’s driving the rise of identity-based security risks and how ConductorOne is tackling these issues through centralized identity governance and access controls. The conversation focuses on needi

In this episode of State of Identity, host Cameron D’Ambrosi welcomes Alex Bovee, co-founder and CEO of ConductorOne, to explore the evolving challenges and solutions in the digital identity space. Learn what’s driving the rise of identity-based security risks and how ConductorOne is tackling these issues through centralized identity governance and access controls. The conversation focuses on the need for a more flexible approach to identity management, addressing common concerns like access control, multifactor authentication, and the ongoing struggle to balance security with productivity. It also offers insights on how businesses can better manage identity-related risks while ensuring a seamless user experience.

 


Entrust

Biometrics: A Flash Point in AI Regulation

According to proprietary verification data from Onfido (now a part of Entrust), deepfakes rose 3000%... The post Biometrics: A Flash Point in AI Regulation appeared first on Entrust Blog.

According to proprietary verification data from Onfido (now a part of Entrust), deepfakes rose 3000% from 2022 to 2023. And with the increasing availability of deepfake software and improvements in AI, the scale and sophistication of these attacks are expected to further intensify. As it becomes more difficult to discern legitimate identities from deepfakes, AI-enabled biometrics can offer consumers, citizens, and organizations some much-needed protection from bad actors, while also improving overall convenience and experience. Indeed, AI-enabled biometrics has ushered in a new era for verification and authentication. So, with such promise, why is biometrics such a flash point in AI regulatory discussions?

Like the proverb that warns “the road to Hell is paved with good intentions,” the unchecked development and use of AI-enabled biometrics may have unintended – even Orwellian – consequences. The Federal Trade Commission (FTC) has warned that the use of AI-enabled biometrics comes with significant privacy and data concerns, along with the potential for increased bias and discrimination. The unchecked use of biometric data by law enforcement and other government agencies could also infringe on civil rights. In some countries, AI and biometrics are already being used for mass surveillance and predictive policing, which should alarm any citizen.

The very existence of mass databases of biometric data is sure to attract the attention of all types of malicious actors, including nation-state attackers. In a critical election year with close to half the world’s population headed to the polls, biometric data is already being used to create deepfake video and audio recordings of political candidates, swaying voters and threatening the democratic process. To help defend against these and other concerns, the pending EU Artificial Intelligence Act has banned certain AI applications, including biometric categorization and identification systems based on sensitive characteristics and the untargeted scraping of facial images from the web or CCTV footage.

The onus is on us … all of us

Legal obligations aside, biometric solution vendors and users have a duty of care to humanity to help promote the responsible development and use of AI. Maintaining transparency and consent in the collection and use of biometric data at all times is crucial. The use of diverse training data for AI models and regular audits to help mitigate the risk of unconscious bias are also vital safeguards. Still another is to adopt a Zero Trust strategy for the collection, storage, use, and transmission of biometric data. After all, you can’t replace your palm print or facial ID like you could a compromised credit card. The onus is on biometric vendors and users to establish clear policies for the collection, use, and storage of biometric data and to provide employees with regular training on how to use such solutions and how to recognize potential security threats.

It’s a brave new world. AI-generated deepfakes and AI-enabled biometrics are here to stay. Listen to our podcast episode on this topic for more information on how to best navigate the flash points in AI and biometrics.

The post Biometrics: A Flash Point in AI Regulation appeared first on Entrust Blog.


Microsoft Entra (Azure AD) Blog

Enforce least privilege for Entra ID company branding with the new organizational branding role

Hello friends,      I’m pleased to announce General Availability (GA) of the organizational branding role for Microsoft Entra ID company branding.    This new role is part of our ongoing efforts to implement Zero Trust network access by enforcing the principle of least privilege for users when customizing their authentication user experience (UX) via Entra ID company br

Hello friends,

I’m pleased to announce General Availability (GA) of the organizational branding role for Microsoft Entra ID company branding.

This new role is part of our ongoing efforts to implement Zero Trust network access by enforcing the principle of least privilege for users when customizing their authentication user experience (UX) via Entra ID company branding.

Previously, users wanting to configure Entra ID company branding required the Global Admin role, which has sweeping privileges far beyond what’s necessary for configuring company branding.

The new organizational branding role limits its privileges to the configuration of Entra ID company branding, significantly improving security and reducing the attack surface associated with its configuration.

To assign the role to a user, follow these steps:
1. Log on to Microsoft Entra ID and select Users.

2. Select and open the user to assign the organizational branding role.

3. Select Assigned roles and then Add assignments.

4. Select the Organizational Branding Administrator role and assign it to the user.
Once the settings are applied, the user will be able to configure the authentication UX via Entra ID Company Branding.  
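If you prefer to script the assignment rather than click through the portal, the same role can be granted through the Microsoft Graph role management API. Below is a minimal TypeScript sketch, assuming you already hold a Graph access token with the RoleManagement.ReadWrite.Directory permission; the role definition ID is a placeholder you would look up in your own tenant:

```typescript
// Sketch: assign a directory role via the Microsoft Graph v1.0 roleManagement API.
async function assignBrandingRole(accessToken: string, userObjectId: string): Promise<void> {
  // Placeholder: look up the Organizational Branding Administrator definition
  // under /roleManagement/directory/roleDefinitions in your tenant.
  const roleDefinitionId = "<organizational-branding-administrator-role-id>";

  const response = await fetch(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
        roleDefinitionId,
        principalId: userObjectId, // the user receiving the role
        directoryScopeId: "/", // tenant-wide scope
      }),
    }
  );
  if (!response.ok) throw new Error(`Role assignment failed: ${response.status}`);
}
```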

 

Learn more about how to configure your company branding and create a consistent sign-in experience for your users.

James Mantu
Sr. Product Manager, Microsoft identity
LinkedIn: jamesmantu | LinkedIn

Learn more about Microsoft Entra: 

Related Articles:

Add company branding to your organization's sign-in page - Microsoft Entra | Microsoft Learn
See recent Microsoft Entra blogs
Dive into Microsoft Entra technical documentation
Join the conversation on the Microsoft Entra discussion space and Twitter
Learn more about Microsoft Security

Ontology

Ontology’s $10 Million Boost for Decentralized Identity Innovation

Hello, Ontology community! 🤗 We’re thrilled to announce a massive $10 million fund aimed at fueling the innovation and adoption of Decentralized Identity (DID) through ONT & ONG tokens. This initiative is designed to empower, educate, and evolve our ecosystem in exciting new ways! 🚀 🎓 Empowering Education on DID We’re committed to spreading knowledge about the power of decentralize

Hello, Ontology community! 🤗 We’re thrilled to announce a massive $10 million fund aimed at fueling the innovation and adoption of Decentralized Identity (DID) through ONT & ONG tokens. This initiative is designed to empower, educate, and evolve our ecosystem in exciting new ways! 🚀

🎓 Empowering Education on DID

We’re committed to spreading knowledge about the power of decentralized identity. We’re calling all creatives and educators to help us demystify the world of DID. Whether you’re a writer, a filmmaker, or an event organizer, there’s a place for you to contribute! We’re supporting all kinds of content to help everyone from beginners to experts understand and utilize DID more effectively.

🛠️ Step-by-Step Tutorials on ONT ID

Dive deep into our flagship ONT ID with tutorials that range from beginner guides to advanced technical manuals. These comprehensive resources are designed to make it easy for everyone to understand and implement ONT ID, enhancing both user and developer experiences.

🔗 Integration and Partnership Opportunities

We’re looking to expand the reach of ONT ID by integrating it across various platforms and forming strategic partnerships. If you have a project that could benefit from seamless identity verification or if you’re looking to innovate within your current platform, we want to support your journey.

🌟 Innovate with ONT ID

Got a groundbreaking idea? We’re here to help turn it into reality. Projects that utilize ONT ID in innovative ways are eligible for funding to bring fresh and sustainable solutions to the market. Let’s build the future of digital identity together!

🤝 Community Involvement and Support

Your voice matters! Community members have a say in project selection, and successful applicants will receive milestone-based funding along with continuous support in idea incubation, technical resources, and market validation.

📣 Get Involved!

This is your chance to make a mark in the digital identity landscape. We encourage everyone with innovative ideas or projects to apply. Let’s use this opportunity to shape the future of decentralized identity. Submit your proposals HERE and join us in this exciting journey!

🔗 Stay Connected Keep up with the latest from Ontology and share your thoughts and feedback through our social media channels. Your insights are crucial for our continuous innovation and growth. Follow us at linktr.ee/OntologyNetwork 🌟

🌐 Ontology’s $10 Million Boost for Decentralized Identity Innovation 🌐 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Pekin Insurance Continues to Deliver Highest Levels of Security with Optimal Experience | Ping Identity

Since 1921, Pekin Insurance has been providing its customers with the best possible service in some of the most difficult points in their lives. This philosophy is infused in Pekin Insurance’s Enterprise Security team. I recently had the pleasure of chatting with Ray Lewis, Director of Enterprise Security at Pekin Insurance, as he walked me through how the company is using identity to help de

Since 1921, Pekin Insurance has been providing its customers with the best possible service in some of the most difficult points in their lives. This philosophy is infused in Pekin Insurance’s Enterprise Security team. I recently had the pleasure of chatting with Ray Lewis, Director of Enterprise Security at Pekin Insurance, as he walked me through how the company is using identity to help deliver a secure yet pleasant experience to its employees and agents, with the experience soon to be offered to customers, as well.

 

Ray has been with Pekin Insurance since 2019. “I’ve been in the technology field for nearly 30 years and insurance for over six. Pekin Insurance is just simply one of the best companies I’ve worked for. It has a terrific culture–it’s very community-oriented and does a lot for the city of Pekin,” Ray said. “And everyone is working toward the same mission: We are all always trying to help people and insure people, even and especially, at some of the worst times in their lives.” Indeed, the company’s motto is Beyond the expected®, while offering financial protection for autos, homes, lives, and businesses in 22 states. In order to accomplish these goals, Pekin Insurance is leveraging identity to empower its 700+ employees and more than 7,000 agents.

Sunday, 21. April 2024

KuppingerCole

Analyst Chat #211: From Founding to Future - Celebrating 20 Years of KuppingerCole Analysts

Matthias celebrates the 20th anniversary of KuppingerCole Analysts by interviewing the three of the members of the first hour: Martin Kuppinger, Joerg Resch, and Alexei Balaganski. They discuss the early days of the company, the evolution of their work, and the milestones they have achieved. They also talk about the importance of collaboration, the future of KuppingerCole, and their contributions

Matthias celebrates the 20th anniversary of KuppingerCole Analysts by interviewing three members of the first hour: Martin Kuppinger, Joerg Resch, and Alexei Balaganski. They discuss the early days of the company, the evolution of their work, and the milestones they have achieved. They also talk about the importance of collaboration, the future of KuppingerCole, and their contributions to the industry.




Northern Block

A Summary of Internet Identity Workshop #38

Highlights from IIW38, which took place between April 16th and April 18th at the Computer History Museum in Mountain View, California. The post A Summary of Internet Identity Workshop #38 appeared first on Northern Block | Self Sovereign Identity Solution Provider.

(Cover image courtesy of the Decentralized Identity Foundation)

Below are my personal highlights from the Internet Identity Workshop #38, which took place between April 16th and April 18th at the Computer History Museum in Mountain View, California.


#1 – Yet another new DID Method?

Image courtesy of James Monaghan

On day one, I participated in a session hosted by Stephen Curran from the BC government, where we discussed the new DID method they’ve been working on: did:tdw.

It’s essentially an extension of did:web, drawing on learnings from the Trust over IP did:webs initiative but simplifying it by removing some of the components.

One of the interesting aspects is their ability to incorporate historicity of DID Documents without relying on ledgers. They’ve also developed a linked verifiable presentation (i.e. when I resolve the DID I can get a proof), pre-rotation capability, and portability, which are crucial features for real business applications of DID Web.

They view this method as particularly suitable for public organizations and have indicated that similar implementations could be applied to other DID methods. They already have some running code for this, which is promising.

This session was significant for us because these business features are essential as we deploy DIDs in production with our customers. It also reinforced how our work on High Assurance DID with DNS complements theirs, adding an extra layer of security and integrity. I’m excited about the potential of a proof of concept where we can see both the TDW and the High Assurance DID Web in action together.
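Because did:tdw builds on did:web, it is worth recalling how little machinery did:web itself requires. Here is a minimal TypeScript sketch of the did:web resolution rule (identifier-to-URL transformation plus an HTTPS fetch); did:tdw layers historicity, pre-rotation, and portability on top of this kind of simple retrieval:

```typescript
// Resolve a did:web identifier to its DID Document per the did:web method spec.
async function resolveDidWeb(did: string): Promise<unknown> {
  if (!did.startsWith("did:web:")) throw new Error("not a did:web identifier");

  // did:web:example.com             -> https://example.com/.well-known/did.json
  // did:web:example.com:users:alice -> https://example.com/users/alice/did.json
  const parts = did.slice("did:web:".length).split(":").map(decodeURIComponent);
  const host = parts[0];
  const path = parts.length > 1 ? `/${parts.slice(1).join("/")}` : "/.well-known";

  const response = await fetch(`https://${host}${path}/did.json`);
  if (!response.ok) throw new Error(`resolution failed: ${response.status}`);
  return response.json();
}

// Example: resolveDidWeb("did:web:example.com").then(console.log);
```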


#2 – Bootstrapping DIDComm connections through OpenID4VC flows

I attended a session by Sam Curren who represented some recent work done by IDUnion to demonstrate how a DIDComm connection could be bootstrapped through an OpenID4VC flow, in a very light touch manner.

By leveraging OAuth 2.0 authentication in these flows, they’ve developed a method to pass a DIDComm connection request seamlessly. This is particularly interesting because the European Union has decided to use OpenID for verifiable credentials in issuing high assurance government digital credentials, leading to widespread adoption.

However, OpenID for verifiable credentials has limitations that DIDComm can address. DIDComm serves as a bilateral messaging platform between entities, enabling tasks like credential revocation notices that OpenID for verifiable credentials cannot handle. DIDComm also offers greater flexibility and modularity, allowing for secure messaging and interaction with various protocols.

IDUnion in Germany aims to leverage the OpenID for VC specification to establish DIDComm connections between issuers and holders, enabling a broader range of functionalities. They have running code and a demo for this, which we plan to implement at Northern Block in the near future.

The work is under discussion for transfer to DIF for further development.
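The session did not publish a wire format, so purely as an illustration, here is one hypothetical shape the bootstrap could take: a standard OpenID4VCI credential offer carrying a DIDComm out-of-band invitation in an invented extension parameter. Only the surrounding offer and invitation structures follow their respective specs; the didcomm_invitation field is made up for this sketch:

```typescript
// Hypothetical: an OpenID4VCI credential offer carrying a DIDComm OOB invitation.
// The "didcomm_invitation" field is illustrative, not part of either spec.
const credentialOffer = {
  credential_issuer: "https://issuer.example.com",
  credential_configuration_ids: ["EmployeeCredential"],
  grants: {
    "urn:ietf:params:oauth:grant-type:pre-authorized_code": {
      "pre-authorized_code": "adhjhdjajkdkhjhdj",
    },
  },
  // Invented extension: lets the wallet open a DIDComm channel for
  // follow-up messages (e.g., revocation notices) after issuance.
  didcomm_invitation: {
    type: "https://didcomm.org/out-of-band/2.0/invitation",
    id: "f137e0db-db7b-4776-9530-83c808a34a42",
    from: "did:web:issuer.example.com",
    body: { goal_code: "issue-vc-follow-up", accept: ["didcomm/v2"] },
  },
};

// A wallet that understands the extension can accept the credential via
// OpenID4VCI and, in the same flow, establish a DIDComm connection to "from".
console.log(JSON.stringify(credentialOffer, null, 2));
```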

I also found out about where to get DIDComm swag!


#3 – Apple and Google’s Cross Platform Demo of Digital Credential API

In the first session of day two, representatives from both Apple and Google held a demo to showcase interoperability between Apple Wallet, Google Wallet, and a browser, drawing a large crowd. Demonstrations by major platform players like these mark significant progress in the adoption cycle of a new industry.

My main takeaway is that their demonstration questions the value of third-party wallets. The trend is that government-issued high-assurance citizen credentials are increasingly issued into government-based wallets, both in North America and Europe. While government-provided wallets may be the norm for high-assurance government-issued credentials, for other types of identity credentials, direct exchange from the device via a third-party application seems to offer the best user experience. This raises questions about the future role of vendor wallets, particularly for personal use or specific utility-focused applications.
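For reference, the browser surface demoed in this session is the experimental Digital Credentials API. Its exact shape was still in flux at the time of writing, so treat this TypeScript sketch as an approximation of the origin-trial API rather than a stable contract:

```typescript
// Approximate sketch of the experimental Digital Credentials API (shape in flux).
// "navigator.identity" is not yet in TypeScript's DOM types, hence the cast.
async function requestMobileDriversLicense(): Promise<void> {
  const identity = (navigator as any).identity;
  if (!identity?.get) {
    console.log("Digital Credentials API not available in this browser");
    return;
  }
  const credential = await identity.get({
    digital: {
      providers: [
        {
          protocol: "openid4vp", // verifier speaks OpenID4VP to the wallet
          request: JSON.stringify({
            // Presentation request details (claims, nonce, etc.) go here.
          }),
        },
      ],
    },
  });
  console.log("Wallet response:", credential);
}
```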


#4 – Content Authenticity 201: Identity Assertion Technical Working 

Content authenticity is a pressing real-world issue, especially with the rise of generative AI, which blurs the lines between human-generated and machine-generated content. This challenge has been exacerbated by the difficulty in tracing the origin of content, leading to concerns about integrity, manipulation, and misinformation. The Content Authenticity Initiative aims to address this problem by bringing together industry partners, including hardware and software providers, as well as media outlets, to establish standards for tagging media. Led by Eric Scouten, founder of the initiative from Adobe, they have successfully developed a standard for tagging media. However, questions remain regarding how to manage identity behind content, which varies depending on the type of content creator involved. Whether it’s media outlets or individual creators, maintaining integrity in the provenance of media assets requires trust in the identity process. Discussions around creator assertions and identity management are ongoing, with active participation encouraged through the initiative’s working group discussions. For those interested, here’s a link to a podcast where Eric Scouten and I discuss these topics, as well as a link to the Creator Assertions Working Group homepage (here) for further engagement.


#5 – Trust Registry FACE OFF!! 

I co-hosted a session with Sam Curren, Andor Kesselman, and Alex Tweeddale on trust registries. The aim was to explore various projects in this space and identify opportunities for convergence or accelerated development. The conversation began with an overview of how X.509 certificates are currently used on the web to establish trust in secure connections. I then introduced Northern Block’s trust registry solution, which offers features to enhance integrity in the trust registry process (https://trustregistry.nborbit.ca/).

We then delved into different standards:

EBSI Trust Chains: This standard tracks “Verifiable Accreditations” and is used by cheqd. It involves a governing authority for the ecosystem with a DID on a blockchain, tracking DIDs authorized for specific actions.

Trust over IP Trust Registry Protocol v2: Version 2 is under implementor’s review as of April 2024. It offers a RESTful API with a query API standardizing how to query which entities are authorized to do what in which context.

OpenID Federation: This standard, particularly OpenID Federation 1.0, is already used in systems worldwide, including university networks and Brazil’s open banking. It allows each entity to provide trust lists, including common trust anchors with other lists.

Credential Trust Establishment 1.0: This standard, part of the DIF Trust Establishment specification, is a data model rather than a protocol or interaction model. It involves creating a document and hosting it behind a URI, with no centralization. It allows roles for each participant and is complementary to VC-based decentralized trust.

The session was dynamic, with significant interest, especially regarding roots of trusts, a topic gaining traction at the Internet Identity Workshop. We’re excited about our ongoing work in this field.
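Despite their differences, all four approaches ultimately answer the same question: is entity X authorized to perform action Y in context Z? Here is a hypothetical TypeScript sketch of such a query against a REST-style registry; the endpoint and response shape are invented for illustration, so consult the ToIP Trust Registry Protocol v2 drafts for the real API:

```typescript
// Hypothetical trust-registry query; endpoint and response shape are invented.
interface AuthorizationResult {
  authorized: boolean;
  governanceFramework?: string; // which ecosystem rules the answer is based on
}

async function isAuthorized(
  registryBaseUrl: string,
  entityDid: string,
  action: string, // e.g. "issue:EmployeeCredential"
  context: string // e.g. an ecosystem or governance framework identifier
): Promise<AuthorizationResult> {
  const url =
    `${registryBaseUrl}/entities/${encodeURIComponent(entityDid)}/authorization` +
    `?action=${encodeURIComponent(action)}&context=${encodeURIComponent(context)}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`registry query failed: ${response.status}`);
  return response.json();
}
```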


#6 – High-Assurance did:web Using DNS

I hosted a session to showcase our work with the High Assurance did:web using DNS. Jesse Carter from CIRA and Tim Bouma from the Digital Governance Council of Canada joined me in the presentation.

We demonstrated to the group that, without new standards or specifications, but simply by leveraging existing internet infrastructure, we could significantly enhance the assurance behind a decentralized identifier.

The feedback we received was positive, and all of our presentations so far have been well-received. We believe that organizations with robust operational practices around DNS infrastructure can integrate the security and integrity of DNS into decentralized identifiers effectively. This approach should align well with the planned proof-of-concept using the HA DID Spec in conjunction with did:tdw’s verifiable presentation feature, offering both technical and human trust in one process.

Slides | Draft RFC
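The core idea is easy to sketch: publish a DNS record that attests to the DID, so a verifier can cross-check a resolved did:web against the domain's DNS (ideally DNSSEC-signed). The simplified TypeScript sketch below uses DNS-over-HTTPS; the _did. record name and TXT payload are assumptions made for illustration, so see the draft RFC above for the actual record types and semantics:

```typescript
// Simplified illustration: cross-check a did:web against a DNS TXT record.
// The "_did." label and TXT payload are assumptions, not the draft RFC's exact scheme.
async function dnsConfirmsDid(did: string): Promise<boolean> {
  const domain = did.replace("did:web:", "").split(":")[0];

  // Query TXT records via Cloudflare's DNS-over-HTTPS JSON API.
  const response = await fetch(
    `https://cloudflare-dns.com/dns-query?name=_did.${domain}&type=TXT`,
    { headers: { accept: "application/dns-json" } }
  );
  const result = await response.json();

  // Each answer's data is a quoted TXT string; check whether any record names the DID.
  const answers: { data: string }[] = result.Answer ?? [];
  return answers.some((a) => a.data.replace(/"/g, "") === did);
}

// Example: dnsConfirmsDid("did:web:example.ca").then(console.log);
```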

#7 – AnonCreds in W3C VCDM Format

I attended an engaging session led by Stephen Curran from the British Columbia government, discussing their project to align AnonCreds credentials with the W3C verifiable credential data model standard. It was insightful to learn about British Columbia’s commitment to preserving privacy by leveraging AnonCreds, particularly highlighting the unlinkability feature that prevents the generation of super cookies. While acknowledging concerns about potential correlation of unique identifiers in other digital identity programs globally, Stephen addressed market friction from those seeking W3C-aligned verifiable credentials. He outlined the innovative steps taken to ensure compatibility, including leveraging their procurement program to fund multiple companies for various aspects of the project, including implementations. Once again, the British Columbia Government showcased remarkable innovation in the Digital Trust space.

Slides: https://bit.ly/IIWAnonCredsVCDM

#8 – A Bridge to the Future: Connecting X.509 and DIDs/VIDs

Diagram with X.509 and DID/VC comparison

I participated in a great discussion about the potential connection between X.509 certificates and decentralized identifiers (DIDs). Drummond Reed provided an exceptional overview of what DIDs entail, offering the clearest explanation I’ve encountered. The genesis of this discussion stemmed from the Content Authenticity Initiative’s endeavour to establish a trust infrastructure for content providers, with a notable push for X.509 certificates due to existing investments by large enterprises. We delved into how X.509 certificates are utilized by organizations like the CA/Browser Forum and browsers, as well as their role in trust registries. However, a fundamental distinction emerged between the two: X.509 certificates are intricately woven into a governance process with a one-to-one correspondence, while DIDs can be self-asserted and are not necessarily tied to specific governance structures. This contrast prompted exploration into leveraging current X.509 processes to facilitate linkage with DIDs, enabling broader utility within the same context. Overall, the discussion shed light on the interconnectedness of roots of trust, trust registries, and the evolving landscape of digital trust.

#9 – State of eIDAS  + German eIDAS Wallet Challenge

Screenshot taken from deck linked below

In my final session of note before heading to the airport on day three, we engaged in a discussion regarding the state of eIDAS, alongside updates on Germany’s eIDAS wallet consultation project and challenge. While the discussion didn’t introduce anything particularly groundbreaking, the notable turnout underscored the widespread interest in developments within the European digital identity landscape. Throughout IIW, numerous sessions delved into the technical specifications mandated by the European Union’s architectural reference framework to align with eIDAS 2.0. For those interested, I’ve participated in several podcasts covering this topic (1, 2, 3). The ongoing momentum surrounding eIDAS 2.0 promises to be a focal point in future IIWs.

Slides

I look very much forward to IIW39 this October, 2024!

–end–

The post A Summary of Internet Identity Workshop #38 appeared first on Northern Block | Self Sovereign Identity Solution Provider.

The post A Summary of Internet Identity Workshop #38 appeared first on Northern Block | Self Sovereign Identity Solution Provider.

Friday, 19. April 2024

Finema

vLEI Demystified Part 3: QVI Qualification Program

Authors: Yanisa Sunanchaiyakarn & Nuttawut Kongsuwan, Finema Co. Ltd. vLEI Demystified Series: Part1: Comprehensive Overview Part 2: Identity Verification Part 3: QVI Qualification Program This blog is the third part of the vLEI Demystified series. The previous two, vLEI Demystified Part 1: Comprehensive Overview and vLEI Demystified Part 2: Identity Verification, have explained th

Authors: Yanisa Sunanchaiyakarn & Nuttawut Kongsuwan, Finema Co. Ltd.

vLEI Demystified Series:

Part 1: Comprehensive Overview
Part 2: Identity Verification
Part 3: QVI Qualification Program

This blog is the third part of the vLEI Demystified series. The previous two, vLEI Demystified Part 1: Comprehensive Overview and vLEI Demystified Part 2: Identity Verification, have explained the foundation of the pioneering verifiable Legal Entity Identifier (vLEI) ecosystem as well as its robust Identity Verification procedures. In this part, we will share with you our journey through the Qualification of vLEI Issuers, called Qualified vLEI Issuers (QVIs), including the requirements and obligations that QVIs have to fulfill once they are authorized by GLEIF to perform their roles in the ecosystem.

The Qualification of vLEI Issuers is the evaluation process conducted by the Global Legal Entity Identifier Foundation (GLEIF) to assess the suitability of organizations aspiring to serve as Qualified vLEI Issuers within the vLEI ecosystem. GLEIF has established the Qualification Program for all interested organizations, which can either be current LEI Issuers (Local Operating Units: LOUs) or new business partners who wish to explore the emerging vLEI ecosystem. Organizations that complete the Qualification Program under the GLEIF vLEI Ecosystem Governance Framework (EGF) are authorized to perform verification, issuance, and revocation of vLEI credentials for legal entities seeking the credentials and for their representatives.

Photo by Nguyen Dang Hoang Nhu on Unsplash

Step 1: Start the Qualification Program

To kick start the qualification process, the organizations interested in becoming QVIs must first review Appendix 2: vLEI Issuer Qualification Program Manual, which provides an overview of the required Qualification Program, and the vLEI Ecosystem Governance Framework (vLEI EGF) to make sure that they understand how to incorporate the requirements outlined in the framework to their operations. Once they have decided to proceed, the interested organizations may initiate the Qualification Program by sending an email to qualificationrequest@gleif.org along with a Non-Disclosure Agreement (NDA) signed by an authorized signatory of the interested organization.

Unless GLEIF has a means to verify the signatory’s authority by themselves, the interested organization may be required to submit proof of the signatory’s authority. In cases where the NDA signer’s authority is delegated, a power of attorney may also be required.

After GLEIF reviews the qualification request, they will countersign the NDA and organize an introductory meeting with the interested organization, now called a candidate vLEI issuer, to discuss the next step of the Qualification Program.

Step 2: Implement the Qualification Program Requirements

To evaluate if a candidate vLEI issuer has both the financial and technical capabilities to perform the QVI role, the candidate vLEI issuer is required to implement the Qualification Program Requirements, which consist of business and technical qualifications. Throughout this process, the candidate vLEI issuer may schedule up to two meetings with GLEIF to clarify program issues and requirements.

Complete the Qualification Program Checklist

A candidate vLEI issuer is required to complete Appendix 3: vLEI Issuer Qualification Program Checklist to demonstrate that they are capable of actively participating in the vLEI ecosystem as well as being in good financial standing. The checklist and supporting documents can be submitted via online portals provided by GLEIF.

The Qualification Program Checklist is divided into 12 sections from Section A to Section L. The first five sections (Section A to Section E) focus mainly on the business aspects while the last seven sections (Section F to Section L) cover the technical specifications and relevant policies for operating the vLEI issuer Services.

Note: vLEI Issuer Services are all of the services related to the issuance, management, and revocation of vLEI credentials provided by the QVI.

Section A: Contact Details:

This section requires submission of the candidate vLEI issuer’s general information as well as contact details of the key persons involved in the vLEI operation project, namely: (1) Internal Project Manager, (2) Designated Authorized Representative (DAR), (3) Key Contact Operations, and (4) Key Contact Finance

Section B: Entity Structure

This section requires submission of the candidate vLEI issuer’s organization structure, regulatory internal and external audit reports, operational frameworks, and any third-party consultants that the candidate vLEI issuer has engaged with regarding their business and technological evaluation.

Section C: Organization Structure

This section requires submission of the current organization chart for all vLEI operations and a complete list of all relevant third-party service providers that support the vLEI operations.

Section D: Financial Data, Audits & General Governance

This section requires submission of financial and operational conditions of the candidate vLEI issuer’s business, including:

Audited financial statements for the prior year
Financial auditor reports
Formal vLEI Issuer Operation Budget

Section E: Pricing Model

In this section, the candidate vLEI issuer outlines their strategy to generate revenue from the vLEI operations and ensures that they are committed to managing the funding and monetization of the services they plan to offer. This includes the pricing model and business plan regarding the vLEI issuer services.

Section F: vLEI Issuer Services

In this section, the candidate vLEI issuer shall outline their detailed plans and processes related to the issuance and revocation of vLEI credentials, including:

Processes for receiving payments from legal entity (LE) clients
Processes for identity verification in accordance with the vLEI EGF
Processes for validating the legal identity of official organization role (OOR) persons as well as using the GLEIF API to choose the correct OOR code
Processes for calling the vLEI Reporting API for each issuance of LE and OOR vLEI credentials
Processes for verifying the statuses of legal entity clients’ LEIs; the clients must be notified 30 days before their LEI expires (see the sketch after this list)
Processes for revoking all vLEIs issued to a legal entity client whose LEI has lapsed
Processes for monitoring compliance with the Service Level Agreement (Appendix 5)
Processes for monitoring witnesses for erroneous or malicious activities
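The LEI-status item above can lean on GLEIF's public LEI records API. A minimal TypeScript sketch follows; the endpoint and field names reflect GLEIF's published API documentation at the time of writing, so verify them before relying on this:

```typescript
// Check whether a client's LEI has lapsed using GLEIF's public API.
async function leiRegistrationStatus(lei: string): Promise<string> {
  const response = await fetch(`https://api.gleif.org/api/v1/lei-records/${lei}`);
  if (!response.ok) throw new Error(`LEI lookup failed: ${response.status}`);
  const record = await response.json();
  // e.g. "ISSUED" or "LAPSED"; a QVI must revoke vLEIs once the LEI lapses.
  return record.data.attributes.registration.status;
}

// Example: leiRegistrationStatus("<20-character LEI>").then(console.log);
```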

Section G: Records Management

In this section, the candidate vLEI issuer provides their internal Records Management Policy that defines the responsibilities of the personnel related to records retention to ensure that the records management processes are documented, communicated, and supervised.

Section H: Website Requirements

In this section, the candidate QVI’s websites are required to display the following items:

QVI Trustmark
Applications, contracts, and required documents for legal entities to apply for vLEI credentials

Section I: Software

In this section, the candidate vLEI issuer provides their internal policy for the Service Management Process including:

Processes for installing, testing, and approving new software
Processes for identifying, tracking, and correcting software errors/bugs
Processes for managing cryptographic keys
Processes for recovering from compromise

The candidate vLEI issuer must also specify their policies and operations related to the management of private keys and KERI witnesses as follows:

Processes and policies for managing a thresholded multi-signature scheme, where at least 2 out of 3 qualified vLEI issuer authorized representatives (QARs) are required to approve issuance or revocation of vLEI credentials
Processes for operating KERI witnesses, where at least 5 witnesses are required for the vLEI issuer services (both thresholds are illustrated in the sketch after this list)
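Those two floors are straightforward to express as a policy check. The TypeScript sketch below is purely illustrative of the stated thresholds, not actual KERI or vLEI tooling:

```typescript
// Illustrative policy check for the thresholds above; not actual KERI/vLEI tooling.
interface VleiIssuanceEvent {
  qarSignatures: string[]; // signatures from distinct QARs
  witnessReceipts: string[]; // receipts from distinct KERI witnesses
}

const REQUIRED_QAR_SIGNATURES = 2; // at least 2 of the 3 QARs must sign
const REQUIRED_WITNESSES = 5; // at least 5 witnesses must receipt the event

function meetsIssuancePolicy(event: VleiIssuanceEvent): boolean {
  const distinctQars = new Set(event.qarSignatures).size;
  const distinctWitnesses = new Set(event.witnessReceipts).size;
  return distinctQars >= REQUIRED_QAR_SIGNATURES && distinctWitnesses >= REQUIRED_WITNESSES;
}
```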

Section J: Networks and Key Event Receipt Infrastructure (KERI)

In this section, the candidate vLEI issuer describes their network architecture including KERI witnesses and the details of third-party cloud-based services as well as a process monitoring of the vLEI Issuer-related IT infrastructure. The candidate vLEI issuer must also provide the following internal policies:

Disaster Recovery and/or Business Continuity Plan
Backup Policies and Practices
The vetting process for evaluating the reliability of third-party service providers

Section K: Information Security

In this section, the candidate vLEI issuer provides their internal Information Security Policy that includes, for example, formal governance, revision management, personnel training, physical access policies, incident reports, and remediation from security breaches.

Section L: Compliance

QVI candidates must declare that they will abide by the general and legal requirements as a vLEI Issuer by:

Execute a vLEI Issuer Qualification Agreement with GLEIF
Execute a formal contract, of which the template follows the Agreement requirements, with a Legal Entity before the issuance of a vLEI credential
Comply with the requirements for Qualification, the vLEI Ecosystem Governance Framework, and any other applicable legal requirements
Respond to Remediation (if any)

After the candidate vLEI issuer has submitted the qualification program checklist and supporting documents through online portals, GLEIF will review the submission and provide the review results and remediation requirements, if any. Subsequently, the candidate vLEI issuer must respond to the remediation requirements along with corresponding updates to their qualification program checklist and supporting documents.

Undergo Technical Qualification

After the qualification program checklist has been submitted, reviewed, and remediated, the candidate vLEI issuer then proceeds to the technical part of the qualification program. GLEIF and the candidate vLEI issuer then organize a dry run to test that the candidate vLEI issuer is capable of:

Performing OOBI sessions and authentication
Generating and managing a multi-signature group AID
Issuing, verifying, and revoking vLEI credentials

The purpose of the dry run is to make sure that the candidate vLEI issuer has the technical capability to operate as a QVI as well as identify and fix any technical issue that may arise. A dry run may take multiple meeting sessions if required.

After the candidate vLEI issuer completes the dry run, they may proceed to the official technical qualification, which repeats the process during the dry run. vLEI credentials issued during the official session are official and may be used in the vLEI ecosystem.

Step 3: Sign the Qualification Agreement

Once the vLEI candidates have completed all of the business and technical qualification processes, GLEIF will notify the organization regarding the result of the Qualification Program. The approval of the qualification application will result in the candidate vLEI Issuer signing the vLEI Issuer Qualification Agreement with GLEIF. The candidate vLEI Issuer will then officially become a QVI.

Beyond the Qualification Program

Once officially qualified, the QVI must ensure strict compliance with the vLEI EGF and the requirements that they completed in the Qualification Program Checklist. For example, their day-to-day operations must comply with Appendix 5: Qualified vLEI Issuer Service Level Agreement (SLA) as well as comply with their internal policies such as the Records Management Policy and Information Security Policy. They must also continuously monitor their services and IT infrastructure including the witnesses.

Annual vLEI Issuer Qualification

The QVI is also subject to the Annual vLEI Issuer Qualification by GLEIF to ensure that they continue to meet the requirements of the vLEI Ecosystem Governance Framework. If the QVI has made significant changes to their vLEI issuer services, IT infrastructure, or internal policies, the QVI must document the details of the changes and update the corresponding supporting documentation. GLEIF will then review the changes and request remediation actions, if any.

Conclusion

The processes of the QVI Qualification Program are designed to be rigorous to ensure the trustworthiness of the vLEI ecosystem, as QVIs play a vital role in maintaining trust and integrity among the downstream vLEI stakeholders. We at Finema are committed to promoting the vLEI ecosystem, and we would be delighted to assist should you be interested in embarking on your journey to participate in the ecosystem.

vLEI Demystified Part 3: QVI Qualification Program was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ockto

All your data in the ID wallet? We don't believe in that | Data Sharing Podcast

This article is based on an episode of the Data Sharing Podcast:


liminal (was OWI)

Weekly Industry News – Week of April 15

Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage.

Looking for product or company-specific news? Log in or sign-up to Link for more detailed news and developments.

Week of April 15, 2024

Here are the main industry highlights of this week.

➡ Innovation and New Technology Developments

Clemson Launches Pilot with Intellicheck to Curb Underage Drinking

Intellicheck and the city of Clemson, South Carolina, have initiated a 12-month pilot program to combat underage drinking by enhancing fake ID detection in local bars, convenience stores, and liquor stores. This program utilizes Intellicheck’s identity verification technology, which authenticates IDs via mobile devices or scanners. Given Clemson’s large student population from Clemson University, this technology is crucial for addressing fake IDs that are challenging to detect through conventional methods.

Read the full article on www.biometricupdate.com

Snap to Watermark AI-Generated Images for Transparency and Safety

Snap Inc. will now watermark AI-generated images on its platform to enhance transparency and safety. The logo and a sparkle emoji will mark such pictures as AI-created. The watermark applies to images exported or saved from the app. Snap also continues implementing safety measures and managing challenges with its AI chatbot. These efforts are part of Snap’s broader strategy to ensure safe and equitable use of AI features. 

Read the full article on techcrunch.com

NSW Launches Australia’s First Trial of Digital Birth Certificates to Enhance Identity Security

NSW is offering digital birth certificates to complement ongoing digital identity developments. Parents of over 18,000 children can now access a digital alternative with the same legal standing as traditional paper documents. The digital version offers enhanced security features, including holograms and timestamping. The initiative includes specific accessibility features for individuals with visual impairments.

Read the full article on themandarin.com.au

OpenAI Launches First Asian Office in Tokyo, Aligning with Microsoft’s $2.9 Billion Investment in Japan

OpenAI has opened its first office in Asia, located in Tokyo, Japan. The move is part of OpenAI’s strategy to form partnerships with Japanese entities and leverage AI technology to address challenges like labor shortages. Microsoft also plans to invest $2.9 billion in cloud and AI infrastructure in Japan.

Read the full article on reuters.com

Google to Discontinue VPN by Google One Service Later This Year in Strategic Shift

Google is shutting down its VPN by Google One service, which was introduced in October 2020. The service will be discontinued later this year as part of a strategic shift to focus on more in-demand features within the Google One offerings. A more formal public shutdown announcement is expected this week.

Read the full article on theverge.com

Sierra Leone Extends Digital ID Registration, Aiming for Enhanced Access to Services

Sierra Leone is implementing a digital ID system managed by NCRA. The MOSIP platform is being used to improve government and private sector service access. The registration deadline has been extended to June 28, 2024, to promote greater inclusion and improve service delivery and security for its citizens.

Read the full article on biometricupdate.com

➡ Investments and Partnerships

EnStream and Socure Partner to Tackle Synthetic Identity Fraud in Canada

EnStream LP and Socure have announced a partnership to bolster efforts to combat synthetic identity fraud in Canada. EnStream, known for its real-time mobile intelligence services, will integrate its data sets with Socure’s fraud solution, enhancing its capabilities in verifying identities and preventing fraud. This collaboration will add mobile attributes powered by EnStream’s machine learning models to Socure’s system to improve consumer profiles’ accuracy throughout customer interactions.

Read the full article on finance.yahoo.com

Microsoft Invests $1.5 Billion in UAE’s G42 to Enhance AI Services and Address Security Concerns

Microsoft has invested $1.5 billion in G42, an AI company based in the UAE, to expand AI technology in the Middle East, Central Asia, and Africa. The partnership aims to provide advanced AI services to global public sector clients, using Microsoft’s Azure cloud platform for hosting G42’s AI services. Both companies have established an Intergovernmental Assurance Agreement to ensure high standards of security, privacy, and responsible AI deployment. The collaboration also marks a strategic alignment with the UAE, enhancing Microsoft’s influence in the region.

Read the full article on qz.com

Stripe Raises $694.2 Million in Tender Offer to Provide Liquidity and Delay IPO Plans

Stripe, a financial infrastructure platform, recently raised $694.2 million through a tender offer. The funds were partly used to repurchase shares to mitigate the dilution effects of its employee equity compensation programs. Stripe plans to use the proceeds to provide liquidity to its employees and strengthen its financial position. The company also continues to expand its services, such as its recent integration with Giddy, to enhance crypto accessibility.

Read the full article on thepaypers.com

Finmid Emerges with €35 Million to Transform SMB Financial Services, Partners with Wolt

Berlin-based fintech startup finmid has raised €35 million in early-stage equity funding, led by UK-based VC Blossom Capital with support from Earlybird and N26 founder Max Tayenthal. Finmid aims to provide tailored financial services to SMBs, especially in retail and restaurants, and plans to expand into core markets, localize operations, and enhance financing options for better platform integration and user experience. It has also partnered with Finnish food delivery platform Wolt to create ‘Wolt Capital’, a cash advance feature for merchants on the Wolt platform.

Read the full article on fintechfutures.com

Salesforce Advances in Bid to Acquire Data Giant Informatica for $11.4 Billion

Salesforce is in talks to acquire Informatica, a data management services provider. Informatica has been valued at $11.4 billion, and its shares have risen almost 43% this year. The proposed acquisition price is lower than Informatica’s closing share price of $38.48 last Friday. If the acquisition goes through, it will be another large-scale acquisition by Salesforce, following the purchase of Slack Technologies for $27.7 billion in 2020 and Tableau Software for $15.7 billion in 2019.

Read the full article on reuters.com

U.S. Government Awards Samsung $6.4 Billion to Boost Semiconductor Manufacturing in Texas

Samsung Electronics has received up to $6.4 billion from the U.S. government to expand its semiconductor manufacturing in Texas. This funding will help Samsung invest approximately $45.0 billion in a second chip-making facility, advanced chip packaging unit, and R&D capabilities. The project aims to start producing advanced 4-nanometer and 2-nanometer chips between 2026 and 2027, creating jobs and strengthening U.S. competitiveness in semiconductor manufacturing while reducing reliance on Asian supply chains.

Read the full article on wsj.com

Cybersecurity Giant Cyderes Acquires Ipseity Security to Boost Cloud Identity Capabilities

Cybersecurity provider Cyderes has acquired Canadian firm Ipseity Security. The acquisition will enhance Cyderes’ cloud identity and access governance capabilities, bolstering its presence in the rapidly growing IAM market.

Read the full article on channele2e.com

➡ Legal and Regulatory

Illinois Woman Sues Target for Biometric Privacy Violations Under BIPA

An Illinois woman has filed a class action lawsuit against Target for unlawfully collecting and storing her biometric data without consent, violating Illinois’ Biometric Information Privacy Act. The lawsuit claims Target failed to provide necessary disclosures and obtain written consent before collecting biometric data, such as facial recognition information, posing a significant risk of identity theft if compromised. BIPA requires explicit consent and detailed information on data use, retention, and destruction to be provided to consumers, which the lawsuit alleges Target did not comply with.

Read the full article on fox32chicago.com

Bulgarian Fraud Ring Steals £53.9 Million from UK Universal Credit System

A group of five Bulgarian nationals stole £53.9 million from the UK’s Universal Credit system by making thousands of fraudulent claims over four and a half years. They used makeshift “benefit factories” to process and receive payments illegally, which were then laundered through various bank accounts. The case highlights the need for enhanced document verification and biometric identity checks with liveness detection to prevent similar fraudulent activities in the future.

Read the full article on biometricupdate.com

HHS Replaces Login.gov with ID.me Following $7.5 Million Theft from Grantee Payment Platform

The US Department of Health and Human Services (HHS) has replaced Login.gov with ID.me for identity verification on its grantee payment platform after fraudsters stole $7.5 million from the system. The move is intended to strengthen identity proofing for grant recipients and prevent further payment fraud.

Read the full article on biometricupdate.com

Russian-Linked Hackers Suspected in Cyberattack on Texas Water Facility

A Russian government-linked hacking group is suspected of executing a cyberattack on a Texas water facility, leading to an overflow of a tank in Muleshoe. Similar suspicious cyber activities in other North Texas towns are also under investigation. Urgent appeals have been issued for water facilities nationwide to bolster their cyber defenses in response to increasing attacks on such critical infrastructure. The attackers exploited easily accessible services amidst ongoing investigations linking these activities to Russia’s GRU military intelligence unit. 

Read the full article on edition.cnn.com

Jamaican Parliament Reviews Draft Legislation for National Digital ID System

The Jamaican parliament is set to review draft legislation for the National Identification System (NIDS) to establish a digital ID framework in Jamaica. This move is part of the government’s commitment to addressing public concerns and enhancing digital transformation. The draft legislation emphasizes strong security measures to build trust among Jamaicans, who are skeptical about digital IDs. The legislation will enable individuals to receive notifications when an authorized entity verifies their identity.

Read the full article on biometricupdate.com

Temu Faces Stricter EU Regulations as User Count Surpasses 45 Million

Temu, a competitor of Alibaba Group, has surpassed 45 million monthly users in Europe, which triggers enhanced regulation under the EU’s Digital Services Act. The European Commission is considering designating Temu as a “very large online platform,” subjecting the company to stricter regulations. Meanwhile, Shein, a Chinese fast-fashion company, is also engaging with the EU regarding potential DSA designation.

Read the full article on finance.yahoo.com

Roku Announces Second Data Breach of the Year, Affecting Over Half a Million Accounts

Roku experienced a data breach impacting 576,000 accounts due to credential stuffing. Unauthorized purchases were made in fewer than 400 cases, and no complete payment information was exposed. Roku reset passwords for all affected accounts and mandated two-factor authentication for all users.

Read the full article on wsj.com

The post Weekly Industry News – Week of April 15 appeared first on Liminal.co.


Ontology

Ontology Weekly Report (April 9th — 15th, 2024)

Welcome to another vibrant week at Ontology, where we continue to break new ground and foster community connections. Here’s what’s been happening:

🎉 Highlights

- Insights from PBW: We’ve gathered incredible insights from our participation at PBW. These learnings are guiding our path forward in the blockchain space.
- Lovely Wallet Giveaway: Our new campaign with Lovely Wallet has kicked off! Join in for a chance to win exciting prizes.

Latest Developments

- Twitter Space Success: Last week’s Twitter space was a hit, drawing a great crowd. Make sure you tune in next time to join our live discussions!
- Token2049 Dubai Ticket Draw: Congratulations to the lucky winner of a ticket to Token2049 Dubai! Stay tuned for more opportunities.

Development Progress

- Ontology EVM Trace Trading Function: Now at 85%, we are closer than ever to enhancing our EVM capabilities significantly.
- ONT to ONTD Conversion Contract: We’ve hit the 50% milestone, streamlining the conversion process for improved user experience.
- ONT Leverage Staking Design: Progress has advanced to 35%, bringing us closer to launching this innovative staking option.

Product Development

- TEAMZ Web3/AI Summit 2024: We’re pumped to be part of the upcoming summit in Tokyo.
- UQUID on ONTO APP: You can now access UQUID directly through the ONTO app, simplifying your digital transactions.
- Top 10 dApps on ONTO: Our latest list highlights the most popular and impactful dApps in the Ontology ecosystem.

On-Chain Activity

- Stable dApp Count: We maintain a strong portfolio of 177 dApps on MainNet, demonstrating robust ecosystem health.
- Transaction Growth: This week saw an increase of 3,313 dApp-related transactions and a substantial rise of 27,993 in total transactions on MainNet, reflecting vibrant network activity.

Community Growth

- Engaging Community Discussions: Our platforms on Twitter and Telegram are buzzing with the latest developments. Join us to stay connected and contribute to the conversations.
- Special Telegram Discussion: Led by Ontology Loyal Members, this week’s discussion on “Ontology’s EVM Testnet Unlocks New Horizons in Blockchain Innovation” was particularly enlightening.

Stay Connected

We invite you to follow us on our official social media channels for continuous updates and community engagement. Your participation is crucial to our joint success in navigating the exciting world of blockchain and decentralized technologies.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Weekly Report (April 9th — 15th, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo

Aergo Set to Penetrate Enterprise Blockchain Market with its Hybrid Structure and EVM Integration

Welcome to the latest installment in our ongoing series on Aergo’s strategic advancements and initiatives. In this edition, we’ll explore how Aergo is leveraging its unique features to address the evolving needs of enterprise clients, focusing specifically on its compatibility with the EVM and its structural design.

In a nutshell:

Aergo’s integration with the Ethereum Virtual Machine (EVM), alongside its hybrid architecture and tailored focus on enterprise requirements, establishes a strong foothold. Particularly attractive to enterprises in the financial sector, Aergo provides inventive ways to simplify blockchain implementation and operation, particularly for RWAs and STOs.

Rising Demand and EVM

Since 2018, Aergo and Blocko have implemented and refined the idea of a decentralized physical infrastructure network for customers, based on the enterprise-focused blockchain platform Aergo Enterprise. Various applications have emerged, ranging from shipment asset management to energy-related asset trading systems. These developments formed the foundation of what can be termed today’s DePin (Decentralized Physical Infrastructure Networks) model: a decentralized network driven by blockchain technology and incentives. Early implementations had limitations due to restricted participation and minimized token incentives, as they were primarily tailored for consortia or affiliates.

However, Aergo stands out in its ability to engage in diverse projects with major domestic and international companies and public institutions. This is attributed to the need for a player in the market capable of delivering the level of service demanded by enterprise customers. As Ethereum leads the way in blockchain technological standards, demand arises for more adaptable enterprise blockchain solutions. Aergo’s compatibility with the Ethereum Virtual Machine (EVM) provides enterprises with a robust foundation for constructing decentralized solutions capable of revolutionizing multiple facets of their operations.

From asset management to financial transactions and data management, Aergo’s integration with the EVM opens doors to transformative possibilities. Furthermore, Aergo accommodates various sizes and industries with its hybrid architecture, gaining traction through EVM integration. Such hybrid structures are crucial in the enterprise market, where participant identity verification and safeguarding sensitive information are paramount. EVM integration holds significance for enterprise clients seeking to transition to web3, especially as private blockchains lose traction, and integration with public ecosystems gains prominence.

EVM and STOs

When integrating blockchain solutions into various financial projects, including Security Token Offerings (STOs) and Real World Assets (RWAs), project managers and decision-makers inevitably reference the performance and achievements of Ethereum-based DeFi models along with other models built on the EVM. Utilizing open-sourced EVMs from various implementations becomes crucial for effectively meeting clients' needs. Together with Blocko, we have successfully built and operated multiple projects, identifying the demands of enterprise customers and pursuing efficient implementations that prioritize the EVM features essential for enterprise applications. The essential features offered by Aergo Enterprise include tokenization and asset backing, regulatory compliance, and secondary market trading capabilities.

Despite the recent hype surrounding them, DePin, RWAs, and STOs operate within the same business framework. The key takeaway is the necessity for a platform capable of representing a diverse range of assets through fractional ownership, tailored to various sectors and scales. Aergo’s commitment to enterprise market growth, driven by its hybrid structure and technical adaptability via EVM, positions it favorably. Collaborations with Blocko, including entry into the RWA and STO markets, underscore the ongoing development efforts, while initiatives like Booost expand its business and consumer reach based on geolocation services.

With STO-related regulations in Korea nearing finalization, we anticipate forthcoming news soon. A detailed article on the fee structure will also be published if there are any updates.

For further information on Aergo’s emphasis on the enterprise blockchain market and its structure, please refer to:

Aergo 2.0: A Decentralized Infrastructure for Web3

For information on current fee structures and voting rewards, please refer to the articles listed below:
- Aergo’s Fee 2.0
- Aergo Network Voting Reward

Aergo Set to Penetrate Enterprise Blockchain Market with its Hybrid Structure and EVM Integration was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Ping’s Cloud: Four Tips for Migration Success | Ping Identity

Ping Identity is interested in the best deployment solution for you, our customers.

We partner with some of the largest enterprises in the world as they undergo their digital transformations, so we know what’s needed to be successful when it comes to where and how to deploy your Ping services. If you haven’t even considered consuming PingFederate or Access Management as a service in the cloud, that’s ok too. No matter your situation, Ping will help you choose a deployment strategy that solves current pain points while leaving the door open for future growth.

With Ping’s platform, you have your choice of deployment options, not just self-managed software. In fact, Ping can help no matter where you are on your digital transformation journey, regardless of your current IT environment.

We've compiled four tips for developing a successful migration strategy to help streamline and simplify your migration of Ping software to Ping’s cloud (PingOne).


BlueSky

How to embed a Bluesky post on your website or blog

Share Bluesky posts on other sites, articles, newsletters, and more.
How do I embed a Bluesky post?

For the post you'd like to embed, click the dropdown menu on desktop. Then, select Embed Post. Copy the code snippet.

Alternatively, you can visit embed.bsky.app and paste the post's URL there for the same code snippet.

The embedded post is clickable and can direct your readers to the original conversation on Bluesky. Here's an example of what an embedded post looks like:

logging on for my shift at the posting factory

— Emily 🦋 (@emilyliu.me) Jul 3, 2023 at 11:11 AM
Your Own Website

Directly paste the code snippet into your website's source code.
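For reference, the snippet Bluesky gives you is a small piece of markup plus a script loaded from embed.bsky.app. The TypeScript sketch below shows roughly what pasting it amounts to if you add it programmatically; the attribute names and the <did> and <postId> placeholders are assumptions based on the snippet's general shape, so always copy the real snippet from the Embed Post menu.

```typescript
// Minimal sketch: inject an embedded Bluesky post into a page.
// The markup shape and script URL below are assumptions; use the exact
// snippet copied from the "Embed Post" menu or embed.bsky.app.
const container = document.getElementById("bluesky-post-container");
if (container) {
  // Scripts inside innerHTML never execute, so add the markup and the
  // embed script as two separate steps.
  container.innerHTML =
    '<blockquote class="bluesky-embed" ' +
    'data-bluesky-uri="at://<did>/app.bsky.feed.post/<postId>">' +
    "<p>Fallback post text shown before the embed loads</p>" +
    "</blockquote>";

  const script = document.createElement("script");
  script.async = true;
  script.src = "https://embed.bsky.app/static/embed.js"; // assumed URL
  document.body.appendChild(script);
}
```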

WordPress

Insert an HTML block by typing /html or pressing the + button.

Paste the code snippet. When you switch the toggle to "Preview," you'll see the Bluesky post embed.

Ghost

Insert an HTML block by typing /html or pressing the + button. Paste the code snippet.

Below is what the block will look like. Then, click Preview on your blog draft to see the Bluesky post embed formatted.

Substack

Currently, Substack does not support custom CSS or HTML in the post editor. We recommend taking a screenshot of Bluesky posts and linking the post URL instead.

Other Sites

For your site of interest, please refer to their help center or documentation to learn how to embed Bluesky posts.

Thursday, 18. April 2024

Anonym

The Surprising Outcome of Our Comparison of Two Leading DI Governance Models

Our Chief Architect, Steve McCown, recently compared two of decentralized identity’s leading governance models—trust registries (from Trust Over IP or ToIP) and trust establishment (from the Decentralized Identity Foundation or DIF)—and published his findings in our latest white paper. 

Skip straight to the white paper, “Comparing Two of Decentralized Identity’s Leading Governance Models.” 

We know that decentralized identity (DI) as a new approach to identity management on the internet offers many benefits over traditional centralized systems, such as greater privacy, increased security, and better fault tolerance. It also offers a novel approach to system governance. 

What Steve describes in his comparison of the two leading governance models, trust registries and trust establishment, is that while the two approaches appear to compete with each other, their features and capabilities actually make them complementary, and users may find them mutually beneficial. 

The trust registry model from ToIP’s Governance Stack Working Group creates a governance framework that guides organizations in creating their own governance model more than specifying exactly what rules and descriptions a governance model must contain. In other words, it is a process for creating a governance model rather than a pre-existing governance model to be applied.  

“Quite often, teams creating DI systems don’t know where to start when defining governance for their systems and the ToIP model is an excellent roadmap,” Steve says. 

While ToIP’s governance framework processes appear best suited for enterprise-level ecosystem efforts, the trust establishment (TE) processes that DIF is creating are intended to be much simpler. According to the Trust Establishment 1.0 document, “This specification describes only the data model of trust documents and is not opinionated on document integrity, format, publication, or discovery.” 

Steve says that rather than presenting a series of processes by which a governance framework can produce a governance model, the DIF specification provides a single “lightweight trust document” that produces a governance data model. 

Since the TE does not require a particular data format, it can be embodied in many formats.  

“In one instance, it can be used through an internet-accessible API as is specified for the ToIP trust registry/governance model solution. However, it is most commonly described as a cryptographically signed and JSON-formatted document that can be downloaded from a website, immutable data source, or a provider’s own service,” Steve says. 
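For illustration only, a “lightweight trust document” of the kind Steve describes might look roughly like the typed object below. The field names are loosely modeled on DIF’s Trust Establishment data model and should be treated as hypothetical; the normative shape lives in the specification itself.

```typescript
// Hypothetical sketch of a trust establishment document as a typed object.
// Field names are illustrative; consult DIF's Trust Establishment spec for
// the normative data model.
interface TrustEstablishmentDoc {
  id: string; // unique identifier for this document
  author: string; // DID of the publishing party
  created: string; // ISO 8601 timestamp
  version: string;
  // topic URI -> (subject DID -> arbitrary trust assertions)
  entries: Record<string, Record<string, unknown>>;
}

const doc: TrustEstablishmentDoc = {
  id: "32f54163-7166-48f1-93d8-ff217bdb0653",
  author: "did:example:governance-authority",
  created: "2024-04-18T00:00:00Z",
  version: "1.0",
  entries: {
    "https://example.com/topics/accredited-issuer": {
      "did:example:issuer-1": { accredited: true },
    },
  },
};

// The signed JSON form would then be published on a website, immutable data
// source, or a provider's own service, as described above.
console.log(JSON.stringify(doc, null, 2));
```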

The TE is a newly emerging specification and will likely undergo many enhancements and updates. See the whitepaper for more detail on both models. 

Steve’s comparison of DI governance models is important because enterprises are facing mounting pressure from customers and regulators to rapidly deliver greater security and interoperability in software and services.  

More than 62 per cent of US companies plan to incorporate a decentralized identity (DI) solution into their operations, with 74 per cent likely to do so within a year, according to a recent survey.

Read: 7 Benefits to Enterprises from Proactively Adopting Decentralized Identity 

Want more on decentralized identity? 

- Can Decentralized Identity Give You Greater Control of Your Online Identity?
- Simple Definitions for Complex Terms in Decentralized Identity
- 17 Industries with Viable Use Cases for Decentralized Identity
- How You Can Use Sudo Platform Digital Identities and Decentralized Identity Capabilities to Rapidly Deliver Customer Privacy Solutions
- What our Chief Architect said about Decentralized Identity to Delay Happy Hour
- Our whitepapers

Learn more about Anonyome Labs decentralized identity offerings 

The post The Surprising Outcome of Our Comparison of Two Leading DI Governance Models appeared first on Anonyome Labs.


Tokeny Solutions

Tokeny Enhances Multi-Chain Capabilities with Integration of Telos EVM

Luxembourg, Dubai, April 18, 2024 – Tokeny, the leading tokenization platform, announces its latest strategic integration with Telos, bolstering its multi-chain capabilities and offering tokenized securities issuers enhanced flexibility in tokenization. This collaboration underscores Tokeny’s commitment to providing seamless and secure solutions for issuing, managing, and distributing tokenized securities across multiple blockchain networks.

The integration introduces Telos EVM (Ethereum Virtual Machine) to Tokeny’s platform, complementing its existing ecosystem of supported chains. Telos EVM, renowned for its remarkable transaction throughput of 15,200 transactions per second (TPS), empowers institutions with unparalleled speed and efficiency in tokenization processes.

By integrating Telos EVM, Tokeny expands its reach and enables issuers to tokenize assets with ease while benefiting from Telos’ advanced blockchain technology. This synergy enhances efficiency, reduces costs, and offers institutions greater flexibility in choosing the most suitable blockchain network for their tokenization needs.

Our solutions are designed to be compatible with any EVM chain, allowing our clients to seamlessly navigate the ever-expanding blockchain landscape. We identified Telos as a promising ecosystem poised for growth. As a technology provider, our mission is to ensure that our clients have the flexibility to choose any chain they desire and switch with ease.

Luc Falempin, CEO of Tokeny

The Tokeny team's unwavering commitment to excellence and leadership in the field of tokenization is truly commendable. With Tokeny's best-in-class technology and expertise, coupled with Telos' high-performance infrastructure, we anticipate a significant acceleration in tokenization projects coming onto the Telos network. Together, we are poised to set new standards of speed and efficiency in the tokenization space, driving innovation and fostering growth for our ecosystem and beyond.

Lee Erswell, CEO of Telos Foundation

About Telos

Telos is a growing network of networks (Layer 0) enabling Zero Knowledge technology for massive scalability and privacy to support all industries and applications. The expanding Telos ecosystem includes over 1.2 million accounts, hundreds of partners, and numerous dApps. Launched in 2018, Telos is known for its impeccable five-year record of zero downtime and is home to the world’s fastest Ethereum Virtual Machine, the Telos EVM. Telos is positioned to lead enterprise adoption into the world of borderless Web3 technology and decentralized solutions.

About Tokeny

Tokeny provides a compliance infrastructure for digital assets. It allows financial actors operating in private markets to compliantly and seamlessly issue, transfer, and manage securities using distributed ledger technology. By applying trust, compliance, and control on a hyper-efficient infrastructure, Tokeny enables market participants to unlock significant advancements in the management and liquidity of financial instruments.

The post Tokeny Enhances Multi-Chain Capabilities with Integration of Telos EVM appeared first on Tokeny.


Shyft Network

Guide to FATF Travel Rule Compliance in Canada

The minimum threshold for the FATF Travel Rule in Canada is $1,000. Crypto businesses must also mandatorily submit a Large Virtual Currency Transaction Report ($10,000 and above) to FINTRAC. The country has enacted several laws for crypto transaction transparency and asset protection.

The FATF Travel Rule, informally called the Crypto Travel Rule, came into force in Canada on June 1st, 2021. It laid out the requirements for a virtual currency transfer to remain under the legal ambit of the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA).

Key Features of the Canadian Travel Rule

In Canada, the FATF Travel Rule applies to electronic funds and virtual currency transfers. The term Virtual Currency has a wider meaning in the Canadian context. It can be a digital representation of value or a private key of a cryptographic system that enables a person or entity to access a digital representation of value.

The Crypto Travel Rule applies to financial entities, money service businesses, foreign money service businesses, and casinos. These institutions must comply with the information disclosure requirements set out in the Travel Rule.

Compliance Requirements

In Canada, the Travel Rule applies to any virtual currency transactions exceeding $1,000. For these transactions to be compliant, the parties involved must share their personally identifiable information (PII) with the originator and beneficiary exchanges.

PII to be shared for Travel Rule compliance includes the requester’s name and address, the nature of their principal business or occupation, and, if the requester is an individual, their date of birth. This is consistent whether it is shared with a Virtual Asset Service Provider (VASP) inside or outside of Canada.

On a related note, entities receiving large virtual currency transactions must report them to FINTRAC. The authorities consider a VC transaction to be large if it is equivalent to $10,000 or more in a single transaction.

A similar report is also mandatory when the provider receives two or more amounts of virtual currency, totaling $10,000 or more, within a consecutive 24-hour window, and the transactions are conducted by the same person or entity on behalf of the same person or entity or for the same beneficiary.
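To make the aggregation rule concrete, here is a minimal TypeScript sketch of the 24-hour window check. It assumes the caller has already grouped transactions by the same conducting person (or the same beneficiary), and the field names are illustrative, not FINTRAC's reporting schema.

```typescript
// Sketch of the Large Virtual Currency Transaction aggregation rule:
// $10,000 or more received within a rolling 24-hour window triggers a report.
interface VcTransaction {
  amountCad: number; // CAD-equivalent value of the virtual currency
  timestamp: number; // Unix epoch milliseconds
}

const LVCTR_THRESHOLD_CAD = 10_000;
const WINDOW_MS = 24 * 60 * 60 * 1000;

// Returns true when any rolling 24-hour window of the given (pre-grouped)
// transactions totals $10,000 or more. A single large transaction also
// triggers, since a window may contain just one transaction.
function requiresLargeVcReport(txs: VcTransaction[]): boolean {
  const sorted = [...txs].sort((a, b) => a.timestamp - b.timestamp);
  let start = 0;
  let windowSum = 0;
  for (let end = 0; end < sorted.length; end++) {
    windowSum += sorted[end].amountCad;
    while (sorted[end].timestamp - sorted[start].timestamp > WINDOW_MS) {
      windowSum -= sorted[start].amountCad;
      start++;
    }
    if (windowSum >= LVCTR_THRESHOLD_CAD) return true;
  }
  return false;
}

// Example: three transfers from the same person within 24 hours, totaling $10,500.
console.log(
  requiresLargeVcReport([
    { amountCad: 4_000, timestamp: Date.parse("2024-04-18T09:00:00Z") },
    { amountCad: 3_500, timestamp: Date.parse("2024-04-18T15:00:00Z") },
    { amountCad: 3_000, timestamp: Date.parse("2024-04-19T08:00:00Z") },
  ]),
); // true
```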

These reports can be submitted to FINTRAC electronically through the FINTRAC Web Reporting System or FINTRAC API Report Submission. The reporting required for a large virtual currency transaction form includes general information, transaction details, and actions from start to completion.

General information might cover the reporting entity and the review period for aggregate transactions over 24 hours. The remaining sections must include information about how each transaction is being reported, how the transaction started, and how it was completed.

Impact on Cryptocurrency Exchanges and Wallets

Crypto service providers must have a well-laid-out compliance program, with policies and procedures spelled out in detail. Ideally, they should have a person assess transactions even when an automated system detects that they have reached a threshold amount.

Merely having a large transaction reporting system in place is not enough. A system capable of reporting suspicious transactions to FINTRAC is also necessary.

Another set-up that is needed is robust record-keeping. If the provider has submitted a large virtual currency transaction report to FINTRAC, it must keep a copy for at least five years from the date the report was created.

Providers are also obligated to verify the identity of persons and entities accurately and in a timely manner, following FINTRAC’s sector-specific guidance. Identification is also crucial for determining whether a person or entity is acting on behalf of another person or entity. Providers must also be fully aware of requirements issued under ministerial directives.

FINTRAC emphasizes shared responsibility in compliance reporting. It allows providers to voluntarily self-declare non-compliance upon identifying such instances.

Concluding Thoughts

The FATF Travel Rule in Canada imposes stringent compliance demands on cryptocurrency exchanges and wallets, emphasizing transparency and security for transactions over $1,000. This regulation aims to mitigate financial crimes, requiring detailed record-keeping and reporting to uphold a secure digital financial marketplace.

FAQs on Crypto Travel Rule Canada

Q1: What is the minimum threshold for the Crypto Travel Rule in Canada?

The Travel Rule applies to virtual currency transfers of $1,000 or more. Separately, Canada has set a $10,000 threshold for providers to submit a Large Virtual Currency Transaction Report to FINTRAC.

Q2: Who needs to register with FINTRAC in Canada?

Financial Entities, Money Service Businesses, and Foreign Money Service Businesses must register under FINTRAC and report the travel rule information when they send VC transfers.

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Guide to FATF Travel Rule Compliance in Canada was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


liminal (was OWI)

Generative AI and Cybersecurity: Navigating Technology’s Double-Edged Sword

Generative AI (GenAI) represents a significant technological frontier, impacting cybersecurity landscapes in two primary ways. Its advanced capabilities enable malicious actors to craft sophisticated and convincing fraudulent schemes, from phishing attacks to synthetic audio and video content designed to deceive and exploit. Conversely, GenAI also powers robust fraud detection systems, generating synthetic data that mimics complex fraud scenarios and enhances organizational capabilities to identify and neutralize threats proactively. As businesses navigate the duality of GenAI, advancing cybersecurity measures and regulatory frameworks to harness its potential for good while mitigating risks is crucial.

The widespread availability and affordability of GenAI tools, combined with global macroeconomic instability, are key drivers behind the significant increase in the volume and velocity of fraud attacks targeting consumers and businesses. The rapid adoption of tools like ChatGPT and FraudGPT, accessible relatively cheaply, has facilitated their use in malicious activities such as creating malware and phishing attacks. This surge in fraudulent activities is further exacerbated by economic downturns, which increase financial pressures and lead to cuts in fraud detection resources within organizations. Recent statistics show a drastic increase in phishing incidents and overall fraud complaints, highlighting the growing challenge of GenAI-enabled fraud in an economically unstable environment.

Insights into the Impact of Generative AI and Related Fraud Attacks

- ChatGPT adoption surpassed 1 million users within 5 days of its launch, now boasting approximately 100 million weekly active users.
- One-third of cybersecurity experts identified the increase in attack volume and velocity as a top GenAI-related threat.
- Malicious emails and credential phishing volumes have increased more than 12-fold and nearly tenfold in the past 18 months.
- Fraud complaints rose from 847,000 in 2021 to 1.1 million in 2022, escalating total losses from $4.2 billion to $10.5 billion.
- FraudGPT, a GenAI tool capable of generating malicious code and designing phishing pages, is available for as low as $90 monthly.
- 93% of financial institutions plan to invest in AI for fraud prevention in the next 2-5 years.

GenAI Market Trends and Challenges

GenAI’s dual nature significantly influences cybersecurity, with fraudsters using it to enhance their fraudulent schemes, including phishing and social engineering, by creating more convincing and sophisticated attacks. Tools like FraudGPT can write malicious code, generate malware, and create phishing pages, dramatically increasing phishing activities.

Solution providers are also adopting GenAI to strengthen defenses against such threats. Its ability to generate synthetic datasets allows for better training and improvement of risk and decision models, as evidenced by the growing demand among financial institutions planning substantial investments in AI for fraud prevention.

Despite its growing adoption, GenAI faces challenges like the unpredictable nature of new disruptive technologies, accessibility issues, a lack of regulatory oversight, and insufficient education and awareness about its capabilities and risks. These factors complicate the management of GenAI’s impact on fraud detection and prevention.

Strategies for Strengthening Defenses Against GenAI-driven Fraud

To effectively combat GenAI-driven fraud, businesses can adopt advanced AI and ML technologies for anomaly detection, implement device-bound authentication for added security, utilize multi-factor authentication to verify user identities, and apply behavioral analytics to monitor unusual activity. Continuous monitoring and regular updates of security measures are also essential to keep pace with evolving fraud tactics, ensuring robust protection even as regulations develop. Customers and members can access Liminal’s research in Link for detailed information on the headwinds and tailwinds shaping effective responses to these threats. New customers can sign up for a free account to view this report and access much more. 

What is GenAI?

Generative AI (GenAI) is a type of artificial intelligence that can autonomously create new content such as audio, images, text, and videos. Unlike traditional AI, which follows strict rules or patterns for specific tasks, GenAI uses neural networks and deep learning to produce original content based on patterns in data it has learned. This capability is increasingly integrated into everyday applications. Still, it poses risks, such as enhancing the effectiveness of phishing and social engineering attacks by scammers exploiting GenAI to create convincingly fake communications.

GenAI also offers tools for enhancing fraud prevention. It enables the creation of synthetic data samples that mimic real-life fraud scenarios, helping to train and improve fraud detection systems, thus preparing solution providers to better recognize and react to emerging fraudulent techniques.

Related Content:
- Market & Buyer’s Guide to Customer Authentication
- Link Index for Transaction Fraud Monitoring in E-Commerce (paid content)
- Bot Detection: Fighting Sophisticated Bots

The post Generative AI and Cybersecurity: Navigating Technology’s Double-Edged Sword appeared first on Liminal.co.


KILT

Unchaining Identity: Decentralized Identity Provider (DIP) Enables Cross-Chain Solutions

The KILT team has completed all milestones of the Polkadot Treasury Grant for developing the Decentralized Identity Provider (DIP), and DIP is now ready for use. Using DIP, any chain can become an identity provider, and any parachain (and, in the future, external chains) can integrate KILT and / or other identity providers for their identity needs.

The Decentralized Identity Provider (DIP) enables a ground-breaking cross-chain decentralized identity system inspired by the functionality of OpenID. This means that parachains requiring an identity solution don’t need to build their own infrastructure. Instead, they can leverage the infrastructure DIP provides. DIP is open-source, and you can integrate it with existing Polkadot-compatible runtimes with minimal changes and without affecting the fee model of the relying party.

DIP Actors

DIP has three key roles: the identity provider, the relying party or consumer, and the user.

The identity provider is any blockchain with an identity system that makes it available for other chains, e.g., KILT Protocol, Litentry, etc.

The relying party or “consumer” is any blockchain that has chosen to delegate identity management to the provider, thus relieving itself of the need to maintain its own identity infrastructure.

The user is an entity that has an identity on the provider chain and wants to use it on other chains without setting up a new identity on each.

The process begins with a user setting up their identity on an identity provider chain, for instance KILT, by making a transaction. Once the user completes that transaction, they can share their identity with any relying party chain that uses that provider, eliminating the need for further interaction with the identity provider unless changes are made to the user’s identity information.

Relying parties (e.g., parachains) can choose one or more identity providers. As in the case of accepting multiple social logins such as Google and Facebook, this allows them to access the information graph that each identity provider has previously built.

The Tech

DIP provides a suite of components available for integration:

- A set of pallets for deployment on any chain that wants to act as an identity provider. These allow accounts on the consumer chain to commit identity information, storing such representation in the provider chain’s state.
- A set of pallets to deploy on any chain that wants to act as an identity-relying or consumer party. These take care of validating cross-chain identity proofs provided by the subject and dispatch the actual call once the proof is verified.
- A set of support crates, suitable for use within a chain runtime, for types and traits the provider and relying parties can use.

These components enable the use of state proofs for information sharing between chains.

Identity on KILT is built around W3C-standard decentralized identifiers (DIDs) and Verifiable Credentials. Using KILT as an example, the following is a streamlined version of the process for using DIP:

Step 1. A user sets up their identity on KILT by generating their unique DID and anchoring it on the KILT blockchain.

Step 2. Credentials issued to that user contain their DID. The user keeps their credentials on their device, and the KILT blockchain stores a hash of each credential.

Step 3. To use the services of the relying party (in this example, any chain using KILT as their identity provider), the user prepares their identity via a transaction that results in their identity information committed to the chain state of KILT. After this point, the user doesn’t need to interact with KILT for each operation.

Step 4. The relying or “consumer” party can verify the identity proofs provided by the user. Once verified, the relying party can dispatch a call and grant the user access to their services.
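The actual DIP components are Substrate pallets, but the four steps above can be sketched conceptually in a few lines. All of the type and function names below are hypothetical stand-ins meant to mirror the flow, not KILT's real interfaces:

```typescript
// Conceptual DIP flow (hypothetical types; the real implementation lives in
// Substrate pallets on the provider and consumer chains).
interface IdentityCommitment {
  did: string; // the user's DID anchored on the provider chain
  stateRoot: string; // commitment to the identity info in provider state
}

interface IdentityProof {
  did: string;
  proof: string; // cross-chain state proof over the committed identity info
}

// Steps 1-3: the user anchors a DID and commits identity info on the provider.
const providerState = new Map<string, IdentityCommitment>();
function commitIdentity(did: string, stateRoot: string): void {
  providerState.set(did, { did, stateRoot });
}

// Step 4: the relying party verifies the proof and, only then, dispatches
// the requested call.
function verifyAndDispatch(p: IdentityProof, call: () => void): boolean {
  const c = providerState.get(p.did);
  const valid = c !== undefined && p.proof === c.stateRoot; // stand-in check
  if (valid) call();
  return valid;
}

commitIdentity("did:kilt:4example", "0xstateroot");
verifyAndDispatch({ did: "did:kilt:4example", proof: "0xstateroot" }, () =>
  console.log("access granted on the consumer chain"),
);
```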

Advantages of DIP

DIP offers several significant advantages, including:

Portability of Identities
Traditionally, users would need to create a separate identity for each application. However, with DIP, identities become portable. This means someone can use a single identity across multiple applications or platforms. This simplifies the user experience and maintains consistency of user identity across different platforms.

Focus on core competencies
Blockchain networks can focus on their core functionalities and strengths instead of investing resources into developing and maintaining an identity system. Instead, they can delegate identity management to other chains that specialize in it, effectively increasing efficiency.

Simplified Management of Identity for Users
Users can manage and update their identity in a single place, i.e., via their identity provider, even though the system is decentralized. This simplifies identity management for users, as they do not have to update their information on each platform separately.

Decoupling of Identities and Accounts
With many systems, a user’s identity is closely tied to their account, potentially enabling the tracking or profiling of users based on their account activity. Because DIP is linked to the user’s DID (the core of their identity) rather than their account address, DIP allows for identities to be separate from accounts, increasing privacy and flexibility. The user can then choose which accounts to link their identity to (if any) across several parachains and ecosystems, retaining control over their information disclosure.

KILT as an Identity Provider

KILT Protocol is consistently aligned with the latest standards in the decentralized identity space.

On top of these, additional KILT features such as web3names (unique, user-friendly names to represent a DID) and linked accounts make it easier for users to establish a cross-chain identity.

Users may also build their identity by adding verifiable credentials from trusted parties.

By integrating KILT as an identity provider, the relying party gains access to all the identity information shared by the user while giving the user control over their data. This ensures a robust and comprehensive identity management solution.

Start Integrating

Relying party:

1. Decide on the format of your identity proofs and how verification works with your identity provider.
2. Add the DIP consumer pallet as a dependency in your chain runtime.
3. Configure the required Config trait according to your needs and the information agreed on with the provider.
4. Deploy it on your chain, along with any additional pallets the identity provider requires.

(read KILT pallet documentation)

Identity provider:

1. Check out the pallet and traits.
2. Agree on the format of your identity proofs and how verification works with your relying party.
3. Customize the DIP provider pallet with your identity primitives and deploy it on your chain.
4. For ease of integration, you may also customize the DIP consumer pallet for your consumers.

What’s next?

Now that DIP is up and running, in the next stages, the team will continue to refine privacy-preserving ways to make KILT credentials available to blockchain runtimes. These will include improvements in proof size and proof verification efficiency and support for on-chain credential verification (or representation thereof). With DIP in the hands of the community, DIP’s users and community will guide future development.

About KILT Protocol

KILT is an identity blockchain for generating decentralized identifiers (DIDs) and verifiable credentials, enabling secure, practical identity solutions for enterprises and consumers. KILT brings the traditional process of trust in real-world credentials (passport, driver’s license) to the digital world while keeping data private and in possession of its owner.

Unchaining Identity: Decentralized Identity Provider (DIP) Enables Cross-Chain Solutions was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF85 Completes and DF86 Launches

Predictoor DF85 rewards available. Passive DF & Volume DF will be retired; airdrop pending. DF86 runs Apr 18 — Apr 25, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions via Predictoor.

Ocean Protocol is joining with Fetch and SingularityNET to form the Superintelligence Alliance, with a unified token $ASI. This Mar 27, 2024 article describes the key mechanisms. This merge was pending a “yes” vote from the Fetch and SingularityNET communities. As of Apr 16, 2024: it was a “yes” from both; therefore the merge is happening.
The merge has important implications for veOCEAN and Data Farming. veOCEAN will be retired. Passive DF & Volume DF rewards have stopped, and will be retired. Each address holding veOCEAN will be airdropped OCEAN in the amount of: (1.25^years_til_unlock-1) * num_OCEAN_locked. This airdrop will happen within weeks after the “yes” vote. The value num_OCEAN_locked is a snapshot of OCEAN locked & veOCEAN balances as of 00:00 am UTC Wed Mar 27 (Ethereum block 19522003). The article “Superintelligence Alliance Updates to Data Farming and veOCEAN” elaborates.
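As a worked example of the airdrop formula above (the numbers are illustrative, not real balances):

```typescript
// Airdrop formula from the article: (1.25^years_til_unlock - 1) * num_OCEAN_locked
function airdropAmount(yearsTilUnlock: number, numOceanLocked: number): number {
  return (Math.pow(1.25, yearsTilUnlock) - 1) * numOceanLocked;
}

// An address with 10,000 OCEAN locked for 2 more years would receive
// (1.25^2 - 1) * 10,000 = 5,625 OCEAN.
console.log(airdropAmount(2, 10_000)); // 5625
```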

Data Farming Round 85 (DF85) has completed. Passive DF & Volume DF rewards have stopped and will be retired. Predictoor DF claims run continuously.

DF86 is live today, April 18. It concludes on Apr 25. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF86 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF85

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF85 Completes and DF86 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Oracle Access Governance

by Nitish Deshpande

Oracle Access Governance is a cloud-native IGA solution which runs in Oracle Cloud Infrastructure (OCI). Oracle Access Governance can also run alongside Oracle Identity Governance in a hybrid deployment model to provide identity analytics from the cloud for Oracle Identity Governance customers. It serves as a one-stop shop for identity orchestration, user provisioning, access review, access control, compliance, and multi-cloud governance, and it offers a mobile-friendly, cloud-native governance experience. It can detect and remediate high-risk privileges by enforcing internal access audit policies to identify orphaned accounts, unauthorized access, and excessive privileges, which helps improve compliance with regulatory requirements. With the capacity to manage millions of identities, Oracle suggests it is suitable for enterprise-level organizational needs.

User Provisioning

There are two options for setting up this governance solution. Systems that are easily accessible can be integrated through connectors. For disconnected applications that cannot connect directly with Oracle Access Governance, or that sit behind firewalls, Oracle provides a one-time connector which the administrator can download. The connector establishes the integration with the target system to securely send and receive encrypted data to and from Oracle Access Governance, and it continuously polls for access remediation decisions. The user interface provides detailed status updates for each connected system, including data load status and duration. In the latest update, Oracle Access Governance introduces new capabilities that focus on provisioning, identity orchestration, identity reconciliation, and prescriptive analytics.

Figure 1: Identity provisioning and reconciliation

Oracle Access Governance’s identity orchestration makes use of identity provisioning and reconciliation capabilities along with schema handling, correlation, and transformations. The update provides comprehensive account provisioning features, allowing users to create accounts by leveraging outbound transformation rules and assigning them appropriate permissions to access downstream applications and systems. The Access Governance platform can also perform reconciliation by synchronizing user accounts and their permissions from integrated applications and systems. Oracle suggests this also supports ownership handling to reduce orphaned and rogue accounts effectively: business owners can either manually address orphaned accounts or allocate them to specific identities, followed by regular review cycles for the assigned accounts. Additionally, event-based reviews can be set up to automatically assess rogue and orphaned accounts as soon as they are detected within an integrated application or system.

Oracle’s Access Governance platform also supports reconciliation from authoritative sources such as HRMS, AD, and LDAP for onboarding, updating, and deleting identities. The solution combines identity provisioning and reconciliation capabilities, supported by robust integration. Whether for on-premises or cloud-based workloads, Oracle Access Governance offers a reliable framework for managing identity and access effectively.

Access reviews

Oracle Access Governance offers intelligent access review campaigns using AI and ML driven prescriptive analytics for periodic and micro certifications. These analytics provide insights and recommendations to proactively manage access governance and ensure compliance effectively.

Oracle Access Governance offers a robust suite of features for access reviews and the management of user permissions. The solution provides manual access review campaigns with a wizard-based interface for campaign creation. Oracle has also leveraged machine learning for managing reviews, providing deep analytics and recommendations; Oracle suggests this will simplify the approval and denial of access. The platform also offers the flexibility of scheduling periodic access review campaigns for compliance purposes, which Oracle says will streamline the process of auditing user permissions at regular intervals. Event-based micro-certifications are also supported, limiting certification to the affected identities. Oracle has incorporated pre-built and customizable codeless workflows based on a simple wizard.

Moreover, administrators can set up ad hoc or periodic access review campaigns. The platform provides a granular approach for selecting access review criteria. Users can configure workflows according to their requirements or leverage AI and machine learning algorithms that suggest workflows based on the certification history of related identities. The admin user interface is modern and includes features to review, download, and create reports on access review campaigns.

Conclusion

Oracle Access Governance continues to reinforce its identity and access management capabilities. With the ability to conduct micro-certifications instead of traditional certifications every six months, Oracle suggests their platform is well placed for streamlining governance procedures.

By leveraging cloud infrastructure, Oracle Access Governance is on track to support operations and facilitate integration with applications such as Cerner for auditing and compliance purposes. Oracle plans a monthly release cycle for the access governance platform, delivering the latest features and enhancements. Oracle wants to provide visibility into access permissions across the enterprise using dashboards that can be tailored to the requirements of business users. Furthermore, Oracle suggests the platform can be useful for CISOs by offering top-down or bottom-up consolidated views of access permissions across the enterprise.


Verida

How Web3 and DePIN Solves AI’s Data Privacy Problems


The emergence of Decentralized Physical Infrastructure Networks (DePIN) is a linchpin for providing privacy-preserving decentralized infrastructure to power the next generation of large language models.

How Web3 and DePIN Solves AI’s Data Privacy Problems

Written by Chris Were (Verida CEO & Co-Founder) and originally published on tahpot: Web3 on the edge, this post is Part 2 of a Privacy / AI series and continues from Part 1: Top Three Data Privacy Issues Facing AI Today.

Artificial intelligence (AI) has become an undeniable force in shaping our world, from personalized recommendations to medical diagnosis. However, alongside its potential lies a looming concern: data privacy. Traditional AI models typically rely on centralized data storage and centralized computation, raising concerns about ownership, control, and potential misuse.

See Part 1 of this series, Top Three Data Privacy Issues Facing AI Today, for a breakdown of the key privacy issues; this post explains how web3 can help alleviate those problems.

The emergence of Decentralized Physical Infrastructure Networks (DePIN) is a linchpin for providing privacy-preserving decentralized infrastructure to power the next generation of large language models (LLMs).

At a high level, DePINs can provide access to decentralized computation and storage resources that are beyond the control of any single organization. If this computation and storage can be built in a way that is privacy preserving (i.e., those operating the infrastructure have no access to the underlying data or the computation occurring), it forms an incredibly robust foundation for privacy-preserving AI.

Let’s dive deeper into how that would look, when addressing the top three data privacy issues.

Privacy of user prompts

Safeguarding privacy of user prompts has become an increasingly critical concern in the world of AI.

An end user can initiate a connection with an LLM hosted within a decentralized privacy-preserving compute engine called a Trusted Execution Environment (TEE), which provides a public encryption key. The end user encrypts their AI prompts using that public key and sends the encrypted prompts to the secure LLM.

Within this privacy-preserving environment, the encrypted prompts undergo decryption using a key only known by the TEE. This specialized infrastructure is designed to uphold the confidentiality and integrity of user data throughout the computation process.

Subsequently, the decrypted prompts are fed into the LLM for processing. The LLM generates responses based on the decrypted prompts without ever revealing the original, unencrypted input to any party beyond the authorized entities. This ensures that sensitive information remains confidential and inaccessible to any unauthorized parties, including the infrastructure owner.
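
As a rough sketch of this flow, the snippet below uses libsodium sealed boxes via the PyNaCl library; the keys and prompt are hypothetical, and a real TEE would generate and attest its own keypair:

# Illustrative only; assumes PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, SealedBox

# TEE side: generate a keypair; only the public key leaves the enclave.
tee_private_key = PrivateKey.generate()
tee_public_key = tee_private_key.public_key

# User side: encrypt the prompt with the TEE's public key before sending.
prompt = b"Summarize my health records from the last year."
encrypted_prompt = SealedBox(tee_public_key).encrypt(prompt)

# TEE side: decrypt and feed the plaintext to the LLM; the plaintext
# never exists outside the enclave.
assert SealedBox(tee_private_key).decrypt(encrypted_prompt) == prompt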

By employing such privacy-preserving measures, users can engage with AI systems confidently, knowing that their data remains protected and their privacy upheld throughout the interaction. This approach not only enhances trust between users and AI systems but also aligns with evolving regulatory frameworks aimed at safeguarding personal data.

Privacy of custom trained AI models

In a similar fashion, decentralized technology can be used to protect the privacy of custom-trained AI models that are leveraging proprietary data and sensitive information.

This starts with preparing and curating the training dataset in a manner that mitigates the risk of exposing sensitive information. Techniques such as data anonymization, differential privacy, and federated learning can be employed to anonymize or decentralize the data, thereby minimizing the potential for privacy breaches.
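
As a small, hedged illustration of one of these techniques, here is what the Laplace mechanism at the core of differential privacy looks like; numpy is assumed, and the data, sensitivity, and epsilon values are hypothetical:

# Differential-privacy sketch: release a noisy count via the Laplace mechanism.
import numpy as np

ages = np.array([34, 45, 29, 61, 50])  # hypothetical sensitive records
sensitivity = 1.0   # adding/removing one person changes the count by at most 1
epsilon = 0.5       # privacy budget: smaller = stronger privacy, more noise

noisy_count = len(ages) + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
print(f"count released for training: {noisy_count:.1f}")  # true count (5) is masked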

Next, an end user with a custom-trained LLM safeguards its privacy by encrypting the model before uploading it to a decentralized Trusted Execution Environment.

Once the encrypted custom-trained LLM is uploaded to the privacy-preserving compute engine, the infrastructure decrypts it using keys known only to TEE. This decryption process occurs within the secure confines of the compute engine, ensuring that the confidentiality of the model remains intact.
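
A hedged sketch of what this could look like in practice: since model weights are large, one common pattern is envelope encryption, where the weights are encrypted with a fresh symmetric key and only that small key is sealed to the TEE's public key. PyNaCl is assumed again, and all names here are illustrative:

# Envelope-encryption sketch for uploading a model to a TEE (illustrative).
import nacl.utils
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox

tee_key = PrivateKey.generate()  # generated and held inside the TEE

# User side: encrypt the (large) model with a one-time symmetric key...
model_bytes = b"...serialized custom-trained model weights..."
data_key = nacl.utils.random(SecretBox.KEY_SIZE)
encrypted_model = SecretBox(data_key).encrypt(model_bytes)

# ...then seal only the small symmetric key to the TEE's public key.
sealed_key = SealedBox(tee_key.public_key).encrypt(data_key)

# TEE side: unseal the data key, then decrypt the model inside the enclave.
recovered_key = SealedBox(tee_key).decrypt(sealed_key)
assert SecretBox(recovered_key).decrypt(encrypted_model) == model_bytes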

Throughout the training process, the privacy-preserving compute engine facilitates secure communication between the end user’s infrastructure and any external parties involved in the training process, ensuring that sensitive data remains encrypted and confidential at all times. In a decentralized world, this data sharing infrastructure and communication will likely exist on a highly secure and fast protocol such as the Verida Network.

By adopting a privacy-preserving approach to model training, organizations can mitigate the risk of data breaches and unauthorized access while fostering trust among users and stakeholders. This commitment to privacy not only aligns with regulatory requirements but also reflects a dedication to ethical AI practices in an increasingly data-centric landscape.

Private data to train AI

AI models are only as good as the data they have access to. The vast majority of data is generated on behalf of, or by, individuals. This data is immensely valuable for training AI models, but must be protected at all costs due to its sensitivity.

End users can safeguard their private information by encrypting it into private training datasets before submission to an LLM training program. This process ensures that the underlying data remains confidential throughout the training phase.

Operating within a privacy-preserving compute engine, the LLM training program decrypts the encrypted training data for model training purposes while upholding the integrity and confidentiality of the original data. This approach mirrors the principles applied in safeguarding user prompts, wherein the privacy-preserving computation facilitates secure decryption and utilization of the data without exposing its contents to unauthorized parties.

By leveraging encrypted training data, organizations and individuals can harness the power of AI model training while mitigating the risks associated with data exposure. This approach enables the development of AI models tailored to specific use cases, such as utilizing personal health data to train LLMs for healthcare research applications or crafting hyper-personalized LLMs for individual use cases, such as digital AI assistants.

Following the completion of training, the resulting LLM holds valuable insights and capabilities derived from the encrypted training data, yet the original data remains confidential and undisclosed. This ensures that sensitive information remains protected, even as the AI model becomes operational and begins to deliver value.

To further bolster privacy and control over the trained LLM, organizations and individuals can leverage platforms like the Verida Network. Here, the trained model can be securely stored, initially under the private control of the end user who created it. Utilizing Verida’s permission tools, users retain the ability to manage access to the LLM, granting permissions to other users as desired. Additionally, users may choose to monetize access to their trained models by charging others with crypto tokens for accessing and utilizing the model’s capabilities.

About Chris

Chris Were is the CEO of Verida, a decentralized, self-sovereign data network that empowers individuals to control their digital identity and personal data. Chris is an Australian-based technology entrepreneur who has spent over 20 years developing innovative software solutions, most recently with Verida. With his application of the latest technologies, Chris has disrupted the finance, media, and healthcare industries.

How Web3 and DePIN Solves AI’s Data Privacy Problems was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Embark on the Journey to VDA Token Launch Campaign on Galxe

Verida x Galxe: Introducing the Journey to VDA Token Launch Campaign

We’re thrilled to announce the launch of the Journey to VDA Token Launch Campaign on Galxe, as we head towards the Token Generation Event (TGE). Get ready for an adventure as we explore the Verida ecosystem and learn how the VDA token powers the private data economy.

Campaign Overview

Through this campaign, you’ll learn about Verida’s decentralized data network and the significance of the VDA token, explore IDO launchpads, and prepare for the upcoming Token Generation Event (TGE).

Buckle up for a 5-week adventure, with new quests dropping each week, bringing fresh challenges and surprises.

Join the Verida community, learn about Verida‘s technology partners, get hands-on experience with the Verida Wallet, explore the Verida ecosystem, and more to earn points and climb the leaderboard. Maximise points by liking/retweeting content and referring friends to join the network.

Week 1: Journey to VDA Token Launch

This week, we’re kicking things off with a bang as we dive deep into the world of Verida and ignite the flames of social media engagement. Get ready to learn about Verida’s decentralized data network and join the conversation on Twitter and Discord. But that’s not all — showcase your knowledge with quizzes as we delve deeper into the heart of Verida.

To make this journey even more exciting, we’ve prepared six task groups, each worth 50 points. Rack up a total of 300 points by completing tasks across all groups.

Follow Verida (50 points): Join our socials and help us with promoting on Twitter by liking/retweeting our posts. Spread the word and let the world know about our mission to help you own your data.

Verida Wallet (50 points): Learn more about the Verida Wallet by reading our informative article and then test your knowledge with our quiz.

Verida Missions (50 points): Explore the Verida Missions page and discover a world of opportunities. Don’t forget to check out our user guide to maximize your experience.

Verida Network (50 points): Dive deep into the Verida Network with our comprehensive article. Then, put your knowledge to the test with our quiz.

Verida Token (50 points): Learn everything there is to know about the Verida token with our enlightening article. Help us spread the word by liking and retweeting our announcement on Twitter.

Refer Friends (50 points): Share the excitement with your friends and earn points by referring them to join the journey. Refer 5 friends and earn an additional 50 points. The more, the merrier!

Hint for Week 2

Get ready for Week 2, to discover Verida’s IDO launchpad partners as we prepare for the upcoming Token Generation Event (TGE).

Ready to take the plunge?

Head over to the Journey to VDA Token Launch Campaign on Galxe now to embark on this epic journey!

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. With cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for a wide range of industries. With a thriving community and a commitment to transparency and security, Verida is leading the charge towards a more decentralized and user-centric digital future.

Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Embark on the Journey to VDA Token Launch Campaign on Galxe was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


paray

File Your Beneficial Ownership Information Report

Found in the nearly 1,500-page National Defense Authorization Act of 2021, is the 21-page Corporate Transparency Act (“CTA”), 31 U.S.C. § 5336.  The CTA currently requires most entities incorporated or doing business under State law to disclose personal stakeholder information to the Treasury Department’s criminal enforcement arm, Financial Crimes Enforcement Network (“FinCEN”), including Tax ID … Continue reading File Your Beneficial Ownership Information Report →

YeshID

Upgrade your Checklist to a YeshList: Identity & access management done right


For the past month, we have been working closely with ten customers who have been helping us build something that solves their Identity & Access Management (IAM) problems. We call them our Lighthouse customers. 

These are smart folks at companies who are the “Unexpected Google Admin,” a solo IT team, and/or HR. We have been working with them to figure out how they can move away from the manual checklists and spreadsheets that manage their onboarding, offboarding (or provisioning and de-provisioning), and access requests.

(If this sounds like you, you might qualify to climb into the next Lighthouse group.)

We’re working with them to replace their checklists and spreadsheets with something smarter – YeshLists.

A YeshList template is a pattern for a smart checklist. It’s kind of like a task template–the kind that you might create in Asana, Notion, or Google Sheets, but smarter. It does some of the automation and orchestration of getting the task list done for you. 

You make a Yesh template by listing the steps for an activity, say onboarding or offboarding. Some steps YeshID can automate within Google Workspace, like “Create a new Google Account” or “Lock Workspace.” Others can be automated outside Google Workspace, like “Send welcome email.” Others can be delegated, like “Have the Slack Admin set this person up for the Slack channels needed for a new hire in marketing.” And some are manual, like “Order a Yubikey” or “Send them a welcome swag box.”

Here’s an example of an Onboarding template. Notice that the YeshID template is smart enough to make the dates relative to the start date.

Here’s what a YeshList template looks like:
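
(The screenshots from the original post are not reproduced here. As a purely hypothetical sketch of the kind of structure such a template might hold, in Python; the field names are invented for illustration and do not reflect YeshID's actual format:)

# Hypothetical onboarding-template sketch; note the dates are relative
# to the start date, as described above.
onboarding_template = {
    "name": "Marketing new hire onboarding",
    "tasks": [
        {"step": "Order a Yubikey", "kind": "manual",
         "due": "start_date - 7 days"},
        {"step": "Create a new Google Account", "kind": "automated",
         "due": "start_date - 3 days"},
        {"step": "Have the Slack Admin add marketing channels",
         "kind": "delegated", "owner": "slack-admin", "due": "start_date"},
        {"step": "Send welcome email", "kind": "automated",
         "due": "start_date"},
    ],
}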

Once you’ve got a template customized to your organization, or even to a particular department, and someone is ready to start, you put in the person’s name, start date, and some other information, and YeshID will create a YeshList from the template.

And then it will RUN the template for you. If a task is automated (like some of the Google tasks we mentioned above), YeshID will make it happen when it’s supposed to happen. So think: “I don’t need to sit in front of the computer at exactly 5 pm and suspend Joe from Google Workspace.” You can trust that YeshID will do it for you.

If we cannot automate a task–like reclaiming a license or de-provisioning–we route the request to the person responsible for the task and ask them to respond when it is completed. And when they respond, we mark it as done.

But wait, there’s more! In addition to helping you ensure someone is offboarded or onboarded properly, we will automatically update our access grid so that you can use it for compliance purposes.

Finally, we have an end-user view that lets your employees see what applications they have access to and request access to apps they don’t have. This will help you track access for compliance purposes and make sure they are properly offboarded from the apps they have access to upon departure from the company.

We are looking for anyone who:

- Uses Google Workspace
- Works at a company of between 10 and 400 employees
- Holds responsibility for IT, security, HR, or compliance (or some combination thereof) in their job description
- Has SOC 2 or other compliance requirements (not a requirement, but a bonus)

…to work with us to set up YeshID in your environment. We’d love to show you how you can be more efficient, secure, and compliant with us!

If you are interested, please reach out to support@yeshid.com. Of course, you are always welcome to sign up in your own time here.

The post Upgrade your Checklist to a YeshList: Identity & access management done right appeared first on YeshID.

Wednesday, 17. April 2024

KuppingerCole

Road to EIC 2024: Generative AI


Security concerns in open-source projects are undeniable. This session will delve into strategies for ensuring data integrity and safeguarding against vulnerabilities. This knowledge is crucial for anyone looking to utilize Generative AI technology responsibly and effectively. Additionally, it will prepare you for the bootcamp's exploration of scaling and optimizing your AI stack, touching on the challenges of scalability, performance optimization, and the advantages of community collaboration in open-source projects.

By attending this webinar, you will gain the essential background to not only follow but actively participate in the bootcamp on the 4th of June. Whether you are a business leader or a technical professional, this session will ensure you are ready to explore how to build, scale, and optimize a Generative AI tech stack, setting a solid foundation for your journey into the future of technology.

- Acquire knowledge about the fundamentals of Generative AI and its potential to reshape industries, creating a groundwork for advanced discussions in the upcoming "Constructing the Future" bootcamp.
- Investigate the importance of open-source technology in AI development, focusing on the architecture and capabilities of the Mixtral 8x7b Language Model, crucial for constructing a flexible and secure tech stack.
- Gain insights into essential strategies ensuring data integrity and protection against vulnerabilities in open-source projects, empowering you to responsibly and effectively use Generative AI technology.
- Acquire insights into the hurdles of scaling and optimizing your AI stack, covering performance optimization and showcasing the advantages of community collaboration within open-source projects.


Holochain

Designing Regenerative Systems to Nurture Collective Thriving

#HolochainChats with Ché Coelho & Ross Eyre

As interdisciplinary designers Che and Ross explain, prevailing technology too often serves narrow corporate interests rather than the empowerment of communities. Yet lessons from diverse, decentralized ecosystems demonstrate that more holistic economic models can align with collective thriving.

Just as living systems are circular, we must redesign digital infrastructure to nurture regeneration rather than extracting from the system. 

In our interview with Che and Ross, we learned that by applying principles of interdependence and circularity, technology can shift from concentrating power to cultivating mutually beneficial prosperity. 

Ingredients for Regenerative Value Flows

To build technology capable of empowering communities, we must look to the wisdom found in living systems that have sustained life on this planet for billions of years. As Ross explains:

“Regenerative systems are living systems. And living systems tend to be characterized by things like circular value flows. It's like recycling, nutrients and resources, and energy. They tend to be diverse, and there’s a diversity of forms that allows it to adapt to complex, changing environments. And evolution in that, I think, is learning about information and feedback loops.”

These properties allow ecosystems to be “both intensely resilient and creatively generative at the same time — maintaining integrity through shifts while allowing novelty to emerge.”

Taken together, these key ingredients include:

- Diversity: A variety of components and perspectives allows greater adaptability. Monocultures quickly become fragile.
- Interdependence: Rather than isolated parts, living systems work through symbiotic relationships where waste from one process becomes food for the next in closed nutrient loops.
- Circularity: Resources cycle continuously through systems in balanced rhythms, more akin to the water cycle than a production line. Renewable inputs and outputs avoid depletion.
- Feedback loops: Mechanisms for self-correction through learning. Information flows enable adaptation to dynamic conditions.

Technology typically pursues narrow aims without acknowledging the repercussions outside corporate interests. However, by studying ecological patterns and the deeper properties that sustain them, we can envision digital infrastructure aligned with collective prosperity across interconnected systems. 

Starting the Shift to Regenerative Models

The extractive practices prevalent in mainstream economics have fundamentally different outcomes compared to the circular, regenerative flows seen in natural systems. Commons-based peer production is one way to more closely align with natural systems, yet shifting the entrenched infrastructure rooted in exploitation presents an immense challenge.

As Che recognizes, the tension of, “incubating Commons-oriented projects” lies in “interfacing with a capital-driven system without the projects being subsumed by that system, in a way that suffocates the intuitions of the commons.”

Technology builders, of course, face a choice: will new solutions concentrate power further into existing hierarchies of control or distribute agency towards collective empowerment? Each application encodes certain assumptions and values into its architecture.

Creating regenerative systems therefore requires what some refer to as “transvestment” — deliberately rechanneling resources out of extractive systems into regenerative alternatives aligned with the common good.

As Ross points out: 

“Capital actually needs to be tamed, utilized, because that's where all sorts of value is stored. And that's how we get these projects started and going. But if you're not able to sort of turn them under new Commons-oriented logics, then it escapes.”

Grassroots projects cultivating local resilience while connecting to global knowledge flows demonstrate this paradigm shift. For example, Farm Hack uses open source collaboration to freely share sustainable agriculture innovations.

So as solutions centered on human needs gain traction, the tide may turn towards nurturing collective prosperity.

Nurturing Collective Knowledge

As Che and Ross explained, extractive technology dumps value into corporate coffers while frequently compromising user privacy and autonomy. In contrast, thriving systems empower broad participation at individual and collective levels simultaneously.

For instance, public data funneled through proprietary algorithms often fuels asymmetric AI rather than equitably enriching shared understanding. "Data is being locked up,” Che explains.

“And that puts a lot of power in the hands of a very small group."

Yet human instincts lean towards cooperation and collective learning. Wikipedia stands out as a remarkable example of voluntary collaboration in service of the commons.

“It demonstrates how willing and how fundamental it is for humans to share, to share knowledge and information and contribute towards the commons,” Ross notes. Rather than reduce users to passive consumers, it connects personal growth to universal betterment.

At their best, technologies can thus amplify innate human capacities for cumulative innovation and participatory sensemaking.

By structuring information as nourishing circulations, technology can shift toward cultivating empathy; from addicting users to advertised products to aligning connectivity with meaning. 

Holochain was built to enable this kind of circular value flow, connecting users, resisting centralizing tendencies, and enabling the Commons.

We hope our data and digital traces might then grow communal wisdom rather than being captured and used to control.


1Kosmos BlockID

Blockchain Identity Management: A Complete Guide


Traditional identity verification methods show their age, often proving susceptible to data breaches and inefficiencies. Blockchain emerges as a beacon of hope in this scenario, heralding a new era of enhanced data security, transparency, and user-centric control to manage digital identities. This article delves deep into blockchain’s transformative potential in identity verification, highlighting its advantages and the challenges it adeptly addresses.

What is Blockchain?

Blockchain technology is the decentralized storage of a digital ledger of transactions. Because the ledger is distributed across a network of computers, every transaction gets recorded in multiple places. This decentralized nature ensures that no single entity controls the entire blockchain, and all transactions are transparent to every user.

Types of Blockchains: Public vs. Private

Blockchain technology can be categorized into two primary types: public and private. Public blockchains are open networks where anyone can participate and view transactions. This transparency ensures security and trust but can raise privacy concerns. In contrast, private blockchains are controlled by specific organizations or consortia and restrict access to approved members only. This restricted access offers enhanced privacy and control, making private blockchains suitable for businesses that require confidentiality and secure data management.

Brief history and definition

The concept of a distributed ledger technology, the blockchain, was first introduced in 2008 by a pseudonymous entity known as Satoshi Nakamoto. Initially, it was the underlying technology for the cryptocurrency Bitcoin. The primary goal was to create a decentralized currency, independent of any central authority, that could be transferred electronically in a secure, verifiable, and immutable way. Over time, the potential applications of blockchain have expanded far beyond cryptocurrency. Today, it is the backbone for various applications, from supply chain management and blockchain identity management solutions to voting systems.

Core principles

Blockchain operates on a few core principles. Firstly, it’s decentralized, meaning no single entity or organization controls the entire chain. Instead, multiple participants (nodes) hold copies of the whole blockchain. Secondly, transactions are transparent. Every transaction is visible to anyone who has access to the system. Lastly, once data is recorded on a blockchain, it becomes immutable. This means that it cannot be altered without altering all subsequent blocks, which requires the consensus of most of the blockchain network.

The Need for Improved Identity Verification

Identity verification is a cornerstone of many online processes, from banking to online shopping. However, traditional methods of identity verification leave much to be desired. They often rely on centralized databases of sensitive information, making them vulnerable to data breaches. Moreover, to prove identity, these methods often require users to share personal details repeatedly, increasing the risk of data theft or misuse.

Current challenges in digital identity

Digital credential and identity systems today face multiple challenges. Centralized systems are prime targets for hackers; a single breach can expose the personal data of millions of users. Additionally, users often need to manage multiple usernames and passwords across various platforms, leading to password fatigue and increased vulnerability. There’s also the issue of privacy: centralized digital identity and credential systems often share user data with third parties, sometimes without the user’s explicit consent.

Cost of identity theft and fraud

The implications of identity theft and fraud are vast. For individuals, they can mean financial loss, credit damage, and a long recovery process. For businesses, a breach of sensitive information can result in significant financial losses, reputational damage, and loss of customer trust. According to reports, the annual cost of identity theft and fraud runs into billions of dollars globally, affecting individuals and corporations alike.

How Blockchain Addresses Identity Verification

Blockchain offers a fresh approach to identity verification. By using digital signatures and leveraging its decentralized, transparent, and immutable nature, blockchain technology can provide a more secure and efficient way to verify identity without traditional methods’ pitfalls.

Decentralized Identity

Decentralized identity systems on the blockchain give users complete control over their identity data. Instead of relying on a central authority to store records and verify identity, users can provide proof of their identity directly from a blockchain. This reduces the risk of a centralized data breach and gives users autonomy over their identities and personal data.

Transparency and Trust

Blockchain technology fosters trust through transparency, but the scope of this transparency varies significantly between public and private blockchains. Public blockchains allow an unparalleled level of openness, where every transaction is visible to all, promoting trust through verifiable openness. On the other hand, private blockchains offer a selective transparency that is accessible only to its participants. This feature maintains trust among authorized users and ensures that sensitive information remains protected from the public eye, aligning with privacy and corporate security requirements.

Immutability

Once identity data is recorded on a blockchain, it cannot be altered without consensus. This immutability ensures that identity data remains consistent and trustworthy, and it prevents malicious actors from changing identity data for fraudulent purposes.

Smart Contracts

Smart contracts automate processes on the blockchain. In identity verification, smart contracts can automatically verify a user’s identity when certain conditions are met, eliminating the need for manual verification and reducing both the time required and the potential for human error.
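
To make the idea concrete, here is a deliberately simplified sketch of the kind of condition-based logic a smart contract encodes; it is plain Python for illustration, not on-chain code, and the issuer identifier and claim are hypothetical:

# Conceptual sketch only: real smart contracts run on-chain (e.g., in Solidity).
from dataclasses import dataclass

@dataclass
class Credential:
    holder: str
    issuer: str
    attested_over_18: bool

TRUSTED_ISSUERS = {"did:example:government-registry"}  # hypothetical issuer

def verify(credential: Credential) -> bool:
    # The "contract" auto-approves when its conditions are met: a trusted
    # issuer has attested the claim. No manual review is needed.
    return credential.issuer in TRUSTED_ISSUERS and credential.attested_over_18

print(verify(Credential("did:example:alice",
                        "did:example:government-registry", True)))  # True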

Benefits of Blockchain Identity Verification

Blockchain’s unique attributes offer a transformative approach to identity verification, addressing many of the challenges faced by traditional verification methods.

Enhanced Security

Traditional identity verification systems, being centralized, are vulnerable to single points of failure. If a hacker gains access, the entire system can be compromised. Blockchain, with its decentralized nature, eliminates this single point of failure. Each block is cryptographically linked to the previous one, so if one block is tampered with, it is immediately evident, making unauthorized alterations nearly impossible.

User Control

Centralized identity systems often store user data in silos, giving organizations control over individual data. Blockchain shifts this control back to users. With decentralized identity solutions, individuals can choose when, how, and with whom they share their personal information. This not only enhances data security and privacy but also reduces the risk of data being mishandled or misused by third parties.

Reduced Costs

Identity verification, especially in sectors like finance, can be costly. Manual verification processes, paperwork, and the infrastructure needed to support centralized databases contribute to these costs. Blockchain can automate many of these processes using smart contracts, reducing the need for intermediaries and manual interventions and leading to significant cost savings.

Interoperability

In today’s digital landscape, individuals often have their digital identities and personal data scattered across various platforms, each with its verification process. Blockchain can create a unified, interoperable system where one’s digital identity documents can be used across multiple platforms once verified on one platform. This not only enhances user convenience but also streamlines processes for businesses.

The Mechanics Behind Blockchain Identity Verification

Understanding the underlying mechanics is crucial to appreciating the benefits of blockchain-based identity verification.

How cryptographic hashing works

Cryptographic hashing is at the heart of blockchain security. When a transaction occurs, it’s converted into a fixed-size string of numbers and letters using a hash function. This unique hash is practically impossible to reverse-engineer. When a new block is created, it contains the previous block’s hash, creating a chain. Any alteration in a block changes its hash, breaking the chain and alerting the system to potential tampering.
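
A minimal sketch of this hash-chaining, using Python's standard hashlib; the records are hypothetical:

# Minimal hash-chain sketch showing tamper evidence.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each block embeds the hash of the previous block.
genesis = {"data": "identity record A", "prev": "0" * 64}
block2 = {"data": "identity record B", "prev": block_hash(genesis)}
block3 = {"data": "identity record C", "prev": block_hash(block2)}

# Tamper with the first block: the stored link no longer matches.
genesis["data"] = "forged record"
print(block2["prev"] == block_hash(genesis))  # False -> tampering is evident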

Public and private keys in identity verification

Blockchain uses a combination of public and private keys to ensure secure transactions. A public key serves as a user’s address on the blockchain, while a private key is the secret that allows them to sign transactions. Only individuals with the correct private key can access and share their data for identity verification, ensuring data integrity and security.
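
As a hedged illustration of the keypair mechanics, the sketch below signs and verifies a claim with an Ed25519 key via the PyNaCl library; the claim and identifier are hypothetical:

# Sign/verify sketch (illustrative; assumes PyNaCl).
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()     # private key: kept secret by the user
verify_key = signing_key.verify_key     # public key: shared openly

claim = b"I control the identity did:example:alice"
signed = signing_key.sign(claim)

# Anyone holding the public key can check the signature.
verify_key.verify(signed)               # returns the message if valid
try:
    verify_key.verify(b"forged claim", signed.signature)
except BadSignatureError:
    print("forged claim rejected")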

The role of consensus algorithms

Consensus algorithms are protocols under which a transaction is considered valid once the majority of participants in the network agree on it. They play a crucial role in maintaining the trustworthiness of the blockchain. In identity verification, consensus algorithms ensure that once a user’s identity data is added to the blockchain, it is accepted and recognized by the majority, ensuring data accuracy and trustworthiness.
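
As a toy illustration of the majority principle only (real networks use far more robust protocols such as Proof of Work, Proof of Stake, or BFT variants); the node names and records are hypothetical:

# Toy majority-vote consensus sketch.
from collections import Counter

def reach_consensus(votes):
    # Accept a value only if a strict majority of nodes agree on it.
    value, count = Counter(votes.values()).most_common(1)[0]
    return value if count > len(votes) / 2 else None

votes = {"node1": "identity-record-X", "node2": "identity-record-X",
         "node3": "identity-record-X", "node4": "identity-record-Y",
         "node5": "identity-record-X"}
print(reach_consensus(votes))  # identity-record-X is accepted by the network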

Challenges and Concerns

While blockchain offers transformative potential for identity verification, it’s essential to understand the challenges and concerns associated with its adoption.

Scalability

One of the primary challenges facing blockchain technology is scalability. As the number of transactions on a blockchain increases, so does the time required to process and validate them. This could mean delays in identity verification, especially if the system is adopted on a large scale. Solutions like off-chain transactions and layer two protocols are being developed to address this, but it remains a concern.

Privacy Concerns

While blockchain offers enhanced security, the level of privacy depends on whether the blockchain is public or private. In public blockchains, the transparency of transactions means that every action is visible to anyone on the network, which can compromise user privacy. Conversely, private blockchains control access and visibility of transactions to authorized participants only, significantly mitigating privacy risks. This controlled transparency is important in environments where confidentiality is paramount, leveraging blockchain’s security benefits without exposing sensitive data to the public.

Regulatory and Legal Issues

The decentralized nature of blockchain challenges traditional regulatory frameworks. Different countries have varying stances on blockchain and its applications, leading to a fragmented regulatory landscape. For businesses looking to adopt blockchain for identity verification and online services, navigating this complex regulatory environment can take considerable time and effort.

Adoption Barriers

Despite its benefits and technological advancements, blockchain faces skepticism. Many businesses hesitate to adopt a relatively new technology, especially when it challenges established processes. Additionally, the lack of a standardized framework for blockchain identity management and verification, and the prospect of a complete ecosystem overhaul, can deter many from adoption.

Blockchain Identity Verification Standards and Protocols

For blockchain-based identity verification to gain widespread acceptance, there’s a need for standardized protocols and frameworks.

Decentralized Identity Foundation (DIF)

The Decentralized Identity Foundation (DIF) is an alliance of companies, financial institutions, educational institutions, and other organizations working together to develop a unified, interoperable ecosystem for decentralized identity solutions. Their work includes creating specifications, protocols, and tools to ensure that blockchain-based identity solutions are consistent, reliable, and trustworthy.

Self-sovereign identity principles

Self-sovereign identity is a concept where individuals have ownership and control over their data without relying on a centralized database or authorities to verify identities. The principles of self-sovereign identity emphasize user control, transparency, interoperability, and consent. Blockchain’s inherent attributes align well with these principles, making it an ideal technology for realizing self-sovereign digital identity.

Popular blockchain identity protocols

Several protocols aim to standardize blockchain identity verification. Notable ones include DIDs (Decentralized Identifiers), a new type of identifier created, owned, and controlled by the subject of the digital identity, and Verifiable Credentials, which allow individuals to share proofs about personal data without revealing the actual data.
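
For a sense of what these look like in practice, here is a minimal W3C DID document sketched as a Python dict; the identifier and key value are placeholders, and the DID Core specification defines the full format:

# Minimal DID document sketch; identifier and key are placeholders.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk...placeholder...",
    }],
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}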

Through its unique attributes, blockchain presents a compelling and transformative alternative to the pitfalls of conventional identity management and verification systems. By championing security, decentralization, and user empowerment, it sets a new standard for the future of digital identity and access management. To understand how this can redefine your identity management and verification processes, book a call with us today and embark on a journey toward a stronger security posture.

The post Blockchain Identity Management: A Complete Guide appeared first on 1Kosmos.

Tuesday, 16. April 2024

Indicio

SITA Leads the Way in Aviation Digital Revolution with Advanced Biometric Solutions

The post SITA Leads the Way in Aviation Digital Revolution with Advanced Biometric Solutions appeared first on Indicio.

auth0

Call Protected APIs from a Blazor Web App

Calling a protected API from a .NET 8 Blazor Web App can be a bit tricky. Let's see what the problems are and how to solve them.

Microsoft Entra (Azure AD) Blog

Microsoft Graph activity logs is now generally available


We’re excited to announce the general availability of Microsoft Graph activity logs! Microsoft Graph activity logs give you visibility into HTTP requests made to the Microsoft Graph service in your tenant. With rapidly growing security threats and an increasing number of attacks, this log data source allows you to perform security analysis, threat hunting, and monitor application activity in your tenant.

Some common use cases include:

- Identifying the activities that a compromised user account conducted in your tenant.
- Building detections and behavioral analysis to identify suspicious or anomalous use of Microsoft Graph APIs, such as an application enumerating all users, or making probing requests with many 403 errors.
- Investigating unexpected or unnecessarily privileged assignments of application permissions.
- Identifying problematic or unexpected behaviors for client applications, such as extreme call volumes that cause throttling for the tenant.

You’re currently able to collect sign-in logs to analyze authentication activity and audit logs to see changes to important resources. With Microsoft Graph activity logs, you can now investigate the complete picture of activity in your tenant – from token request in sign-in logs, to API request activity (reads, writes, and deletes) in Microsoft Graph activity logs, to ultimate resource changes in audit logs.

Figure 1: Microsoft Graph activity logs in Log Analytics.

We’re delighted to see many of you applying the Microsoft Graph activity logs (Preview) to awesome use cases. As we listened to your feedback on cost concerns, particularly for ingestion to Log Analytics, we’ve also enabled Log Transformation and Basic Log capabilities to help you scope your log ingestion to a smaller set if desired.

To illustrate working with these logs, we can look at some basic queries:

Summarize applications and principals that have made requests to change or delete groups in the past day:

MicrosoftGraphActivityLogs
| where TimeGenerated > ago(1d)
| where RequestUri contains '/group'
| where RequestMethod != "GET"
| summarize UriCount=dcount(RequestUri) by AppId, UserId, ServicePrincipalId, ResponseStatusCode

See recent requests that failed due to authorization:

MicrosoftGraphActivityLogs
| where TimeGenerated > ago(1h)
| where ResponseStatusCode == 401 or ResponseStatusCode == 403
| project AppId, UserId, ServicePrincipalId, ResponseStatusCode, RequestUri, RequestMethod
| limit 1000

Identify resources queried or modified by potentially risky users:

Note: This query leverages Risky User data from Entra ID Protection.

MicrosoftGraphActivityLogs
| where TimeGenerated > ago(30d)
| join AADRiskyUsers on $left.UserId == $right.Id
| extend resourcePath = replace_string(replace_string(replace_regex(tostring(parse_url(RequestUri).Path), @'(\/)+','/'),'v1.0/',''),'beta/','')
| summarize RequestCount=dcount(RequestId) by UserId, RiskState, resourcePath, RequestMethod, ResponseStatusCode

Microsoft Graph activity logs are available through the Azure Monitor Logs integration of Microsoft Entra. Administrators of Microsoft Entra ID P1 or P2 tenants can configure the collection and storage destinations of Microsoft Graph activity logs through the diagnostic setting in the Entra portal. These settings allow you to configure the collection of the logs to a storage destination of your choice. The logs can be stored and queried in an Azure Log Analytics Workspace, archived in Azure Storage Accounts, or exported to other security information and event management (SIEM) tools through Azure Event Hubs. For logs collected in a Log Analytics Workspace, you can use the full set of Azure Monitor Logs features, such as a portal query experience, alerting, saved queries, and workbooks.

Find out how to enable Microsoft Graph activity logs, sample queries, and more in our documentation.

Kristopher Bash 

Product Manager, Microsoft Graph 
LinkedIn

Learn more about Microsoft Entra:

- See recent Microsoft Entra blogs
- Dive into Microsoft Entra technical documentation
- Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
- Join the conversation on the Microsoft Entra discussion space
- Learn more about Microsoft Security

This week in identity

E49 - The IAM and Fraud Episode

After a small spring break, Simon and David return with a special episode focused on the convergence of identity and access management and fraud. Why the convergence? How to measure success? What are the three 'V's' as they relate to fraud? How should people and process adapt to keep up with technology changes? And how to thwart the asymmetric advantage of the fraudster?


Shyft Network

Giottus Integrates Shyft Veriscope as its FATF Travel Rule Solution


Shyft Network is excited to announce that Giottus, one of India’s leading cryptocurrency exchanges, has integrated Shyft Veriscope as its FATF Travel Rule Solution. Giottus’ decision to choose Shyft Veriscope proves once again that Veriscope is one of the most trusted and effective Travel Rule Solutions among VASPs worldwide.

This strategic partnership establishes Veriscope, the Shyft Network’s one-of-a-kind compliance technology solution, as a leader in the secure exchange of personally identifiable information (PII) for frictionless Travel Rule compliance. The collaboration is timely, aligning with India’s new crypto regulatory measures, which include the FATF Travel Rule that the country implemented in 2023.

Why did Giottus Choose Veriscope?

As an entity already reporting to India’s Financial Intelligence Unit and a member of the Alliance of Reporting Entities for AML/CFT (ARIFAC), Giottus needs an efficient approach to Travel Rule compliance, and Veriscope facilitates this with its streamlined and automated system.

Moreover, by adopting Veriscope, Giottus is positioned advantageously over other Indian VASPs, which may still rely on manual means to collect Travel Rule information, such as Google Forms and email. These traditional methods are less efficient and user-friendly compared to Veriscope’s automated and privacy-oriented approach.

Speaking about Giottus’ integration with Shyft Veriscope, Zach Justein, Veriscope co-founder, said:

“Giottus’ integration of Veriscope as its Travel Rule Solution demonstrates the unique advantages it offers to VASPs with its state-of-the-art compliance infrastructure for seamless FATF Travel Rule compliance. This is a significant development for the entire crypto ecosystem, as with this integration, both Veriscope and Giottus are setting a new standard for unwavering commitment to safety, transparency, and user experience.”

Vikram Subburaj, Giottus CEO, too, welcomed this development, noting:

“Since its inception in 2018, Giottus has been at the forefront of innovation and compliance in the Indian VDA space. Our partnership with Veriscope is timely and pivotal to establish us as part of a global compliance network and to strengthen our offering to all Indian crypto enthusiasts. We believe that collaboration and data exchange are crucial in shaping the future of this industry and are thankful to Veriscope for integrating us. We look forward to driving a positive change in the Indian VDA ecosystem.”
Conclusion

Overall, we expect our collaboration with Giottus to yield positive outcomes not only for Giottus and Shyft Veriscope but also for India’s crypto ecosystem. This partnership sets a new precedent for the country’s VASPs, as they can now comply with the FATF Travel Rule effortlessly while continuing with their privacy and user-friendly developments.

About Giottus

Giottus, with over a million users, is a customer-centric, all-in-one crypto investment platform that is changing the way Indian investors trade their virtual digital assets. Giottus aims to shed barriers that arise from the complexity of the asset class and the need to transact in English. We focus on building a simplified platform that is vernacular at heart. Investors can buy and sell crypto assets on Giottus in eight languages, including Hindi, Tamil, Telugu, and Bengali. Giottus is currently India’s top-rated crypto platform as per consumer ratings on Facebook, Google, and Trustpilot.

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (formerly Twitter), LinkedIn, Telegram, and Medium.

Giottus Integrates Shyft Veriscope as its FATF Travel Rule Solution was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Jun 20, 2024: Unveiling the Triad: Zero Trust, Identity-First Security, and ITDR in Identity Cybersecurity

Dive into the intricate world of identity cybersecurity, where the convergence of Zero Trust, Identity-First Security, and Identity Threat Detection and Response (ITDR) presents both opportunities and challenges. With escalating cyber threats targeting identity assets, organizations face the daunting task of safeguarding sensitive data and systems while ensuring seamless operations.

Elliptic

Crypto regulatory affairs: Hong Kong regulator approves Bitcoin and Ether ETFs

Regulators in Hong Kong have approved Bitcoin and Ether exchange traded funds (ETFs), providing another signal that Hong Kong is positioned to serve as a hub for well-regulated crypto activity. 



IDnow

IDnow bridges the AI-human divide with new expert-led video verification solution

New VideoIdent Flex elevates trust with a human touch in the face of rising fraud and the closing of physical bank branches

London, April 16, 2024 – IDnow, a leading identity verification provider in Europe, has unveiled VideoIdent Flex, a new version of its expert-led video verification service that blends advanced AI technology with human interaction. The human-based video call solution, supported by AI, has been designed and built to boost customer conversion rates, reduce rising fraud attempts, increase inclusivity, and tackle an array of complex online verification scenarios, while offering a high-end service experience to end customers.

The company's original expert-led product, VideoIdent, has been a cornerstone in identity verification for over a decade, serving the strictest requirements in highly regulated industries across Europe. VideoIdent Flex, re-engineered specifically for the UK market, represents a significant evolution: it addresses the growing challenges of identity fraud and of compliance with Know-Your-Customer (KYC) and Anti-Money Laundering (AML) processes, and it ensures fair access and inclusivity in today's digital world outside of fully automated processes.

Empowering businesses with flexible human-based identity verification

As remote identity verification becomes more crucial yet more challenging, VideoIdent Flex combines high-quality live video identity verification with hundreds of trained verification experts, thus ensuring that genuine customers gain equal access to digital services while effectively deterring fraudsters and money mules. Unlike fully automated solutions based on document liveness and biometric liveness features, this human-machine collaboration not only boosts onboarding rates and prevents fraud but also strengthens trust and confidence in both end users and organizations. VideoIdent Flex can also serve as a fallback service in case a fully automated solution fails.

Bertrand Bouteloup, Chief Commercial Officer at IDnow, commented: “VideoIdent Flex marks a groundbreaking advancement in identity verification, merging AI-based technology with human intuition. In a landscape of evolving fraud tactics and steady UK bank branch closures, our solution draws on our decade’s worth of video verification experience and fraud insights, empowering UK businesses to maintain a competitive edge by offering a white glove service for VIP onboarding. With its unique combination of KYC-compliant identity verification, real-time fraud prevention solutions, and expert support, VideoIdent Flex is a powerful tool for the UK market.”

Whereas previously firms may have found video identification solutions to be excessive for their compliance requirements or out of reach due to costs, VideoIdent Flex opens up this option by customizing checks as required by the respective regulatory bodies in financial services, mobility, telecommunications or gaming, offering a streamlined solution fit for every industry and geography.

Customizable real-time fraud prevention for high levels of assurance

VideoIdent Flex has a number of key features and benefits:

Customizable: Pre-defined configurations to meet specific industry requirements and regional regulations.
Expert-led: High-quality live video verification conducted by trained identity verification experts, ensuring accuracy, reliability, and compliance for high levels of assurance.
Extensive document coverage: Support for a wide range of documents, facilitating global expansion and inclusivity.
Real-time fraud prevention: Advanced fraud detection capabilities, including AI-driven analysis and manual checks, combat evolving fraud tactics and help protect against social engineering fraud, document tampering, projection and deepfakes, especially for high-risk use cases and goods.
Verification of high-risk individuals: Reviewing applications from high-risk persons, such as Politically Exposed Persons (PEPs) or applicants from high-risk countries, or assessing where fraud might be expected, with real-time decisions and without arousing suspicion.

Bouteloup concluded: “Identity verification is incredibly nuanced; it’s as intricate as we are as human beings. This really compounds the importance of adopting a hybrid approach to identity – capitalizing on the dual benefits of advanced technology when combined with human knowledge and awareness of social cues. With bank branches in the UK closing down, especially in the countryside, and interactions becoming more and more digital, our solution offers a means to maintain a human relationship between businesses and their end customers, no matter their age, disability or neurodiversity.   

“VideoIdent Flex is designed from the ground up for organizations that cannot depend on a one-size-fits-all approach to ensuring their customers are who they say they are. In a world where fraud is consistently increasing, our video capability paired with our experts adds a powerful layer of security, especially for those businesses and customers that require a face-to-face interaction.”


Subtle flex: IDnow team explains why video verification could revolutionize the UK market.

We sit down with our Principal Product Manager, Nitesh Nahta and Senior Product Marketing Manager, Suzy Thomas to find out why VideoIdent Flex is all set to become a game changer.

In April 2024, we launched VideoIdent Flex, a customizable video identity verification solution aimed specifically at the UK market.  

Our original expert-led product, VideoIdent, has been a cornerstone in identity verification for over a decade, serving the strictest requirements in highly regulated industries across Europe. VideoIdent Flex, re-engineered specifically for the UK market, addresses the nation's growing challenges of identity fraud and compliance-related Know-Your-Customer (KYC) and Anti-Money Laundering (AML) requirements. It also ensures fair access and inclusivity in today's digital world.

Can you tell us a little more about VideoIdent Flex?

Nitesh: VideoIdent Flex, from our expert-led video verification toolkit, will revolutionize the onboarding process by focusing on the human touch in KYC onboarding. Our proprietary technology will boost conversion rates while thwarting escalating fraud attempts. What sets it apart? Unlike its predecessors, VideoIdent Flex transcends its origins as a German-centric product. Leveraging years of refinement, insights and unmatched security, we’re extending its capabilities beyond German borders. 

Suzy: VideoIdent Flex caters to a diverse range of global use cases, including boosting customer conversion rates, reducing fraud attempts, verifying high-risk individuals and onboarding VIPs. It offers a face-to-face service or provides an accessible and personalized alternative to automated identification processes.

Further differentiating IDnow in the market, VideoIdent Flex can also be combined with our digital signature solutions, allowing us to expand into loan and investment use cases from financial services, as well as recruitment, legal and insurance. With its advanced technology and expert-led verification, VideoIdent Flex offers three pre-configured packages tailored to suit different risk levels within organizations.

How important do you think VideoIdent Flex will be to the UK market?

Nitesh: Video verification is already a trusted AML-compliant onboarding solution across many European countries. Enter VideoIdent Flex: a versatile product catering to both AML and non-AML needs, boasting a seamless process, high conversion rates, and budget-friendly pricing. This marks a significant shift for IDnow, offering a distinctive value proposition that sets us apart. It’s a game-changer, enticing customers outside of the DACH region who hadn’t previously explored expert-led video verification.

Already embraced by numerous EU clients, this launch signifies our expansion beyond the DACH market, solidifying our foothold across the continent.  

Suzy: I’m excited about the potential impact of VideoIdent Flex! It not only expands our market reach beyond the DACH and France+ regions, but also allows IDnow to break into new territories, including regulated and unregulated sectors in the UK, and non-regulated markets in DACH, France and beyond. With the UK’s highly competitive document and biometric verification market, VideoIdent Flex serves as a powerful differentiator, offering a face-to-face, fraud busting solution that will drive organizations to better recognize, trust and engage with IDnow.  

I believe there are three main drivers that make this an excellent time to launch our new solution: 

The economic pressures to close expensive physical branches.
Increasing consumer demand for remote digital experiences that can be accessed anytime, anywhere.
Societal pressure driving environmental and corporate governance concerns.

We’re excited to launch and see the market reaction. We believe it significantly enhances our value proposition and solidifies our position as industry leaders in identity verification. 

Interested in more information about VideoIdent Flex? Check out our recent blog, ‘How video identity verification can help British businesses finally face up to fraud.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


How video identity verification can help British businesses finally face up to fraud.

In an increasingly branchless, AI-automated world, offer your customers the VIP premium face-to-face experience they deserve.

Technology – it gives with one hand and takes with the other.  

Technology, and the internet in particular, has afforded consumers incredible convenience, providing 24/7 access to services, across every industry imaginable. Unfortunately, technology has also empowered criminals to commit fraud with an effortless ease and at an unprecedented level.
 
Discover more about the scourge of fraud in the UK, in our blog, ‘UK fraud strategy declares war on fraud, calls on tech giants to join the fight.’ 

On average, multinational banks are subjected to tens of thousands of fraud attacks every single month, ranging from account takeover fraud all the way to money laundering. It is for this reason, among many others, that it's paramount to verify the identity of customers as a preventative measure against fraud.

As businesses scale and the need to onboard customers quickly increases, many banks have implemented AI-assisted automated identity verification solutions. In certain high-risk circumstances, however, the importance of human oversight cannot be overstated. 

As bank branches continue to close (almost three-fifths of the UK's bank network has closed since 2015), many UK banks are beginning to look for alternatives to data checks and automatic checks and are turning to expert-led video verification solutions.

UK Fraud Awareness Report: Learn more about the British public's awareness of fraud and their attitudes toward fraud-prevention technology. Get your free copy.

Our recently launched VideoIdent Flex, specially designed for the UK market, works in two ways: full service or self-service, meaning banks can choose to either use our extensive team of multilingual identity experts, or have their bank staff trained to the highest standard of fraud prevention experts. Here’s how VideoIdent Flex can help.

Tackling fraud, in all 4 forms.

At IDnow, we categorize fraud into four different buckets. Here’s how VideoIdent Flex can help tackle fraud, in all its forms.

1. Fake ID fraud.

We classify fake ID fraud as the use of forged documents or fraudulent manipulations of documents. Common types of document fraud – the act of creating, altering or using false or genuine documents, with the intent to deceive or pass specific controls – include:  

Counterfeit documents: reproduction of an official document without the proper authorization from the relevant authority.
Forged documents: deliberate alteration of a genuine document in order to add, delete or modify information, while passing it off as genuine. Forged documents can include photo substitution, page substitution, data alteration, attack on the visas or entry/exit stamp.
Pseudo documents: documents that replicate codes from official documents, such as passports, driver's licenses or national identity cards.

How VideoIdent Flex identifies and stops fake ID fraud.

As a first step, IDnow's powerful automated checks can detect damaged documents, invalid/cut corners, photocopies and much more. As a second step and additional layer of assurance, identity experts, specially trained in fraud prevention, ask document owners to cover or bend certain parts of documents as a way of detecting fake IDs and document deepfakes.

2. Identity theft fraud.

Identity theft fraud is when individuals use stolen, found or given identity documents without permission, or when one person pretends to be another. Although popular among teenagers to buy age-restricted goods like alcohol and tobacco, fake IDs are also used for more serious crimes like human trafficking and identity theft. There are numerous forms of identity theft, with perhaps the darkest being 'ghosting fraud'.

Discover more about the fraudulent use of a deceased person’s personal information in our blog, ‘Ghosting fraud: Are you doing business with the dead?’ 

How VideoIdent Flex identifies and stops identity theft fraud. 

IDnow’s identity verification always begins with powerful automated identity checks of document data. Our identity specialists will then perform interactive motion detection tests like a hand movement challenge to detect deepfakes. To prevent cases of account takeover, VideoIdent Flex can be used to help customers reverify their identity when any changes to accounts (address, email etc) are made.

3. Social engineering fraud.

Worryingly, according to our recently published UK Fraud Awareness Report, more than half of Brits (54%) do not know what social engineering is. Social engineering fraud refers to the use of deception to manipulate individuals into divulging personal information, money or property and is an incredibly prevalent problem. Common examples of social engineering fraud include social media fraud and romance scams.

How VideoIdent Flex identifies and stops social engineering fraud.

To help prevent social engineering, in all its forms, our identity experts ask a series of questions specifically designed to identify whether someone has fallen victim to a social engineering scam. There are three different levels of questions: Basic, Advanced and Premium, with questions ranging from "Has anyone promised you anything (money, a loan etc.) in return for this identification?" to "Has anyone prepared you for this identification?"

4. Money mules. 

Although many may initially envisage somebody being stopped at the airport with a suitcase full of cash, money muling, like every fraudulent activity, has gone digital, and the term now extends to persons who receive money from a third party in order to transfer it to someone else. It is important to distinguish between "money mules" and "social engineering": money mules are involved in fraud scenarios (i.e. bank drops) and cooperate as part of the criminal scheme.

In a social engineering scenario, a person is a victim of fraud and is usually unaware that they are breaking the law with their behaviour. They are tricked into opening accounts, e.g. through job advertisements.

How VideoIdent Flex identifies and stops money mule fraud. 

Our fraud prevention tools like IP address collection and red flag alerts of suspicious combinations of data, such as email domains, phone numbers and submitted documents, can go some way to help prevent money mule fraud. However, as an additional safeguard, when combined with VideoIdent Flex, agents can be trained to pick up on suspicious social cues and pose certain questions.

Why video identity verification is an indispensable fraud-prevention tool.

Check out our interview with our Principal Product Manager, Nitesh Nahta, and Senior Product Marketing Manager, Suzy Thomas, to discover more about how our expert-led video identity verification product, VideoIdent Flex, can be used to boost customer conversion rates, reduce rising fraud attempts, and tackle an array of complex online verification scenarios and inclusivity and accessibility challenges.

Learn more about Brits' awareness of the latest fraud terms, the industries most susceptible to fraud and the usage of risky channels by reading our 'What the UK really knows about fraud' and 'The role of identity verification in the UK's fight against fraud' blogs.

Or, if you're interested in how fraud trends and fraud prevention have changed over the years, read our interview with ex-CID officer Paul Taylor.

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Ocean Protocol

ASI Alliance Vision Paper

Building Decentralized Artificial Superintelligence

SingularityNET, Fetch.AI, and Ocean Protocol are merging their tokens into the Artificial Superintelligence Alliance (ASI) token. The merged token aligns incentives of the projects to move faster and with more scale.

There are three pillars of focus:

R&D to build Artificial Superintelligence
Practical AI application development; towards a unified stack
Scale up decentralized compute for ASI

***HERE IS THE VISION PAPER [pdf]***. It expands upon these pillars, and more.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable businesses and individuals to trade tokenized data assets seamlessly to manage data all along the AI model life-cycle. Ocean-powered apps include enterprise-grade data exchanges, data science competitions, and data DAOs. Our Ocean Predictoor product has over $800 million in monthly volume, just six months after launch with a roadmap to scale foundation models globally. Follow Ocean on Twitter or TG, and chat in Discord; and Ocean Predictoor on Twitter.

ASI Alliance Vision Paper was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


TBD

tbDEX 1.0 Now Available


The first major version of tbDEX, an open source liquidity and trust protocol, has been released! 🎉 SDK implementations of the protocol are available in TypeScript/JavaScript, Kotlin, and Swift enabling integration with Web, Android, and iOS applications.

tbDEX enables wallet applications to connect liquidity seekers with providers and equips all participants with a common language for facilitating transactions.

tbDEX is architected on Web5 infrastructure, utilizing decentralized technologies such as Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) to securely validate counterparty identity and trust, as well as helping to enable compliance with relevant laws and regulations.

🏦 Features for PFIs

Participating Financial Institutions (PFIs) can use tbDEX to provide liquidity to any wallet application in the world that also uses tbDEX. Version 1.0 of tbDEX includes the ability to:

Provide a static list of offered currency pairs and payment methods

Specify the required credentials the customer must provide in order to transact

Provide real-time quotes based on the financial transaction the customer is requesting as well as the payment methods selected

Provide status updates on orders

Indicate if the transaction was completed successfully or not

💼 Features for Wallets

Wallet applications using tbDEX act as agents for customers who are seeking liquidity. Version 1.0 of tbDEX includes the ability to:

Obtain service offerings from PFIs to determine which meet your customers' needs

Initiate exchanges with PFIs

Present verifiable credentials to PFIs on behalf of your customers

Receive real-time quotes and place orders

Receive status updates on orders

Cancel an exchange
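
To make the wallet flow above concrete, here is a minimal sketch of the first step: fetching a PFI's offerings and selecting one for a customer's currency pair. The package and method names are modeled on the tbDEX TypeScript SDK but should be treated as assumptions rather than a verified API surface, and the offering field paths are illustrative.

```typescript
// Hedged sketch: package/method names and field paths are assumptions
// modeled on the tbDEX TypeScript SDK, not a verified API surface.
import { TbdexHttpClient } from '@tbdex/http-client';

async function pickOffering(pfiDid: string, payin: string, payout: string) {
  // Obtain the PFI's static list of offered currency pairs and payment methods.
  const offerings = await TbdexHttpClient.getOfferings({ pfiDid });

  // Select an offering matching the customer's requested pair.
  return offerings.find(
    (o: any) =>
      o.data.payin.currencyCode === payin &&
      o.data.payout.currencyCode === payout,
  );
}
```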

✅ Features for Issuers

In a tbDEX ecosystem, verifiable credentials - created and distributed by Issuers - serve as a method for establishing trust and facilitating regulatory compliance during transactions. tbDEX utilizes the Web5 SDK to allow Issuers to:

Create decentralized identifiers for PFIs, Issuers, and Wallet users

Issue verifiable credentials

Verify credentials

KYC Credential

We have developed a Known Customer Credential specifically designed to represent a PFI's Know Your Customer regulatory requirements.
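
As a rough illustration of the issuer role, the sketch below creates an issuer DID and issues a KYC-style credential with the Web5 SDK. The class and option names follow the SDK's documented style but are assumptions here, and the credential type string and claim payload are invented for illustration rather than taken from the Known Customer Credential specification.

```typescript
// Hedged sketch of an issuer creating a DID and issuing a KYC-style
// credential; names are assumptions modeled on the Web5 SDK's style.
import { DidDht } from '@web5/dids';
import { VerifiableCredential } from '@web5/credentials';

async function issueKnownCustomerCredential(customerDidUri: string) {
  const issuerDid = await DidDht.create(); // issuer's decentralized identifier

  const vc = await VerifiableCredential.create({
    type: 'KnownCustomerCredential', // hypothetical type identifier
    issuer: issuerDid.uri,
    subject: customerDidUri,
    data: { kycCompleted: true },    // illustrative claim
  });

  // Sign to produce a verifiable token the customer's wallet can present to a PFI.
  return vc.sign({ did: issuerDid });
}
```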

🛠️ Get Started with tbDEX

tbDEX allows for a permissionless network, meaning you do not need our blessing to use the SDK. It's all open source, so feel free to begin building with tbDEX today!

If there are missing features that your business needs, we welcome your feedback and/or contributions.

Visit tbdex.io

Monday, 15. April 2024

KuppingerCole

The Right Foundation for Your Identity Fabric


Identity Fabrics have been established as the leading paradigm for a holistic approach to IAM, covering all aspects of IAM and all types of identities (human and non-human), and integrating these. Identity Fabrics can be constructed with a few or several tools. In the Leadership Compass Identity Fabrics, we've looked at solutions that cover multiple areas of IAM or that provide strong orchestration capabilities.

In this webinar, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will look at the status and future of Identity Fabrics, at what to consider when defining your own approach for an Identity Fabric, and at what the vendor landscape looks like. He will discuss different approaches, from unified solutions to integrating/orchestrating different best-of-breed solutions. He will also look at the best approach for defining your own Identity Fabric.

Join this webinar to learn:

What makes up a modern Identity Fabric.
Which approach to take for successfully defining your Identity Fabric.
The different ways for constructing an Identity Fabric, from integrated to orchestrated.
The Leaders for delivering a comprehensive foundation for an Identity Fabric.


Dock

13 Identity Management Best Practices for Product Professionals


Striking the balance between rigorous security measures and a fluid user experience represents a challenge for identity companies.

While safeguarding processes and customers is essential, complicated verification procedures can result in drop-offs and revenue loss.

The solution? Implementing identity management best practices that harmonize security with user convenience.

Full article: https://www.dock.io/post/identity-management-best-practices


13 Identity Conferences in 2024 You Should Attend


Identity conferences are great opportunities for exchanging ideas, building networking, and discovering the latest trends and technologies regarding identification, digital identity, IAM and authentication.

In this article, you'll see 13 identity conferences that can help you grow professionally and bring new ideas and solutions to your company.

Full article: https://www.dock.io/post/identity-conferences


Microsoft Entra (Azure AD) Blog

Introducing "What's New" in Microsoft Entra

With more than 800,000 organizations depending on Microsoft Entra to navigate the constantly evolving identity and network access threat landscape, the need for increased transparency regarding product updates — particularly changes you may need to take action on — is critical.

Today, I’m thrilled to announce the public preview of What’s New in Microsoft Entra. This new hub in the Microsoft Entra admin center offers you a centralized view of our roadmap and change announcements across the Microsoft Entra identity and network access portfolio. In this article, I’ll show you how admins can get the most from what’s new to stay informed about Entra product updates and actionable insights.

Discover what’s new in the Microsoft Entra admin center

Because you’ll want visibility to product updates often, we’ve added what’s new to the top section of the Microsoft Entra admin center navigation pane.

Figure 1: What's new is available from the top of the navigation pane in the Microsoft Entra admin center.

What’s new is not available in Azure portal, so we encourage you to migrate to the Microsoft Entra admin center if you haven’t already. It’s a great way to manage and gain cohesive visibility across all the identity and network access solutions.

Overview of what’s new functionality

What’s new offers a consolidated view of Microsoft Entra product updates categorized as Roadmap and Change announcements. The Roadmap tab includes public previews and recent general availability releases, while Change announcements detail modifications to existing features.

Highlights tab

To make your life easier, the Highlights tab summarizes important product launches and impactful changes.

Figure 2: The highlights tab of what's new is a quick overview of key product launches and impactful changes.

Clicking through the items on the highlights tab allows you to get details and links to documentation to configure policies.

Figure 3: Click View details to learn more about an announcement.

Roadmap tab

The Roadmap tab allows you to explore the specifics of public previews and recent general availability releases.

Figure 4: The Roadmap tab lists the current public preview and recent general availability releases.

To know more, you can click on a title for details of that release. Click ‘Learn more’ to open the related documentation.

Figure 5: Learn more about an announcement by clicking its title.

Change Announcements tab

Change announcements include upcoming breaking changes, deprecations, retirements, UX changes and features becoming Microsoft-managed.

Figure 6: Change announcements tab displays changes to the existing features.

You can customize your view according to your preferences, by sorting or by applying filters to prepare a change implementation plan.

Figure 7: Apply filters, sort by columns to create a customized view.

What’s next?

We’ll continue to extend this transparency into Entra product updates and look forward to elevating your experience to new heights. We would love to hear your feedback on this new capability, as well as what would be most useful to you. Explore what's new in Microsoft Entra now.

Best regards,

Shobhit Sahay

Learn more about Microsoft identity: 

See recent Microsoft Entra blogs
Dive into Microsoft Entra technical documentation
Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
Join the conversation on the Microsoft Entra discussion space
Learn more about Microsoft Security

liminal (was OWI)

Mastering Compliance: The Rise of Privacy and Consent Management Solutions

The handling of user data has become a central concern for businesses worldwide. As organizations navigate increasing regulations, the need for robust privacy and consent management solutions has never been more urgent. The changing landscape of data privacy, the challenges businesses face, and the sophisticated solutions emerging to address these issues are transforming how organizations operate and protect user data.

The data privacy landscape is undergoing significant changes globally. With the implementation of regulations like the General Data Protection Regulation (GDPR) in Europe, businesses are pressured to manage user data responsibly. This regulatory trend is not confined to Europe; it reflects a global shift towards stringent data privacy standards, with 83% of countries now having regulatory frameworks. This change underscores a broader movement towards ensuring consumer data protection and privacy.

Despite the clear directives of these regulations, many organizations struggle to meet compliance standards. The main challenge lies in the complexity and speed of these requirements, which are continually evolving. Privacy practitioners on the front lines of implementing these changes feel particularly vulnerable; a staggering 96% report feeling exposed to data privacy risks. This vulnerability stems from the difficulty of adapting to the myriad global privacy laws that differ significantly across jurisdictions.

The complication arises with the global proliferation of GDPR-like privacy frameworks, which amplifies the complexity of compliance. As more countries adopt similar regulations, each with nuances, managing consent and privacy becomes increasingly daunting. Organizations must navigate these waters carefully, as non-compliance penalties can be severe. For instance, the rise in GDPR privacy penalties has highlighted financial and reputational risks related to non-compliance.

In light of these complexities and challenges, the critical questions for business leaders in privacy and consent management include: How can we efficiently manage user consent across different jurisdictions? What technologies and strategies can enhance privacy management while ensuring regulatory compliance? How can leveraging privacy and consent management be a competitive advantage for my company?

A recent survey found that privacy professionals seek sophisticated, automated privacy and consent management solutions to manage these challenges effectively. These tools offer a way to bridge the gap between regulatory demands and effective data management, ensuring compliance across different jurisdictions without sacrificing operational efficiency. Key features of these solutions include automation of consent management, robust data protection measures like encryption and access control, and comprehensive privacy audits.

Automated solutions are not just a compliance necessity; they also offer a competitive advantage by enhancing trust with consumers increasingly concerned about their data privacy. These tools enable businesses to handle data ethically and transparently, thus fostering a stronger relationship with customers.

Key Survey Insights:

96% of businesses are concerned about non-compliance penalties due to regulatory complexity.
74% of companies seek solutions for legacy systems to ensure compliance with modern privacy regulations.
66% of practitioners use automated privacy and consent management solutions, with an additional 20% planning to adopt them within two years.
72% of businesses consider privacy rights request management a critical feature in solutions.

The need for advanced and effective privacy and consent management solutions is clear. Organizations’ ability to adapt will define their success as the regulatory landscape becomes more complex. By leveraging the right tools and strategies, businesses can transform regulatory challenges into opportunities for growth and enhanced customer relationships.

Managing privacy and consent effectively is not just about compliance; it is about gaining and maintaining the trust of your customers. By adopting advanced privacy and consent management tools, businesses can navigate the complexities of global regulations while enhancing their operational efficiency and consumer trust. Access the market and buyer's guide for detailed insights and information on selecting the right privacy and consent management tool for your organization.

Download the industry report for privacy and consent management. 

What is Privacy and Consent Management?

Privacy and Consent Management refers to the structured processes and practices businesses and organizations implement to ensure they handle personal data ethically, lawfully, and transparently. This involves obtaining consent from individuals before collecting, processing, or sharing their data, managing their preferences, and ensuring their rights are protected throughout the data lifecycle. Privacy management focuses on adhering to data protection laws, such as GDPR, and establishing policies and technologies that safeguard personal information against unauthorized access or breaches. Consent management, a crucial component of this framework, involves documenting and managing the approval given by individuals for the use of their data, including their ability to modify or withdraw consent at any time. Privacy and consent management is critical in maintaining trust between businesses and consumers, mitigating legal risks, and fostering a culture of privacy across the digital ecosystem.
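
To illustrate what documenting and managing consent can mean in practice, here is a minimal, hypothetical sketch of a consent record and an activity check; all field names are invented for illustration and are not drawn from any particular product or regulation text.

```typescript
// Illustrative only: a minimal consent record of the kind a consent
// management system must capture so consent can later be proven,
// modified, or withdrawn. Field names are invented for this sketch.
interface ConsentRecord {
  subjectId: string;      // the individual who gave consent
  purpose: string;        // e.g. "marketing-email"
  grantedAt: Date;
  withdrawnAt?: Date;     // set when the individual withdraws consent
  evidence: string;       // e.g. a hash of the consent dialog shown
}

// Consent is active if it was granted before the given time and
// has not been withdrawn as of that time.
function isConsentActive(record: ConsentRecord, at: Date = new Date()): boolean {
  return (
    record.grantedAt.getTime() <= at.getTime() &&
    (!record.withdrawnAt || record.withdrawnAt.getTime() > at.getTime())
  );
}
```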

The post Mastering Compliance: The Rise of Privacy and Consent Management Solutions appeared first on Liminal.co.


Shyft Network

Veriscope Regulatory Recap — 19th March 2024 to 8th April 2024

Veriscope Regulatory Recap — 1st to 15th April 2024

With its new crypto regulatory update, Singapore is mandating strict custody and transaction rules. Brazil, on the other hand, is becoming more crypto-friendly, having started recognizing cryptocurrencies as a valid payment method. Both countries reflect the dynamic, evolving landscape of global crypto regulation.

In this edition of the Veriscope Regulatory Recap, we examine the latest crypto regulatory developments in Singapore and Brazil.

Both countries are revising their stance toward cryptocurrencies, a shift clearly reflected in their new regulatory measures.

Singapore Tightening Up Its Crypto Regulations

Until a series of failures rocked the crypto industry in 2022, most rankings identified Singapore as the most crypto-friendly country.


Even founders who moved to Singapore, drawn by its crypto-friendly measures initiated pre-2022, found themselves questioning their decisions.

Singapore's central bank, the Monetary Authority of Singapore (MAS), is now rolling out new updates to the Payment Services Act to tighten its grip on the crypto landscape.


These updates extend to crypto custody, token payments or transfers, and cross-border payments, even if transactions don’t physically touch Singapore’s financial system.

Key among these regulations is the requirement for service providers to keep customer assets separate from their own, with a hefty 90% of these assets to be stored in cold wallets to enhance security.

Additionally, the MAS is keen on preventing anyone from having too much control over these assets, favoring multi-party computation (MPC) wallets, which require a collaborative effort for transactions.

Moreover, the MAS is stepping in to protect retail customers by banning them from certain activities, such as crypto staking or lending, which are gaining attention from regulators worldwide.

Brazil Harnessing New Affection for Crypto

Brazil, for its part, was never really considered among the top five crypto-friendly countries. Yet it has initiated several measures that challenge crypto enthusiasts' notions about the country.


It is also among the major global economies (G20 Member Countries) that have rolled out crypto regulations.

Continuing with its crypto-friendly measures, President Jair Bolsonaro recently green-lit a bill that recognizes cryptocurrency as a valid payment method. Although this law, set to take effect in six months, does not declare cryptocurrencies as legal tender, it does incorporate them into the legal framework.

“With regulation, cryptocurrency will become even more popular.”
- Sen. Iraja Abreu

Under this new law, crypto assets classified as securities will fall under the Brazilian Securities and Exchange Commission’s watch, while a designated government body will oversee other digital assets.

In conclusion, Singapore's and Brazil's approaches to crypto regulation show once again that the crypto industry is a continuously evolving space that can change significantly within a few years, from its narrative to national governments' approach to it.

Interesting Reads

The Visual Guide on Global Crypto Regulatory Outlook 2024

Almost 70% of all FATF-Assessed Countries Have Implemented the Crypto Travel Rule

‍About Veriscope

‍Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Veriscope Regulatory Recap — 19th March 2024 to 8th April 2024 was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

New Data Challenge: Deciphering Crypto Trends


Exploring the relationship between Google Trends data and the cryptocurrency market…

Overview

This challenge is not just a platform to showcase one’s data science skills; it’s a gateway to gaining deep insights into one of the most dynamic and rapidly evolving markets. Participants will sharpen their data science expertise by analyzing the relationship between Google Trends data and cryptocurrency prices. They will also contribute to our understanding of how online interest influences financial markets. This knowledge could be a game-changer for investors, businesses, and researchers navigating the complexities of the cryptocurrency landscape. So, join us in this journey of discovery and make a significant impact in the world of data-driven finance!

Objective

Participants will explore the correlation between Google Trends data and cryptocurrency token prices to uncover patterns and draw significant conclusions. Emphasizing the development of predictive models, the challenge asks participants to navigate the complexities of cryptocurrency trading, discovering insights into how public interest influences market trends. This opportunity evaluates participants’ ability to apply data science skills in real-world scenarios and provides a deeper understanding of cryptocurrency market dynamics, extracting insights through exploratory data analysis and advanced machine learning methodologies.

Data

The data provided for the challenge is organized into two main categories: ‘trends’ and ‘prices.’ In the ‘trends’ section, participants will find web search interest data sourced from Google Trends for 20 cryptocurrencies, including Bitcoin, Ethereum, BNB, Solana, XRP, Dogecoin, Cardano, Polkadot, Chainlink, Litecoin, Uniswap, Filecoin, Fetch.ai, Monero, Singularitynet, Tezos, Kucoin, Pancakeswap, Oasis Network, and Ocean Protocol. Meanwhile, the ‘prices’ folder is equally important, containing pricing information and trading volume data for the same set of 20 cryptocurrencies. It’s worth noting that the level of interest for each cryptocurrency is normalized on a scale from 0 (representing the fewest searches) to 100 (reflecting the highest number of searches) over a specific period, ensuring uniformity but not providing a basis for direct comparison between cryptocurrencies.
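
As a starting point for the kind of analysis the challenge asks for, here is a minimal sketch of computing the Pearson correlation between a token's normalized search-interest series and its price series. Loading and date-aligning the challenge's 'trends' and 'prices' files is omitted, and the sample values are made up.

```typescript
// Minimal sketch: Pearson correlation between two equal-length,
// date-aligned series (e.g. weekly Google Trends interest vs. price).
function pearson(x: number[], y: number[]): number {
  const n = x.length;
  const meanX = x.reduce((a, b) => a + b, 0) / n;
  const meanY = y.reduce((a, b) => a + b, 0) / n;
  let cov = 0, varX = 0, varY = 0;
  for (let i = 0; i < n; i++) {
    const dx = x[i] - meanX;
    const dy = y[i] - meanY;
    cov += dx * dy;
    varX += dx * dx;
    varY += dy * dy;
  }
  return cov / Math.sqrt(varX * varY);
}

// Illustrative usage with made-up weekly values; strongly correlated
// series like these yield a coefficient close to 1.
const trendInterest = [42, 55, 61, 70, 66, 58];
const price = [61000, 64000, 66500, 71000, 69000, 65000];
console.log(pearson(trendInterest, price).toFixed(3));
```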

Mission

Our mission is clear: to explore and understand the relationship between cryptocurrency market trends and public search behaviors. In this challenge, we identify the factors influencing crypto markets through rigorous analysis. We ask participants to create predictive models to forecast token trends and compile detailed reports sharing their insights and discoveries. The contest aims to foster innovation, collaboration, and learning within the data science community while contributing to a deeper understanding of the complex forces driving cryptocurrency markets.

Rewards

We’re dedicated to acknowledging excellence and nurturing talent, so we’ve designed a reward system that celebrates top performers while motivating participants of all skill levels. With a total prize pool of $10,000 distributed among the top 10 participants, our structure brings excitement and competitiveness to the 2024 championship. Not only do the top 10 contenders receive cash rewards, but they also accumulate championship points, ensuring an even playing field for both seasoned data scientists and newcomers alike.

Opportunities

But wait, there’s more! The top 3 performers in each challenge may have the opportunity to collaborate with Ocean on dApps that monetize their algorithms. What sets us apart? Unlike other platforms, you retain full intellectual property rights. Our goal is to empower you to bring your innovations to the market. Let’s work together to turn your ideas into reality!

How to Participate

Are you ready to join us on this quest? Whether you’re a seasoned data pro or just starting, there’s a place for you in our community of data scientists. Let’s explore and discover together on Desights, our dedicated data challenge platform. The challenge runs from April 11 until April 30, 2024, at midnight UTC. Click here to access the challenge.

Community and Support

To engage in discussions, ask questions, or join the community conversation, connect with us on Ocean’s Discord channel #data-science-hub or the Desights support channel #data-challenge-support.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord — or track Ocean’s progress on GitHub.

New Data Challenge: Deciphering Crypto Trends was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Permissions Management: A Developers' Perspective on Authorization | Ping Identity


Controlling access to resources and data is a critical priority for organizations. When developers are tasked with introducing a new application, one of the first considerations is the authorization model. How will we control access to features? Will there be limitations on who can perform actions? For too long, the answer has been to custom-develop a homegrown solution for each application.

However, this approach often means that developers are rebuilding an authorization solution time and time again. This is a hidden cost of application development: developer time is spent building an authorization framework rather than features and functionality that help drive business outcomes. Furthermore, homegrown authorization frameworks are often limited in the use cases they can solve.

Following the pattern of authentication, developers are now turning to IAM platforms to manage authorization controls. For simple needs, authorization may be easily managed with an application permissions model. As more sophisticated use cases and requirements emerge, this simple model is best extended with fine-grained policies to handle user segmentation and dynamic decisioning.
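
As a sketch of the progression described above, the hypothetical check below starts from a coarse role-based permissions model and extends it with fine-grained, attribute-based rules; the types and rules are invented for illustration and are not drawn from any Ping Identity API.

```typescript
// Illustrative only: a coarse role check extended with fine-grained,
// attribute-based rules. Types and rules are invented for this sketch.
type User = { id: string; roles: string[]; department: string };
type AccessRequest = {
  action: 'read' | 'write' | 'delete';
  resourceOwner: string;
  resourceDept: string;
};

function isAuthorized(user: User, req: AccessRequest): boolean {
  // Simple application permissions model: admins can do anything.
  if (user.roles.includes('admin')) return true;
  // Fine-grained policies for user segmentation and dynamic decisioning.
  if (req.action === 'read') return user.department === req.resourceDept;
  if (req.action === 'write') return user.id === req.resourceOwner;
  return false; // deny by default
}
```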

Sunday, 14. April 2024

KuppingerCole

Analyst Chat #210: Exploring Real-Life Use Cases of Decentralized Identity


Matthias and Annie discuss real-life use cases of decentralized identity. They explore two categories of decentralized identity use cases: those that radically change the relationship between individuals and organizations, and those that solve specific problems using decentralized technology.

They highlight the eIDAS 2.0 regulation in Europe as a driver for decentralized identity adoption and mention the importance of interoperability testing. They also touch on the potential use of decentralized identity in supply chain management and the need for open and interoperable ecosystems.




Spherical Cow Consulting

What is the W3C WICG Digital Identities Project?


About a year ago, a new work item was formed under the W3C’s Web Incubator Community Group (the WICG). This work item looks at how a browser should behave when it comes to identity wallets and the credentials they hold. While the project has gone through a few name changes, it is currently called Digital Identities; the scope is available in GitHub

Why am I writing about it now? Because the W3C is thinking about whether the new working group I’m co-chairing in the W3C with Wendy Seltzer, the Federated Identity Working Group, should be the standardization home for this project. This is definitely a niche post, but IYKYK!

Background

Initially, conversations about browsers, wallets, and how individuals are expected to select their credential of choice started in a FIDO Alliance board task force. Given the number of overlapping participants, members of the Federated Identity Community Group (FedID CG) took up the question as to whether that work should be in scope for the CG. The FedID CG, however, came to the conclusion that they were focused on determining how a different identity architecture, one that covers more traditional federation models via OpenID Connect and SAML, should handle the deprecation of third-party cookies. So, while there was alignment on the problem of “how is an individual supposed to actively select their identity,” the fact that the deprecation of third-party cookies mattered to the federation architecture but not to the wallet architecture suggested a separate incubation effort was necessary. If you don’t have alignment on what problem you’re trying to solve, you’re probably not going to solve the problem.

A Different Set of Stakeholders

That wasn’t necessarily a bad thing, that rejection from the FedID CG. The work item there is relatively simple when compared to what needs to come into play for an entirely new identity architecture. There are fewer stakeholders involved in a ‘traditional’ federation architecture. When considering wallet interactions, the number of interested parties goes well beyond a SAML or OIDC federation’s Identity Providers and Relying Parties and the browser.

With a digital identity wallet, we see requirements coming in from operating system developers, browser developers, and privacy advocates, as well as wallet and credential issuers and verifiers. This diversity of needs results in some confusion as to what problem the group is trying to solve. There are several layers to making a digital wallet function in a secure, privacy-preserving fashion; the group is not, however, specifying for all layers.

The WICG’s Digital Identities work may be a good fit for a more structured working group format than it was for a community group focused on incubation; that’s part of what has inspired this post.

Protocol Layers

The WICG Digital Identities work item did not start with a single problem statement the way the FedID CG did. Instead, their mission is described in their charter “to specify an API for user agents that would mediate access to, and representation of, verifiably-issued digital identities.”

To understand the totality of the effort to bring digital wallets and credentials to the web, which is a broader scope than that of the Digital Identities work item, you need to understand the many layers involved in enabling an identity transaction on the web and/or across apps. 

Our Layers

Standardized API (W3C) = Digital Credentials Web Platform API (this is us)
Standardized API (Other) = currently FIDO CTAP 2.2 (hybrid transport; a phishing-resistant, cross-device link is already in place for passkeys/WebAuthn)
Platform-specific web translation API = platform abstraction of web platform APIs for verifiers*
Platform-specific function API = independent platform native API*
Protocol-specific = Protocol or deployment-specific request/response

Digital Credentials Web Platform API

The output of the WICG Digital Identities work item is the Digital Credentials Web Platform API from that first layer in the stack. In incubating that API, the specification editors are relying on the existence and behavior of other APIs either already in place or being developed by their respective platforms. Having the developers of those other APIs involved to make sure that the end-to-end flow of wallet and credential selection works as anticipated by the Digital Credentials Web Platform API is critical. Requiring change to those other APIs is out of scope for the Digital Identities work item (though we can ask nicely). 
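
For orientation, here is a hedged sketch of how a verifier's page might call the API being incubated; the surface has shifted across drafts, so the shape below (navigator.identity.get with a digital member carrying protocol-tagged requests) is an assumption about one draft, not the final specification.

```typescript
// Hedged sketch of one draft shape of the Digital Credentials Web Platform
// API; names and structure are assumptions, not the final specification.
async function requestCredential(): Promise<unknown> {
  return (navigator as any).identity.get({
    digital: {
      // Each provider entry pairs a presentation protocol with an opaque,
      // protocol-specific request the browser mediates but does not interpret.
      providers: [{ protocol: 'openid4vp', request: '…' }],
    },
  });
}
```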

An FAQ

Should the browser see the details of the request for what’s in a wallet? That’s not in scope for the W3C (though the question still comes up when people join the group).

Should the OS see the details of what’s in a wallet? That’s not in scope for the W3C, either, so while of interest to many, it’s not something this group can or should resolve.

Should the API be protocol agnostic when it comes to verifiable credentials? Some say yes, some say no. The more protocols you have to support, the more expensive maintenance gets. So, while on the one hand, being protocol-agnostic supports the largest number of use cases, it’s also the most expensive thing to do.

What does protocol-agnostic look like in practice when different credentials format similar information differently? That’s one of the things we talk about.

At what point(s) does the individual consent to the information being requested from a wallet? being requested to be added to a wallet? We’re still talking about that, too.

Is the use case under consideration primarily remote presentation or in-person presentation? The scope is online presentation, so the work is focused on the remote use case.

Is the payload of requests in scope for this group, or is the group only concerned with communication channels (leaving payload handling up to the platforms)? Here’s another area of contention. Of course, the browser wants to prevent bad traffic. But this stuff is encrypted for a reason, and making everything (most things? some things?) viewable to the browser isn’t necessarily the right answer either from a privacy and security perspective.

Is the question of what signal must exist (and who provides that signal) for a wallet to be trusted by the browser in scope for the group? If not, where can those discussions be directed? This is not in scope as the web platform does not directly communicate with the wallet in this architecture.

Wrap Up

So what happens now? Conversations are happening at the OAuth Security Workshop, within the W3C Advisory Committee, and soon at the Internet Identity Workshop. By the time those wrap up, the Federated Identity Working Group will start meeting and will have its own say as to whether this work belongs in scope or not. If you’re interested in participating in the conversation, there is a GitHub issue open where we are collecting input on the topic. You are welcome to chime in there, or just grab some popcorn and watch the story unfold!

I love to receive comments and suggestions on how to improve my posts! Feel free to comment here, on social media, or on whatever platform you’re using to read my posts. And if you have questions, go check out Heatherbot and chat with AI-me.

The post What is the W3C WICG Digital Identities Project? appeared first on Spherical Cow Consulting.

Saturday, 13. April 2024

Finema

The Hitchhiker’s Guide to KERI. Part 3: How do you use KERI?

This blog is the third part of a three-part series, the Hitchhiker’s Guide to KERI:

Part 1: Why should you adopt KERI? Part 2: What exactly is KERI? Part 3: How do you use KERI?

Now that you grasp the rationale underpinning the adoption of KERI and have acquired a foundational understanding of its principles, this part of the series is dedicated to elucidating the preliminary steps necessary for embarking upon a journey with KERI and the development of applications grounded in its framework.

The resources provided below, presented in no particular order, supplement your exploration of KERI. This blog will also serve as an implementer’s guide to deepen your understanding and proficiency in using KERI.

Photo by Ilya Pavlov on Unsplash

Read the Whitepaper

The Key Event Receipt Infrastructure (KERI) protocol was first introduced in the KERI whitepaper by Dr. Samuel M. Smith in 2019. The whitepaper kickstarted the development of the entire ecosystem.

While the KERI whitepaper undoubtedly offers invaluable insights into the intricate workings and underlying rationale of the protocol, I would caution against starting your KERI journey with it. Its length, exceeding 140 pages, may pose a significant challenge for all but a few cybersecurity experts. It is advisable to revisit the whitepaper once you have firmly grasped the foundational concepts of KERI. Nevertheless, should you be inclined towards a more rigorous learning approach, you are certainly encouraged to undertake the endeavor.

The KERI Whitepaper, first published July 2019.

I also recommend related whitepapers by Dr. Samuel M. Smith as follows:

- Universal Identifier Theory: a unifying framework for combining autonomic identifiers (AIDs) with human-meaningful identifiers.
- Secure Privacy, Authenticity, and Confidentiality (SPAC): the whitepaper that laid the foundation for the ToIP trust-spanning protocol.
- Sustainable Privacy: a privacy-protection approach in the KERI ecosystem.

Read Introductory Contents

Before delving into the whitepaper and related specifications, I recommend the following introductory materials, which helped me personally:

- KERI Presentation at SSI Meetup Webinar, given by the originator of KERI, Dr. Samuel M. Smith, himself.
- KERI for Muggles, by Samuel M. Smith and Drummond Reed, a presentation given at the Internet Identity Workshop #33. (Note: the author of this blog was first exposed to KERI by this presentation.)
- Section 10.8 of “Self-Sovereign Identity” by Alex Preukschat & Drummond Reed, Manning Publications (2021). This section was also written by Dr. Samuel M. Smith.
- The Architecture of Identity Systems, by Phil Windley. Written by one of the most prominent writers in the SSI ecosystem, this post compares administrative, algorithmic, and autonomic identity systems.
- KERISSE, by Henk van Cann and Kor Dwarshuis, an educational platform as well as a search engine for the KERI ecosystem.

More resources can also be found at https://keri.one/keri-resources/. Of course, this Hitchhiker’s Guide to KERI series has also been written as one such introductory content.

“Self-Sovereign Identity” by Alex Preukschat & Drummond Reed

Read the KERI and Related Specifications

As of 2024, the specifications for KERI and related protocols are being developed by the ACDC (Authentic Chained Data Container) Task Force under the Trust over IP (ToIP) Foundation. Currently, there are four specifications:

- Key Event Receipt Infrastructure (KERI): the specification for the KERI protocol itself.
- Authentic Chained Data Containers (ACDC): the specification for the variant of Verifiable Credentials (VCs) used within the KERI ecosystem.
- Composable Event Streaming Representation (CESR): the specification for a dual text-binary encoding format used for messages exchanged within the KERI protocol.
- DID Webs Method Specification: the specification for the did:webs method, which improves the security properties of did:web with the KERI protocol.

KERI Specification v1.0 Draft

There are also two related protocols, which do not have their own dedicated specifications:

- Self-Addressing Identifier (SAID): a protocol for generating identifiers used in the KERI protocol. Almost all identifiers in KERI are SAIDs, including AIDs, ACDCs’ identifiers, and schemas’ identifiers (a simplified sketch of the self-addressing idea follows below).
- Out-Of-Band-Introduction (OOBI): a discovery mechanism for AIDs and SAIDs using URLs.
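
To make the self-addressing idea concrete, here is a deliberately simplified sketch of the two-pass "placeholder, then embed" computation behind SAIDs. Real KERI implementations use Blake3-256 digests with CESR derivation codes; this sketch substitutes SHA-256 and base64url purely for illustration.

```ts
// Illustrative only: real SAIDs use Blake3-256 with CESR encoding and a
// derivation-code prefix; this sketch shows just the two-pass structure.
import { createHash } from "node:crypto";

function saidLike(fields: Record<string, unknown>): string {
  // Pass 1: serialize with the digest field set to a fixed-length placeholder
  // so the serialization has the same length as the final version.
  const placeholder = "#".repeat(44); // CESR Blake3-256 digests are 44 chars
  const serialized = JSON.stringify({ ...fields, d: placeholder });

  // Pass 2: digest the serialization; the result replaces the placeholder,
  // making the identifier self-addressing (it commits to its own content).
  return createHash("sha256").update(serialized).digest("base64url");
}

console.log(saidLike({ v: "KERI10JSON", t: "icp" })); // hypothetical event fields
```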

To learn about these specifications, I also recommend my blog, the KERI jargon in a nutshell series.

Note: The KERI community intends to eventually publish the KERI specifications as ISO standards. However, this goal may take several years to achieve.
Check out the KERI Open-Source Projects

The open-source projects related to the KERI protocol and its implementations are hosted in the WebOfTrust GitHub organization, all licensed under Apache License Version 2.0.

Note: Apache License Version 2.0 is a permissive open-source software license that allows users to freely use, modify, and distribute software under certain conditions. It permits users to use the software for any purpose, including commercial purposes and grants patent rights to users. Additionally, it requires users to include a copy of the license and any necessary copyright notices when redistributing the software.

Here are some of the important projects being actively developed by the KERI community:

Reference Implementation: KERIpy

The core libraries and the reference implementation for the KERI protocol are written in Python and called KERIpy. This is by far the most important project; all other KERI projects are based on it.

KERIpy (Python): https://github.com/WebOfTrust/keripy

KERIpy is also available on Docker Hub and PyPI:

- Docker Hub: https://hub.docker.com/r/weboftrust/keri
- PyPI: https://pypi.org/project/keri/

Edge Agent: Signify

The KERI ecosystem follows the principle of “key at the edge” (KATE): all essential cryptographic operations are performed at edge devices. The Signify projects provide lightweight KERI functionality at edge devices. Currently, Signify is available in Python and TypeScript (a minimal bootstrap sketch follows the links below).

- SignifyPy (Python): https://github.com/WebOfTrust/signifypy
- Signify-TS (TypeScript): https://github.com/WebOfTrust/signify-ts
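
To give a feel for the edge-agent model, here is a minimal sketch of bootstrapping a Signify-TS client against a KERIA agent, modeled loosely on the signify-ts integration scripts. The localhost URLs are placeholders, and exact API details may vary between releases.

```ts
// A minimal sketch, assuming signify-ts; URLs are placeholders.
import { ready, randomPasscode, SignifyClient, Tier } from "signify-ts";

async function main() {
  await ready(); // initialize libsodium before any cryptographic calls

  const bran = randomPasscode(); // 21-char passcode: the only secret at the edge
  const client = new SignifyClient(
    "http://localhost:3901", // KERIA admin interface (placeholder)
    bran,
    Tier.low,
    "http://localhost:3903"  // KERIA boot interface (placeholder)
  );

  await client.boot();    // provision the agent worker on the KERIA server
  await client.connect(); // authenticate; private keys never leave the edge

  // Incept an autonomic identifier (AID) managed from this edge client.
  const result = await client.identifiers().create("my-aid");
  const op = await result.op(); // long-running operation; poll until done in real code
  console.log(op.name);
}

main().catch(console.error);
```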

Signify is also available on PyPI and NPM:

- PyPI: https://pypi.org/project/signifypy/
- NPM: https://www.npmjs.com/package/signify-ts

Cloud Agent: KERIA

Signify is designed to be lightweight and relies on a KERI cloud agent called KERIA. KERIA helps with data storage and facilitates communication with external parties. As mentioned above, all essential cryptographic operations are performed at the edge using Signify; private and sensitive data are also encrypted at the edge before being stored on a KERIA server.

KERIA (Python): https://github.com/WebOfTrust/keria

KERIA is also available on Docker Hub:

- Docker Hub: https://hub.docker.com/r/weboftrust/keria

Browser Extension: Polaris

The browser extension project is based on Signify-TS for running in browser environments. There is also a companion repository called the Polaris Web for building frontend applications that are compatible with the Signify browser extension.

- Signify Browser Extension: https://github.com/WebOfTrust/signify-browser-extension
- Polaris Web: https://github.com/WebOfTrust/polaris-web
Note: The Signify browser extension project was funded by Provenant, Inc. and developed by RootsID. The project has been donated to the WebOfTrust GitHub organization under Apache License Version 2.0.
Study KERI Command Line Interface (KLI)

Once you grasp the basic concept of KERI, one of the best ways to start learning about the KERI protocol is to work with the KERI command line interface (KLI), which uses simple bash scripts to provide an interactive experience.

I recommend the following tutorials on KLI:

- KERI & OOBI CLI Demo, by Phillip Feairheller & Henk van Cann.
- KERI KLI Tutorial Series, by Kent Bull. Currently, two tutorials are available: (1) Sign & Verify with KERI and (2) Issuing ACDC with KERI.

Many more examples of KLI scripts can be found in the KERIpy repository, at:

KLI demo scripts: WebOfTrust/keripy/scripts/demo.

While KLI is a good introductory program for learning the KERI protocol, it is crucial to note that KLI is not suitable for developing end-user (client-side) applications in a production environment.

Note: KLI can be used in production for server-side applications.
KERI KLI Series: Sign and Verify by Kent Bull

Build an App with Signify and KERIA

For building a KERI-based application in production environments, the KERI community recommends using Signify for edge agents and KERIA for cloud agents. These projects were specifically designed to complement each other, enabling the implementation of “key at the edge” (KATE): essential cryptographic operations, including key pair generation and signing, are performed at edge devices, while private and sensitive data are encrypted before being stored in a KERIA cloud agent instance.

The Signify-KERIA protocol by Philip Feairheller can be found here:

Signify/KERIA Request Authentication Protocol (SKRAP): https://github.com/WebOfTrust/keria/blob/main/docs/protocol.md

The API between a Signify client and KERIA server can be found here:

KERI API (KAPI): https://github.com/WebOfTrust/kapi/blob/main/kapi.md

Example Signify scripts for interacting with a KERIA server can also be found here:

- Example scripts: https://github.com/WebOfTrust/signify-ts/tree/main/examples/integration-scripts

Join the KERI Community!

To embark on your KERI journey, I recommend joining the KERI community. As of April 2024, there are three primary ways to engage:

Join the WebOfTrust Discord Channel

The WebOfTrust Discord channel is used for casual discussions and reminders for community meetings. You can join with the link below:

https://discord.gg/YEyTH5TfuB

Join the ToIP ACDC Task Force

The ACDC Task Force under the ToIP foundation focuses on the development of the KERI and related specifications. It also includes reports on the news and activities of the community’s members as well as in-depth discussions of related technologies.

The ACDC Task Force’s homepage can be found here:

https://wiki.trustoverip.org/display/HOME/ACDC+(Authentic+Chained+Data+Container)+Task+Force

Currently, they hold a weekly meeting on Tuesdays:

- NA/EU: 10:00–11:00 EST / 14:00–15:00 UTC
- Zoom link: https://zoom.us/j/92692239100?pwd=UmtSQzd6bXg1RHRQYnk4UUEyZkFVUT09

For all authoritative meeting logistics and Zoom links, please see the ToIP Calendar.

Note: While anyone is welcome to join meetings of ToIP as an observer, only members are allowed to contribute. You can join ToIP for free here.
Join the KERI Implementer Call

Another weekly meeting is organized every Thursday:

- NA/EU: 10:00–11:00 EST / 14:00–15:00 UTC
- Zoom link: https://us06web.zoom.us/j/81679782107?pwd=cTFxbEtKQVVXSzNGTjNiUG9xVWdSdz09

In contrast to the ToIP ACDC Task Force’s meeting, the implementer call focuses on the development and maintenance of the open-source projects in WebOfTrust Github. As a result, the weekly Thursday meetings tend to delve deeper into technical details.

Note: There is also a weekly meeting on DID Webs Method every Friday. See the ToIP DID WebS Method Task Force’s homepage here: https://wiki.trustoverip.org/display/HOME/DID+WebS+Method+Task+Force.

The Hitchhiker’s Guide to KERI. Part 3: How do you use KERI? was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 12. April 2024

Ayan Works

Introducing Our Fresh Identity: A Journey of Evolution

As Benjamin Disraeli said, “Change is inevitable, change is constant.” It’s a natural evolution that propels us, enabling us to adapt, grow, and better serve our community. At AYANWORKS, we live this through constant learning, innovation, and adaptation. In keeping with that passion, we’re thrilled to announce a significant milestone in our journey: the launch of our new logo and brand identity. As part of our ongoing evolution, we’ve refreshed our professional profile to better reflect who we are today and where we’re headed tomorrow.

Why the new logo?

Since our inception in 2015, we’ve experienced substantial transformation. Our journey has been marked by innovation, excellence, growth, trust, a futuristic approach, and a relentless commitment to delivering outstanding services to our clients and partners. As we continue to evolve, our brand identity must evolve with us, aligning with our core values and audacious aspirations.

Introducing our new logo:

After several creative sessions, iterations and careful consideration, we’re proud to unveil our new logo — a symbol of our dynamic future and unwavering commitment to excellence.

Key elements of our new logo:

- The yellow color: For a long time, we have fostered a culture of positivity and joy, with a dedication to creating uplifting experiences for our customers, our partners, and our mighty team. The yellow color radiates the same cheerfulness and happiness that are deeply rooted in our culture.
- The arrow: At the very core of AYANWORKS lies our unwavering belief in speed, innovation, and a forward-thinking mindset. The arrow pointing to the upper right symbolizes these values and embodies our commitment to staying ahead of the curve and embracing the future.
- The uppercase letters: Since day one, AYANWORKS has consistently delivered exceptional results, earning the trust of our clients and partners. The use of uppercase letters in ‘AYANWORKS’ symbolizes the confidence and capability inherent in the value we provide.

Through the evolution of our logo and brand identity, we aim to strengthen our connection with a global audience. The dynamic design elements and forward-thinking symbolism in our new logo resonate not only with our existing stakeholders but also with individuals and organizations worldwide who share our vision for innovation and excellence.

What remains unchanged:

While our look may be new, our core values remain steadfast. Our commitment to delivering sovereign, secure, sustainable, and verifiable digital identity and data solutions remains unchanged. Our pursuit of continuous improvement and innovative products will not only continue but will take new strides as we innovate, collaborate, co-create, and provide a ‘wow’ user experience. We’re excited to embark on this new chapter of our journey, fueled by optimism, creativity, and a relentless drive for excellence.

In addition to our new brand logo, we have also revamped our corporate website, set to launch soon. The site embodies a clean, refined, and modern aesthetic, providing visitors with easy access to information about our solutions and services. This transformation reflects the growth and open-minded culture of our company, inspiring us to reach new heights as we continue to deliver top-tier solutions to our esteemed clients.

We hope you embrace our new look as much as we do. Thank you for your continued support.

Namaste 🙏


Civic

Civic Milestones & Updates: Q1 2024

The first quarter of 2024 is expected to result in modest growth during earnings season. In the crypto sector, Ethereum’s revenue soared, marking a 155% YoY increase. Encouragingly, Coinbase posted a profit on strong trading for the first time in two years. The sector also benefited from a spot Bitcoin ETF approval by the SEC, […]

The post Civic Milestones & Updates: Q1 2024 appeared first on Civic Technologies, Inc..


KuppingerCole

May 23, 2024: Adapting to Evolving Security Needs: WAF Solutions in the Current Market Landscape

Join us for a webinar where we will explore recent shifts in the WAF market and the rising prominence of WAAP solutions. Discover the latest security features and capabilities required by the WAF market in 2024. Gain valuable insights into market trends and key vendors and discover what differentiates the industry leaders.

Thursday, 11. April 2024

KuppingerCole

Revolutionizing Secure PC Fleet Management

Many organizations are struggling to manage their PC fleets effectively. The challenges range from hybrid work and temporary staff to edge computing, especially when it comes to topics like data security and asset management. HP has come up with a way to overcome these challenges through Protect and Trace with Wolf Connect, integrated into their E2E Security and Fleet Management Stack.

Join experts from KuppingerCole Analysts and HP as they unpack the new capabilities of HP’s Protect and Trace with Wolf Connect. Using a low-cost, cellular-based management connection to HP PCs, organizations can now interact with their PC fleet globally and locate, secure, and erase a computer remotely, even when it is powered down or disconnected from the Internet.

John Tolbert, Director of Cybersecurity Research and Lead Analyst at KuppingerCole Analysts, will discuss the importance of endpoint security, look at some common threats, and describe the features of popular endpoint security tools such as Endpoint Protection, Detection and Response (EPDR) solutions. He will also look at specialized Unified Endpoint Management (UEM) tools and how these tools all fit into an overall cybersecurity architecture.

Lars Faustmann, Leading Digital Services for Central Europe at HP, and Oliver Pfaff, Business Execution Manager at HP’s Workforce Solutions Business, will demonstrate the functionality of Wolf Connect and reveal how to maintain tighter control over a PC fleet to secure data, track assets, cut costs, manage devices, reduce risk, and support compliance.

Join this webinar to:

Solve challenges across asset management. Maintain control of data. Remotely find, lock, and erase a PC, even when powered down or disconnected from the Internet. Protect sensitive data. Improve user experience and peace of mind.


Shyft Network

Guide to FATF Travel Rule Compliance in India

India amended its anti-money laundering law to include cryptocurrencies, requiring KYC checks and reporting of transactions. The FATF Travel Rule has been effective in India since 2023. It mandates crypto exchanges in India to collect and report detailed sender and receiver information to combat money laundering and terrorist financing.

As India continues to gain prominence in the crypto market, the government has been providing clarity on various related issues, including the application of anti-money laundering (AML) and FATF Travel Rule for crypto transactions.

To bring crypto under the ambit of the Prevention of Money Laundering Act (PMLA), 2002, and rein in Virtual Digital Asset Service Providers (VDASPs), the Indian government amended the Act.

Key Features

Per the guidelines, a designated service provider in India must have policies and procedures to combat money laundering and terrorist financing. India’s Ministry of Finance considers those involved in the following activities to be ‘reporting entities’:

- Transfer of crypto
- Exchange between fiat currencies and crypto
- Exchange between one or more types of crypto
- Participation in financial services related to an issuer’s offer and sale of crypto
- Safekeeping or administration of crypto or instruments enabling control over crypto

These VDASP entities need to ensure compliance with the following:

- The reporting entities must register with the Financial Intelligence Unit and provide transaction details to the agency within the stipulated period.

- Crypto exchanges must verify the identities of their customers and beneficial owners, if any.

- Platforms have to perform ongoing due diligence on every client. In the case of certain specified transactions, VDASPs have to conduct enhanced due diligence (EDD) on their clients.

- In addition to identifying the customers, exchanges have to maintain records of updated client identification and transactions for five years after the business relationship has ended or the account has been closed.

- VDASPs are also required to appoint a principal officer and a director who will be responsible for ensuring that the entity complies with rules. Their details, which include name, phone number, email ID, address, and designation, must be submitted to the Financial Intelligence Unit — India (FIU-IND).

Meanwhile, FIU’s guidelines further require VDASPs to conduct counterparty due diligence and have adequate employee screening procedures. These entities must also provide instruction manuals for onboarding, transaction processing, KYC, due diligence, transaction review, sanctions screening (which must be done when onboarding and transferring crypto), and record keeping.


Compliance Requirements

In line with global efforts to regulate crypto assets, the Indian government has introduced AML guidelines similar to those already followed by banks.

Per the guidelines, a designated service provider in India must have policies and procedures to combat money laundering and terrorist financing. This includes verifying customer identity, for which VDASPs must obtain and hold certain information that must be made available to appropriate authorities on request. Moreover, this applies regardless of whether the value of the crypto transfer is denominated in fiat or another crypto.

For the originating (sender) VDASP, the following information must be acquired and held:

- Originator’s full verified name.
- The Permanent Account Number (PAN) of the sending person.
- Originator’s wallet addresses used to process the transaction.
- Originator’s date and place of birth or their verified physical address.
- Beneficiary’s (receiver) name and wallet address.

For the beneficiary (receiver) VDASP, the following information must be acquired and held (a minimal data-structure sketch follows the list):

- Beneficiary’s verified name.
- Beneficiary’s wallet address used to process the transaction.
- Originator’s name.
- Originator’s National Identity Number or Permanent Account Number (PAN).
- Originator’s wallet addresses and physical address or date and place of birth.
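
Taken together, the two lists amount to a structured payload exchanged between VDASPs. Here is a hypothetical TypeScript shape, purely for illustration; the field names are assumptions, not a mandated FIU-IND schema.

```ts
// Hypothetical Travel Rule payload shape; field names are illustrative only.
interface TravelRulePayload {
  originator: {
    fullVerifiedName: string;
    pan?: string;               // Permanent Account Number, where applicable
    nationalIdNumber?: string;  // alternative identifier on the beneficiary side
    walletAddresses: string[];  // addresses used to process the transaction
    physicalAddress?: string;   // or date and place of birth, per the rules
    dateAndPlaceOfBirth?: string;
  };
  beneficiary: {
    verifiedName: string;
    walletAddress: string;
  };
}
```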

When it comes to reporting obligations, the entity must report any suspicious transactions within a week of identification. Reporting entities are further prohibited from disclosing, or “tipping off,” that a Suspicious Transaction Report (STR) has been provided to the FIU-IND.

The minimum threshold for Travel Rule compliance is unclear, as the Indian government hasn’t mentioned it in the circular. Even so, a few Indian exchanges are requesting Travel Rule data even when the transaction amount is less than $1,000.

Unfortunately, instead of adopting a fully automated, privacy-oriented, frictionless Travel Rule solution like Veriscope, a few exchanges in India depend on manual methods (e.g., Google Forms and emails) to collect personally identifiable information from users.

When it comes to unhosted or non-custodial wallets, the FIU classifies any crypto transfers made to and from them as “high risk,” as such wallets are not hosted by an obligated entity such as an exchange. Per the guidelines, P2P transfers also fall into this category when one of the wallets is not hosted.

Hence, when crypto transfers are made between two wallets and at least one of them is hosted, the compliance responsibility falls on the entity hosting that wallet.

“Additional limitations or controls may be put in place on such transfers with unhosted wallets,” according to FIU — IND.

Concluding Thoughts

The implementation of the Crypto Travel Rule shows that India is gradually regulating the crypto sector and requiring businesses dealing with crypto to adhere to the same AML requirements as registered financial institutions like banks.

While it may lead to some short-term challenges, India’s crypto businesses believe this will create a more trustworthy environment in the long run.


About Veriscope


Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Guide to FATF Travel Rule Compliance in India was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF84 Completes and DF85 Launches

Predictoor DF84 rewards available. Passive DF & Volume DF are pending the ASI merger vote. DF85 runs Apr 11–Apr 18, 2024.

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions via Predictoor.

Ocean Protocol is joining with Fetch and SingularityNET to form the Superintelligence Alliance, with a unified token $ASI. This is pending a vote of “yes” from the Fetch and SingularityNET communities, a process that will take several weeks. This Mar 27, 2024 article describes the key mechanisms.
There are important implications for veOCEAN and Data Farming. The article “Superintelligence Alliance Updates to Data Farming and veOCEAN” elaborates.

Data Farming Round 84 (DF84) has completed. Passive DF & Volume DF rewards are on pause, pending the ASI merger votes. Predictoor DF claims run continuously.

DF85 is live today, April 11. It concludes on Apr 18.

Here is the reward structure for DF85:

- Predictoor DF is like before, with 37,500 OCEAN rewards and 20,000 ROSE rewards.
- The rewards for Passive DF and Volume DF are on pause, pending the ASI merger votes.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

Data Farming is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions.

DF84 Completes and DF85 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Verida

How DePINs Can Disrupt Tech Monopolies and Put People Back in Control

How DePINs Can Disrupt Tech Monopolies and Put People Back in Control

Written by Chris Were (Verida CEO & Co-Founder), this article was originally published on Bazinga.

Decentralized Physical Infrastructure Networks — DePINs — have the potential to transform how we access and use real-world services.

Potential use cases are only restricted by your imagination.

What if…

- Internet hotspots could be established in rural areas where there is little coverage?
- Homeowners could be rewarded by selling excess solar energy back to the grid?
- Consumers could share unused storage space on their devices with others?
- Entrepreneurs could unlock peer-to-peer microloans to build local projects?

Underpinned by blockchain technology, DePINs make all of this possible — at a time when the infrastructure powering the global economy is experiencing seismic change. Figures from Statista suggest 33.8% of the world’s population don’t use the internet, with people in low-income countries most likely to be shut out of the modern information society. The International Energy Agency estimates that 100 million households will depend on rooftop solar panels by 2030, and enhancing economic incentives will be a crucial catalyst for adoption. And let’s not forget that the rise of artificial intelligence means the need for storage and computation is booming, with McKinsey projecting demand for data centers will rise 10% a year between now and the end of the decade. DePINs have the power to cultivate a cloud storage network that’s much cheaper than traditional players including Google and Amazon.

DePINs mount a competitive challenge to the centralized providers who dominate the business landscape. Right now, most of the infrastructure we use every day is controlled by huge companies or governments. This creates a real risk of monopolies where a lack of choice pushes up prices for consumers and businesses — with the pursuit of profits stymying innovation and shutting out customers based on geography and income.

The need for change

Blockchains are at the beating heart of these decentralized networks. That’s because individuals and businesses who contribute physical infrastructure can be rewarded in crypto tokens that are automatically paid out through smart contracts. Consumers can also use digital assets to unlock services on demand.

This approach isn’t about modernizing access to infrastructure, but changing how it is managed, accessed and owned. Unlike centralized providers, the crypto tokens issued through DePINs incentivize all participants to get involved. Decentralized autonomous organizations (known as DAOs for short) play a vital role in establishing the framework for how these projects are managed. Digital assets can be used to vote on proposals ranging from planned network upgrades to where resources should be allocated. Whereas big businesses are motivated by profit, community-driven projects can focus on meeting the needs of underserved areas. The issuance of tokens can also provide the funding required to build infrastructure — and acquire the land, equipment and technical expertise needed to get an idea off the ground.

Web3 has been driven by a belief that internet users should have full control over their data, and tech giants should be stopped from monetizing personal information while giving nothing in return. DePINs align well with these values, all while reducing barriers to entry and ensuring there’s healthy competition. Multiple marketplaces for internet access, data storage and energy will result in much fairer prices for end users — and encourage rivals to innovate so they have compelling points of difference. It also means an entrepreneur with a deep understanding of what their community needs can start a business without large capital requirements. Open access and interoperability are the future.

Challenges on the road

Certain challenges must be overcome for DePINs to have a lasting global impact. There’s no denying that multibillion-dollar corporations currently benefit from economies of scale, vast user bases, and deep pockets. That’s why it’s incumbent on decentralized innovations to show why their approach is better. Reaching out to untapped markets that aren’t being served by business behemoths is a good first step. Another obstacle standing in the way of adoption concerns regulatory uncertainty, which can prevent investors and participants from getting involved. Careful thought also needs to be paid to the ramifications that DePINs can have on data privacy. Unless safeguards are imposed, someone who accesses an internet hotspot through blockchain technology could inadvertently disclose their particular location.

Ecosystems have been created that allow DePINs to be established while ensuring that user privacy is preserved at all times — championing data ownership and self-sovereignty. As well as reducing the risks surrounding identity theft, they have been built with the evolving nature of global regulation in mind — with measures such as GDPR in the EU forcing companies to rethink how much data they hold on their customers.

DePINs and the future of the internet

Zooming in on Europe as a use case, and how these regulatory headwinds will affect more than 400 million citizens on the continent, gives an invaluable insight into how DePINs — and the infrastructure they’re built on — can have an impact in the years to come.

For one, the current internet landscape means that we need to create a new digital identity every time we want to join a website or app — manually handing over personal information by filling out lengthy forms to open accounts. Users are then confronted by lengthy terms and conditions or privacy notices that often go unread, leaving people in the dark about how their data is going to be used in the future. That’s why the EU has proposed singular digital identities that could be used for multiple services — from “paying taxes to renting bicycles” — and change the dynamic about how confidential information is shared. This approach would mean that consumers are in the driving seat, and decide which counterparties have the right to learn more about who they are.

The European Union’s approach is ambitious and requires infrastructure that is fast, inexpensive and interoperable — allowing digital signatures, identity checks and credentials to be stored and executed securely across the trading bloc. Another element that must be thrown into the mix is central bank digital currencies, with the European Central Bank spearheading efforts to create an electronic form of the euro that is free to use and privacy preserving — all while enabling instant cross-border transactions with businesses, other consumers and governments.

High-performing and low-cost infrastructure will be essential if decentralized assets are going to be used by consumers across the continent — not to mention regulatory compliance. Privacy-focused wallets need to support multiple blockchains — as well as decentralized identities, verifiable credentials and data storage. A simple, user-friendly mobile application will be instrumental in guaranteeing that DePINs gain momentum.

The future is bright, and we’re yet to scratch the surface when it comes to the advantages decentralization can bring for all of us. But usability and efficiency are two key pillars that must be prioritized if this new wave of innovation is to match the unparalleled impact of the internet.

About Chris

Chris Were is the CEO of Verida, a decentralized, self-sovereign data network that empowers individuals to control their digital identity and personal data. Chris is an Australian-based technology entrepreneur who has spent over 20 years developing innovative software solutions, most recently with Verida. With his application of the latest technologies, Chris has disrupted the finance, media, and healthcare industries.

How DePINs Can Disrupt Tech Monopolies and Put People Back in Control was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Empowering Privacy with Anonymous Credentials

Harnessing Zero-Knowledge Proofs for Secure Digital Identity

In the digital realm, where privacy and security are paramount, the concept of anonymous credentials presents a revolutionary approach to safeguarding personal data. This technology leverages the power of zero-knowledge proofs (ZKP), enabling individuals to prove their identity or credentials without revealing any personal information. Let’s see if we can demystify anonymous credentials and ZKPs, and improve our understanding of their significance, how they work, and their potential to transform digital security and privacy.

Understanding Anonymous Credentials

Anonymous credentials are at the forefront of enhancing digital privacy and security. They serve as a digital counterpart to physical identification, allowing users to prove their identity or possession of certain attributes without disclosing the actual data. This method ensures that personal information remains private, reducing the risk of data breaches and misuse. Through the strategic use of cryptographic techniques, anonymous credentials empower individuals with control over their online identity, marking a significant leap toward a more secure digital world.

The Parties Involved

The ecosystem of anonymous credentials involves three critical parties: the issuer, the user (prover), and the verifier. The issuer is the authority that generates and assigns credentials to users. Users, or provers, possess these credentials and can prove their authenticity to verifiers without revealing the underlying information. Verifiers are entities that need to validate the user’s claims without accessing their private data. This tripartite model forms the foundation of a secure, privacy-preserving digital identification system.

Technical Background: The BBS+ Signature Scheme

At the heart of anonymous credentials lies the BBS+ signature scheme, a cryptographic protocol that enables the creation and verification of credentials. This scheme utilizes advanced mathematical constructs to ensure that credentials are tamper-proof and verifiable. While the underlying mathematics may be complex, the essence of the BBS+ scheme is its ability to facilitate secure, anonymous credentials that uphold the user’s privacy while ensuring their authenticity to verifiers.

Key Concepts Explained

Setup

The setup phase is crucial for establishing the cryptographic environment in which the BBS+ signature scheme operates. This involves defining the mathematical groups and functions that will be used to generate and verify signatures. It lays the groundwork for secure cryptographic operations, ensuring that the system is primed for issuing and managing anonymous credentials.

Key Generation (KeyGen)

In the KeyGen phase, unique cryptographic keys are created for each participant in the system. This process involves generating pairs of public and private keys that will be used to sign and verify credentials. The security of anonymous credentials heavily relies on the robustness of these keys, as they underpin the integrity of the entire system.

Signing and Verifying

Signing is the process by which issuers create digital signatures for credentials, effectively “stamping” them as authentic. Verifying, on the other hand, allows a verifier to check the validity of a credential’s signature without seeing the credential itself. This dual process ensures that credentials are both secure and privacy-preserving.
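
To make the signing and verifying steps concrete, here is a minimal sketch assuming the @mattrglobal/bbs-signatures package (one possible BBS+ implementation; attribute encoding and key management are simplified for illustration).

```ts
// A minimal BBS+ sign/verify sketch, assuming @mattrglobal/bbs-signatures.
import {
  generateBls12381G2KeyPair,
  blsSign,
  blsVerify,
} from "@mattrglobal/bbs-signatures";

const encoder = new TextEncoder();

async function signAndVerify() {
  // Issuer key pair over BLS12-381 G2, as used by BBS+.
  const keyPair = await generateBls12381G2KeyPair();

  // Each credential attribute becomes one message in a multi-message signature.
  const messages = ["name: Alice", "age: 30", "country: TH"].map((m) =>
    encoder.encode(m)
  );

  // Issuer signs all attributes at once.
  const signature = await blsSign({ keyPair, messages });

  // Verifier checks the signature against the issuer's public key.
  const { verified } = await blsVerify({
    publicKey: keyPair.publicKey,
    messages,
    signature,
  });
  console.log(verified); // true
}

signAndVerify().catch(console.error);
```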

Non-Interactive Proof of Knowledge (PoK)

The Non-Interactive Proof of Knowledge (PoK) protocol is a cryptographic technique that allows a prover to demonstrate knowledge of a secret without revealing it. In the context of anonymous credentials, it enables users to prove possession of valid credentials without disclosing the credentials themselves. This non-interactive aspect ensures a smooth, privacy-centric verification process.

The Process in Action

Issuer’s Key Pair Setup

The journey begins with the issuer’s key pair setup, where the issuer generates a pair of cryptographic keys based on the attributes to be included in the credentials. This setup is critical for creating credentials that are both secure and capable of supporting the non-interactive proof of knowledge protocol.

Issuance Protocol

The issuance protocol is an interactive process where the issuer and user exchange information to generate a valid credential. This involves the user creating a credential request, the issuer verifying this request, and then issuing the credential if the request is valid. This step is vital for ensuring that only legitimate users receive credentials.

Generating a Credential Request

To request a credential, users generate a credential request that includes a commitment to their secret key and a zero-knowledge proof of this secret. This request is sent to the issuer, who will then verify its authenticity before issuing the credential. This process ensures that the user’s identity remains anonymous while their credential request is being processed.

Issuing a Credential

Upon receiving a valid credential request, the issuer generates the credential using their private key. This credential is then sent back to the user, completing the issuance process. The credential includes a digital signature, attribute values, and a unique identifier, all encrypted to protect the user’s privacy.

Presentation Protocol

When users need to prove possession of a credential, they engage in the presentation protocol. This involves generating a proof of possession that selectively discloses certain attributes of the credential while keeping others hidden. The verifier can then confirm the credential’s validity without learning any additional information about the user or the undisclosed attributes.
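
Continuing the sketch from the signing section, here is how selective disclosure might look with the same assumed @mattrglobal/bbs-signatures package: the holder derives a proof that reveals only one attribute, and the verifier checks it without seeing the rest.

```ts
// Selective disclosure sketch, assuming @mattrglobal/bbs-signatures.
import {
  generateBls12381G2KeyPair,
  blsSign,
  blsCreateProof,
  blsVerifyProof,
} from "@mattrglobal/bbs-signatures";

const enc = new TextEncoder();

async function presentAndVerify() {
  // Setup repeated from the signing sketch so this runs standalone.
  const keyPair = await generateBls12381G2KeyPair();
  const messages = ["name: Alice", "age: 30", "country: TH"].map((m) => enc.encode(m));
  const signature = await blsSign({ keyPair, messages });

  // Holder: derive a proof bound to the verifier's nonce, revealing only "age".
  const nonce = enc.encode("verifier-challenge-123");
  const proof = await blsCreateProof({
    signature,
    publicKey: keyPair.publicKey,
    messages,
    nonce,
    revealed: [1], // indices of the attributes to disclose
  });

  // Verifier: validates the proof using only the disclosed attribute.
  const { verified } = await blsVerifyProof({
    proof,
    publicKey: keyPair.publicKey,
    messages: [messages[1]],
    nonce,
  });
  console.log(verified); // true, without "name" or "country" being revealed
}

presentAndVerify().catch(console.error);
```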

Use Cases and Applications

Anonymous credentials are not just a theoretical construct; they have practical applications that can transform various industries by enhancing privacy and security. For instance, in healthcare, patients can verify their eligibility for services without revealing sensitive health information. In the digital realm, users can prove their age, nationality, or membership status without disclosing their full identity, opening doors for secure, privacy-focused online transactions and interactions. Governments can implement anonymous credential systems for digital identities, allowing citizens to access services with ease while protecting their personal data. These applications demonstrate the versatility and transformative potential of anonymous credentials in creating a more secure and private digital world.

Challenges and Considerations

While anonymous credentials offer significant benefits, their implementation is not without challenges. Technical complexity and the need for widespread adoption across various platforms and services can hinder their immediate integration into existing systems. Moreover, ethical considerations arise regarding the potential for misuse, such as creating undetectable false identities. Therefore, deploying anonymous credentials requires careful planning, clear regulatory frameworks, and ongoing dialogue between technology developers, users, and regulatory bodies to ensure they are used ethically and effectively.

Closing Thoughts

Anonymous credentials and zero-knowledge proofs represent a significant advancement in digital privacy and security. By allowing users to verify their credentials without revealing personal information, they pave the way for a more secure and private online world. While challenges remain, the potential of these technologies to transform how we think about identity and privacy in the digital age is undeniable. As we continue to explore and implement these solutions, we move closer to achieving a balance between security and privacy in our increasingly digital lives.

The journey towards a more private and secure digital identity is ongoing, and anonymous credentials play a crucial role in this evolution. We encourage readers to explore further, engage in discussions, and contribute to projects that aim to implement these technologies. By fostering a community of informed individuals and organizations committed to enhancing digital privacy, we can collectively drive the adoption of anonymous credentials and shape the future of online security and identity management. Together, let’s build a digital world where privacy is a right, not an option.

Empowering Privacy with Anonymous Credentials was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo

Aergo’s Developments into the Consumer Segment: Q1 Update

For many years, Aergo has been at the forefront of providing blockchain-based solutions catering to enterprises, businesses, governments, and non-profits. With Booost, Aergo is now venturing into new territories within the consumer segment, signaling a significant expansion of its brand and services.

In the latter half of 2023, Booost embarked on a strategic shift, deliberately slowing down its feature rollout to focus on a comprehensive overhaul of its infrastructure. This move to Booost v2.0 was not merely an upgrade but a foundational step towards achieving seamless integration across its offerings, ensuring that Booost remained competitive in the fast-evolving blockchain landscape.

The results of this strategic pivot have begun to manifest. The upgraded infrastructure has enabled Booost to introduce new features such as Chat, Check-in, and Memories more efficiently. These features, aimed at enriching the socialfi experience, indicate Booost’s commitment to surpassing consumer expectations and enhancing user engagement.

Booost’s Q1 2024 Overview

Booost set ambitious objectives for the start of 2024. Significant progress has been made as the first quarter comes to a close, despite some minor delays. Below is an update on Booost’s Q1 progress, marked by several significant milestones:

- New Social Features: Booost has successfully launched Chat, Check-in, and Memories, fully activating these crucial components within its socialfi strategy.
- Socialfi MVP: The Mission Center, a cornerstone of Booost’s platform, is nearing its launch, expected by late April.
- Ambassador Pilot Phase 1: The initial group of ambassadors is gearing up to debut by the end of April. This initiative was launch-ready in January but was held back to tie it more closely to other product launches and roadmap plans.
- Website and Whitepaper Refresh: Updates to both the website and whitepaper are imminent, with a reveal anticipated by late April.

Partnership Highlights

The quarter also saw Booost enhance its ecosystem through strategic partnerships, including collaborations with Beoble and HK Web3 Festival. These alliances, along with others in the pipeline, are crucial for strengthening Booost’s network and market presence.

Looking Forward

Excitement is building for the introduction of Booost v3.0. The delays experienced in recent initiatives are seen as laying the groundwork for significant advancements Booost is preparing to roll out.

As Booost continues to bridge the gap between Aergo’s enterprise-grade solutions and the consumer market, it solidifies Aergo’s position at the forefront of the blockchain revolution. The community is encouraged to stay tuned for further updates as Booost propels Aergo’s journey into the consumer space, reshaping the digital landscape with each innovative step.

For further information on Aergo’s emphasis on dynamic NFT and consumer NFT markets, please refer to:

‘BOOOST’ing the Dynamic NFT and Consumer NFT Markets

Aergo’s Developments into the Consumer Segment: Q1 Update was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


BLOCKO XYZ Takes Lead in Blockchain-Based Identification Patent Development

BLOCKO XYZ announced that it has been selected for the ‘2024 Standard Patent Strategy Support Project’ organized by the Korea Intellectual Property Strategy and Development Agency (KISTA).

The initiative, led by KISTA, aims to assist companies and organizations in developing standard patents. Expert teams, including patent specialists and attorneys, will offer tailored strategies through the ICT standard advisory service provided by the Korea Telecommunications Technology Association (TTA). In the current year, KISA (Korea Internet & Security Agency) will sponsor two blockchain companies to obtain standard patents, aiming for potential international adoption. KISA’s collaboration with KISTA is geared towards facilitating the global expansion of the two companies.

BLOCKO XYZ, one of the selected companies, endeavors to advance blockchain standardization by employing open badges for personal identity verification on the blockchain. Amid recent concerns over copyright issues within the NFT market, demand has surged for institutional and technical standards to mitigate such problems.

BLOCKO XYZ has been innovating with blockchain technology, offering a comprehensive platform for blockchain-based digital badges to authenticate credentials and share professional information seamlessly. With its expertise and technological prowess, BLOCKO XYZ has effectively implemented numerous projects across government and corporate sectors, utilizing the Aergo Mainnet and Aergo Enterprise as its foundation.

Kim Kyung-hoon, CEO of BLOCKO XYZ, expressed his commitment to enhancing the stability and credibility of the NFT market while safeguarding participants’ rights through the company’s blockchain-based personal identity verification services. The selection for this project underscores BLOCKO XYZ’s dedication to leading blockchain standardization efforts.

For further information on Aergo’s emphasis on dynamic NFT and consumer NFT markets, please refer to:

‘BOOOST’ing the Dynamic NFT and Consumer NFT Markets

The original article is published on ETNews:

블로코엑스와이지, 블록체인 표준화 선점을 위한 시동

BLOCKO XYZ Takes Lead in Blockchain-Based Identification Patent Development was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

Guide to Fraud Risk Management and How to Mitigate Fraud

Every year, businesses in the retail and financial services industries lose billions of dollars to data breaches and fraud, and the threat of future incidents continues to escalate. In the ecommerce space alone, companies lost $38 billion to fraud in 2023 and are forecast to lose a staggering $362 billion between 2023 and 2028. Meanwhile, in financial services, 25% of companies lost over $1 million to fraud last year.

Whether such exploitation is initiated by external parties or internal bad actors, these events can put customers’ private information at risk and be extremely costly for organizations to resolve. As such, many enterprises are focused on establishing fraud prevention and mitigation measures that can help them avoid the risk of breaches and fraud.

Fraud risk management is a critical component of a company’s ongoing success and longevity in the modern business environment. Fraud will not simply cease on its own, so businesses must implement a proactive approach that protects their sustained business growth and customer trust. The majority of security, IT, and business decision makers (76%) see identity fraud risks as their top priority in managing fraud. With that in mind, let’s discuss the best ways to manage fraud risk, especially when it comes to identity threats.

Wednesday, 10. April 2024

Holochain

How the World Is Shifting to Regenerative Economics

#HolochainChats with Pete Corke

Pete Corke's love for nature, nurtured through his walks in the coastal rainforests of British Columbia, Canada, inspired him to find ways to invest in the preservation of these precious ecosystems.

As the Director of Vision & Leadership at Kwaxala, an Indigenous-led and majority-owned organization focused on protecting at-risk forests, Pete brings a unique perspective to the challenges and opportunities of creating a regenerative economic system.

With a background spanning technology, brand marketing, and supply chain management, Pete recognized that traditional conservation methods often left local economies poorer, relying heavily on philanthropy and government programs. He saw the need for a new approach that could generate sustainable economic value while protecting the environment.

Regenerative economics offers a solution by providing a for-profit orientation to conservation, benefiting communities and promoting biodiversity. However, accounting for the value of ecosystem services and ensuring the reliability of carbon credit markets can be complex. This is where Web3 technologies play a crucial role, providing tools to create transparency, trust, and custom solutions for documenting nature's value and connecting it to global markets.

Problems With the Current Extractive Economic System 

The current economic system primarily focuses on removing resources from the earth — creating a world that often overlooks the value of living nature.

As Pete Corke explains, "Living nature isn't represented in [our economy]. It's all dead and extracted from nature. And in fact, the only way of creating value-flows from the current system into protected nature is by literally burning the transactional medium, burning money through philanthropy."

In other words, the current economic system only recognizes the value of nature when it is extracted and sold, and the only way to direct money towards conservation is through philanthropic donations, which do not generate any economic returns or retain asset value.

This approach has led to a lack of ways to recognize the value of ecosystems and the services they provide. Remote communities, like those in British Columbia, Canada, face a tough choice between participating in the resource extraction that makes up most of the local economy or supporting conservation efforts that often lack economic incentives.

Additionally, the established laws and structures around land use are deeply rooted in this resource-focused economic system, making it difficult to create change. So how can one aim to operate within the current structure and transform the economics?

Pete emphasizes that "Traditional conservation makes British Columbia poorer. It's giving up tax revenues, it's giving up export revenues. And that's at a provincial level. At a local level, they're giving up jobs in remote communities."

The extractive industry's control over these structures poses a significant challenge to those seeking to create a regenerative economic system. As Pete points out, "It's not about replacing extractive economics. It's about creating an economic counterbalance to it."

Achieving this balance requires not only developing new economic models but also navigating complex laws and governance frameworks that have long prioritized resource extraction over regeneration.

Opportunities for Regenerative Economics

Despite the challenges posed by the current extractive economic system, there are significant opportunities for regenerative economics to create a counterbalance. Specifically, regenerative economics would create incentive-based conservation efforts that spur economic growth while protecting nature.

As Pete Corke explains, "The whole point of the regenerative economics space is, hey, we need a two-poled [economic] world here to create a counterbalance."

One key opportunity lies in generating value flows to conserve natural areas and the local communities that support these ecosystems. Pete emphasizes the importance of creating "an economic activity that's based on regeneration [which] counteracts extractive economic value." 

Kwaxala is building a global network of protected forest areas that, at scale, generate economic markets for natural systems. Their projects deliver ongoing returns for both forest communities and mindful investors.

Most resource extraction happens on public lands, with companies buying the rights (to log, mine, or drill for oil) that give them use of that land. By buying these rights and safeguarding them, managing the land appropriately, and documenting the revitalization of these ecosystems, it’s possible to demonstrate the value of conservation in a global market. 

These thriving ecosystems provide numerous benefits to the people, industries, and municipalities who interact with them. They clean the air and water, offer mental health benefits to individuals, provide educational opportunities and recreational spaces for children, and attract tourism to communities. 

This only scratches the surface of the economic and social benefits of rich, site-specific ecosystems at both local and global levels, but we don’t have good ways to account for these benefits within our economy. One market tool is high-quality carbon credits, which can be produced through the act of conservation to start accounting for these benefits.

By developing mechanisms that allow individuals and businesses to invest in and hold value in living natural assets, regenerative economics can provide a viable alternative to extractive industries. While carbon offsets and biodiversity credits represent an economic mechanism for recognizing the services provided by nature, there are very few mechanisms that let you hold equity or investment value in the supply side of that value flow. But that is changing, and Kwaxala is leading in that space.

The Next Economic Mechanism

Web3 tools, such as blockchain and smart contracts, offer promising solutions for creating transparency, provenance, and liquidity in regenerative economic systems. 

As Pete points out, "Web3 is so powerful because we're not trying to just build regenerative products, we're trying to build an entire regenerative stack and entire economic ecosystem that counterbalances the extractive ecosystem."

These tools enable the creation of new financial products and value flows that can be quickly prototyped and scaled, providing a more efficient means of realizing the potential of regenerative economics. 

Web3 technologies can help ensure the authenticity and on-the-ground truth of regenerative assets, such as carbon offsets, by embedding due diligence and provenance information directly into the digital tokens representing these assets. 

These technologies also allow transparent value redistribution back into the communities on the ground, ensuring that the regenerative economy is built from the foundations up in a far more equitable way than the extractive/colonial economy ever was. 

Ultimately, the success of regenerative economics hinges on shifting mindsets and creating a new paradigm that recognizes the inherent value of nature. As Pete states, "Nature doesn't just need a seat at the economic table, we need to acknowledge that it is the table and it's also the air in the room! The human economic system is a subsidiary of the natural economic system."

As the world faces mounting environmental challenges, the need for a regenerative economic system has never been more pressing. Pete Corke and Kwaxala's work in partnering with Indigenous communities, protecting at-risk forests, and generating regenerative returns through innovative financial mechanisms that allow anybody to hold equity value in a living forest serves as a powerful example of how we can begin to create a true counterbalance to the extractive economy.

By leveraging Web3 tools such as Holochain, regenerative projects can create a trustworthy data layer that ensures transparency and trust in regenerative economic systems. Holochain's architecture allows for the distributed storage and processing of vast amounts of data, essential for documenting ecosystem interactions and ensuring the integrity of regenerative assets.

Centralized solutions have often proved untrustworthy, with instances of carbon credit markets failing to deliver on their promises due to lack of oversight and accountability. These examples highlight the need for trustable solutions that provide transparent, verifiable, and tamper-proof records of regenerative assets and their associated impacts. Holochain's distributed data layer creates a system resistant to manipulation and greenwashing, ensuring the value generated by regenerative economics is genuine and long-lasting.

Recognizing the intrinsic worth of healthy ecosystems can create a new economic paradigm that prioritizes the well-being of both human communities and the natural world. The path forward requires collaboration, innovation, and a willingness to challenge entrenched structures.


auth0

Facial Biometrics: The Key to Digital Transformation and Enhanced Security

Facial biometrics revolutionizes digital processes. Implemented thoughtfully, it provides businesses a competitive edge while safeguarding privacy.

Tokeny Solutions

Tokeny’s Talent | Fedor

The post Tokeny’s Talent | Fedor appeared first on Tokeny.
Fedor Bolotnov is a QA Engineer at Tokeny.

Tell us about yourself!

Hi, my name is Fedor, and I’m a QA Engineer at Tokeny. I prefer the title QA Engineer over Tester because QA involves more than just testing. While testing is a significant aspect, QA also encompasses communication, processes, and creating an environment that prevents errors.

What were you doing before Tokeny and what inspired you to join the team?

Before joining Tokeny, I held various QA roles, ranging from QA to Team Lead, in several companies in Russia. However, in 2022, I unexpectedly relocated to Spain and had to restart my QA career.

The concept of revolutionizing the traditional finance sector intrigued me, prompting my decision to join Tokeny. As a QA professional, part of my role involves “ruining” someone else’s code, but only to enhance its quality and resilience. Essentially, we strive to challenge the current system to pave the way for a newer and superior one.

How would you describe working at Tokeny?

Fun, educational, and collaborative. With a diverse team, each member brings their unique life and career experiences and expertise, fostering continuous learning and knowledge sharing every day.

What are you most passionate about in life?

Sleep, haha! I cherish a good 10-11 hours at least once a week. But on a serious note, learning something new is what really motivates me to get out of bed every morning. Of course, I also adore spending time with my wife and our cat (fortunately, she can’t read, so she won’t find out she’s second on the list!). Additionally, I’m a bit obsessed with sports, both playing and watching American football.

What is your ultimate dream?

To borrow from Archimedes: “Give me a point that is firm and immovable, and I will fall asleep.” So, I don’t have a single ultimate dream, but rather an endless list of tasks to accomplish to ensure my family’s happiness.

What advice would you give to future Tokeny employees?

Don’t be afraid to experiment and never give up.

What gets you excited about Tokeny’s future?

The borderlessness of it all. It opens up endless possibilities beyond any limits we can currently dream or imagine.

He prefers:

Tea ✓ | Coffee ✓
Book | Movie
Work from the office ✓ | Work from home ✓
Dogs | Cats ✓
Text | Call ✓
Burger | Salad ✓
Mountains ✓ | Ocean
Wine ✓ | Beer ✓
Countryside | City ✓
Slack | Emails
Casual | Formal ✓
Swimsuit ✓
Crypto ✓ | Fiat
Morning ✓ | Evening


The post Tokeny’s Talent | Fedor appeared first on Tokeny.


Spherical Cow Consulting

Privacy and Personalization on the Web: Striking the Balance

This is the transcript of my YouTube explainer video on why privacy and personalization are so hard to balance. Likes and subscriptions are always welcome! The post Privacy and Personalization on the Web: Striking the Balance appeared first on Spherical Cow Consulting.


Welcome to the Digital Cow Network! I’m your host, Heather Flanagan. In today’s explainer, we’re going to look at some of the challenges of balancing privacy with the desire for personalization on the web. This is important because the standards and regulations under development today are trying to do this, too.  Sneak preview: asking for user consent is not particularly helpful here. Think of it as necessary but not sufficient. 

When we surf the web, we want to see more of what’s of interest to us, and we also want to know that our privacy is being protected. Let’s look at this dichotomy—the desire for privacy versus the desire for personalization—that’s at the heart of our digital lives. How much are we willing to share for a tailored online experience?

The Personalization Phenomenon

Personalization is everywhere – from your social media feed to shopping recommendations. Millennials and Gen Z in particular expect a level of personalization that older generations aren’t quite used to. But ever wondered how it works? Websites and apps collect data about our preferences, activities, and more to create a custom experience. Sometimes that is as simple as optimizing for whatever web browser you use (Chrome, Firefox, Safari, or something else). Other times it’s a lot more invasive.

The Data Behind Personalization

Let’s break down the data journey. It starts with what you click, what you search, and even how long you linger on a page. This data forms a digital profile, which then guides the content you see. 

Here’s where the magic of Real-Time Bidding comes in! Real-time bidding only works because the Internet is blindingly fast for most, especially compared to the days of old-school dial-up connections. It works like this: 

1. You visit a website.
2. The website has a space on it for an ad.
3. That space includes a piece of code that says “go to this ad exchange network, and take information about this website AND information about the user (either via cookies, or their browser fingerprint) AND the physical location of the user, because their device probably knows that, and send it all to the ad exchange.”
4. The ad exchange has a list of advertisers who have preloaded information on what they’re willing to pay to promote their ad based on specific criteria about the website, the user, and even who the user is physically close to.
5. The ad exchange immediately figures out who wins the auction and returns the winning ad to be embedded in the website.

All this takes milliseconds. 

Real-time bidding: the Internet is fast enough to stream movies… and to collect information about you, where you are, what you’re looking at, and even where you focus your attention on the screen in real-time.
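To make the auction step concrete, here is a minimal sketch of the exchange-side logic. This is an illustration of the pattern only; the names (BidRequest, Advertiser, runAuction) are invented for this example and are not part of any real RTB API such as OpenRTB.

```typescript
// Minimal sketch of an ad exchange auction (illustrative names only).

interface BidRequest {
  siteCategory: string;   // information about the website
  userSegments: string[]; // cookie- or fingerprint-derived interests
  geo: string;            // approximate physical location of the user
}

interface Advertiser {
  name: string;
  // Preloaded rule: what this advertiser will pay for this impression.
  bid: (req: BidRequest) => number; // 0 means "not interested"
}

function runAuction(req: BidRequest, advertisers: Advertiser[]): string {
  let winner = "house-ad"; // fallback if nobody bids
  let best = 0;
  for (const adv of advertisers) {
    const offer = adv.bid(req);
    if (offer > best) {
      best = offer;
      winner = adv.name;
    }
  }
  return winner; // the winning ad is returned to the page
}

// Example: a sporting-goods brand outbids a generic retailer
// for a visitor reading a sports site.
const ad = runAuction(
  { siteCategory: "sports", userSegments: ["running"], geo: "Berlin" },
  [
    { name: "generic-retailer", bid: () => 0.2 },
    { name: "sporting-goods", bid: (r) => (r.siteCategory === "sports" ? 1.5 : 0) },
  ],
);
console.log(ad); // "sporting-goods"
```

A real exchange evaluates thousands of preloaded bids in parallel and applies formal auction rules, but the shape of the decision is the same.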

Privacy in the Personalized World 

And there’s the catch: this level of personalization requires access to a lot of personal data. That’s where privacy concerns come in. How do companies ensure our data is safe? How much control do we have over what’s collected?

Thanks to laws and regulations like the European Union’s General Data Protection Regulation (GDPR), individuals do have some ability to control this flow of information. For example, many websites show cookie banners that are supposed to let you decide what type of information you’re willing to share. There are also authenticated IDs for when an individual has logged in and provided consent to be tracked. Google’s Privacy Sandbox includes several mechanisms under testing, like the Protected Audience API and the Topics API, to help with ethical advertising.
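As a concrete taste of one of those mechanisms, the sketch below queries the Topics API from page script. document.browsingTopics() is the entry point Chrome documents for the Topics API, but support is limited to Chromium browsers with the feature enabled, and the exact return shape may differ between versions, so the code feature-detects first and the result type here is only an assumption.

```typescript
// Minimal sketch: reading coarse interest topics via the Topics API.
// Feature-detect, since only some Chromium browsers expose this.
async function logInterestTopics(): Promise<void> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<Array<{ topic: number; version: string }>>;
  };
  if (!doc.browsingTopics) {
    console.log("Topics API not available in this browser");
    return;
  }
  // Returns a few coarse-grained topics inferred locally from recent
  // browsing, instead of exposing a full cross-site profile.
  const topics = await doc.browsingTopics();
  console.log(topics.map((t) => t.topic)); // e.g. [186, 265]
}

logInterestTopics();
```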

Navigating the Trade-offs

But ultimately, accommodating privacy, personalization, and legal requirements around both is a trade-off, both for advertisers and for individuals. Personalization can make people’s online life more convenient and enjoyable. The increase in regulatory pressure, though, means that every entity involved in serving up a website and its associated ads to an individual needs to be a part of the consent process. It’s a barrage of “are you ok with us collecting data? How about now? Is now ok? What about over here? And here? And here, too?” This is a terrible user experience.

Best Practices for Users and Developers 

So, what can we do? For individuals, it’s about making informed choices, understanding privacy settings, and being patient with the barrage of consent requests. For developers, the challenge is to respect user privacy while providing value. This is all still a very new space, which is why there is so much activity within the W3C and among the browser vendors to find a path forward that satisfies the business requirements while still keeping on the right side of privacy law. Organizations that are in the business of benefiting from tracking need to get involved in the standards process to test the APIs under development and offer feedback the API developers can use.

Wrap Up: The Future of Privacy and Personalization 

Looking ahead, the landscape is ever-evolving. New technologies, stricter privacy laws, and changing user attitudes are reshaping this balance. If you’re looking at the One True Way for your business to thread this needle, I’m afraid you’ve still got some waiting around to do. The browser vendors are trying different things at the same time lawyers are trying to find different ways to interpret the legal requirements into technical requirements. If it were easy, it would have been solved already.

Thanks for joining me! Stay curious, stay informed, and if you have questions, go ask my AI clone, Heatherbot, on my website at https://sphericalcowconsulting.com. I’ve trained it to chat with you!

The post Privacy and Personalization on the Web: Striking the Balance appeared first on Spherical Cow Consulting.


KuppingerCole

Identity Threat Detection and Response (ITDR): IAM Meets the SOC


by Mike Neuenschwander

The nascent identity threat detection and response (ITDR) market is gaining tremendous momentum in 2024. Cisco and Delinea recently jumped into the market with their recent acquisitions of Oort and Authomize, respectively. Top cybersecurity companies BeyondTrust, CrowdStrike, and SentinelOne continue to make substantial investments in ITDR. Microsoft has leaned into ITDR by blending technologies from its Entra and Defender XDR products. Other vendors, such as Gurucul, Securonix, and Sharelock are attempting to broaden the definition of ITDR in various ways. Given these developments, the market remains difficult to quantify. Arguably, there isn’t even a real “ITDR market,” because it’s ultimately more like an activity or an identity protection platform. Many of these vendors don’t even use the term ITDR in their products’ names. But what is clear is that ITDR is a banner under which enterprise identity and access management (IAM) and security operations center (SOC) teams must unite. So, what’s your organization’s best route to protecting identities and IAM infrastructure? This Leadership Compass evaluates the market dynamics for ITDR in 2024 and provides guidance on how your organization can take advantage of these critical technologies.

Ontology

Ontology Weekly Report (April 2nd — April 8th, 2024)

Ontology Weekly Report (April 2nd — April 8th, 2024)

This week at Ontology has been full of exciting developments, continued progress in our technical milestones, and active community engagement. Here’s everything you need to know about our journey over the past week:

🎉 Highlights

Ontology at PBW: We’re excited to announce that Ontology will be participating in PBW! Come meet us and learn more about our vision and projects.

Latest Developments

StackUp Quest Part 2 Live: The second part of our thrilling StackUp quest is now officially live. Dive in for new challenges and rewards!

Weekly Update with Clare: Hosted on the Ontology Discord channel, Clare brought our community the latest updates and insights directly from the team.

Development Progress

Ontology EVM Trace Trading Function: Now at 80%, we’re making significant strides in enhancing our trading functionalities within the EVM.

ONT to ONTD Conversion Contract: Progress has been ramped up to 45%, streamlining the conversion process for our users.

ONT Leverage Staking Design: We’ve reached the 30% milestone in our development of an innovative staking mechanism to leverage ONT holdings.

Product Development

March’s Top 10 DApps: Check out the top 10 DApps on ONTO for March, showcasing the diverse and vibrant ecosystem on Ontology.

On-Chain Activity

DApp Stability: Our MainNet continues to support 177 dApps, maintaining a robust and dynamic ecosystem.

Transaction Growth: We’ve seen an increase of 2,226 dApp-related transactions and a total of 12,604 transactions on MainNet this week, reflecting active engagement within our network.

Community Growth

Engaging Discussions: Our Twitter and Telegram channels are buzzing with lively discussions and the latest developments. We invite you to join the conversation and stay updated.

Telegram Discussion on Digital Identity: Led by Ontology Loyal Members, this week’s discussion focused on “Blockchain’s Role in Digital Identity,” exploring how blockchain technology is revolutionizing digital identity management.

Stay Connected

We encourage our community members to stay engaged with us through our official social media channels. Your insights, participation, and feedback are crucial to our continuous growth and innovation.

Follow us on social media for the latest updates:

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

As we move forward, we’re excited about the opportunities and challenges that lie ahead. Thank you for being a part of our journey. Stay tuned for more updates, and let’s continue to build a more secure and equitable digital world together!

Ontology Weekly Report (April 2nd — April 8th, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Bluesky User FAQ (Portuguese)

Welcome to the Bluesky app! This is a user guide that answers some common questions.

This user guide was translated from the English version here. Please excuse any inaccuracies in the translation!

For general questions about the Bluesky company, please visit our FAQ here.

Joining Bluesky

How do I join Bluesky?

You can create an account at bsky.app. (No invite code needed!)

You can download the Bluesky app for iOS or from Google Play, or use Bluesky on the desktop.

Moderation

What is Bluesky’s approach to moderation?

Moderation is a fundamental part of social networks. At Bluesky, we are investing in safety in two ways. First, we built our own dedicated moderation team to provide continuous coverage upholding our community guidelines. In addition, we recognize that there is no one-size-fits-all approach to moderation: no single company can get online safety right for every country, culture, and community in the world. So we are also building something bigger: an open-source ecosystem of moderation and safety tooling that gives communities the power to create their own spaces, with their own norms and preferences. Even so, using Bluesky is familiar and intuitive. It is a simple app at first glance, but under the hood we enable real innovation and competition in social media by building a new kind of open network.

You can read more about our approach to moderation here.

What does muting do?

Muting prevents you from seeing any notifications or top-level posts from an account. If they reply in a thread, you will see a section that says “Post from an account you muted” with an option to show the post. The account will not know it has been muted.

What does blocking do?

Blocking prevents interaction. When you block an account, neither you nor the other account will be able to see or interact with each other’s posts any longer.

How do I report abuse?

You can report posts by clicking the three-dot menu. You can also report an entire account by visiting its profile and clicking the three-dot menu there.

Where can I read more about your plans for moderation?

You can read more about our approach to moderation here.

Custom Feeds

What are custom feeds?

Custom feeds are a Bluesky feature that lets you choose the algorithm that shapes your social media experience. Imagine wanting your timeline to show only posts from your mutuals, or only posts with cat photos, or only posts related to sports: you can simply select your preferred feed from an open marketplace.

For users, the ability to customize your feed puts control of your attention back in your own hands. For developers, an open marketplace of feeds provides the freedom to experiment with and publish algorithms that anyone can use.

For example, try this feed.

You can read more about custom feeds and algorithmic choice in our blog post here.

How do I use custom feeds?

In Bluesky, click the hashtag icon at the bottom of the app. From there, you can add and discover new feeds. You can also browse feeds directly through this link.

How can I create a custom feed?

Developers can use our feed generator starter kit to create a custom feed. Eventually, we will provide better tools so that anyone, including non-developers, can build custom feeds.

In addition, SkyFeed is a tool created by an independent developer that has a Feed Builder feature you can use.

Custom Domains

How do I set my domain as my handle?

Please see our tutorial here.

Can I buy a domain directly through Bluesky?

Yes, you can buy a domain and set it as your username through Bluesky here.

Data Privacy

What is public and what is private on Bluesky?

Bluesky is a public social network. Think of your posts as blog posts: anyone on the web can see them, even people without an invite code. An invite code simply grants access to the service we run that lets you publish posts yourself. (Developers familiar with the API can see all posts, whether or not they have an account of their own.)

Specifically:

Posts and likes are public. Blocks are public. Mutes are private, but mute lists are public. Your mute list subscriptions are private.

Why are my posts, likes, and blocks public?

The AT Protocol, on which Bluesky is built, was designed to support public conversations. To make public conversations portable across all kinds of platforms, your data is stored in data repositories that anyone can view. This means that no matter which server you choose to join, you can still see posts across the whole network, and if you choose to switch servers, you can easily take all of your data with you. This is what makes the user experience of Bluesky, a federated protocol, feel like every other social media app you have used before.

Can I set my profile to private?

Currently, there are no private profiles on Bluesky.

What happens when I delete a post?

Once you delete a post, it is immediately removed from the user-facing app. Any image attached to your post is also immediately deleted from our data storage.

However, it takes a little longer for a post’s text content to be fully deleted from storage. The text content is stored in a non-human-readable form, but the data can still be queried via the API. We will periodically run back-end deletions to fully erase this data.

Can I get a copy of all my data?

Yes: the AT Protocol keeps user data in a content-addressed archive. This archive can be used to migrate account data between servers. Developers can use this method to export a copy of their repository. For non-developers, tools are still being built to make this easy.
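For developers, here is a minimal sketch of that export. The AT Protocol exposes an XRPC endpoint, com.atproto.sync.getRepo, that returns an account’s repository as a CAR archive; the host and DID below are placeholders, and error handling is kept to a minimum.

```typescript
// Minimal sketch: downloading an account repository as a CAR file.
// Requires Node 18+ (built-in fetch). Host and DID are placeholders.
import { writeFile } from "node:fs/promises";

async function exportRepo(host: string, did: string): Promise<void> {
  const url = `${host}/xrpc/com.atproto.sync.getRepo?did=${encodeURIComponent(did)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`export failed: HTTP ${res.status}`);
  const car = new Uint8Array(await res.arrayBuffer());
  await writeFile(`${did.replaceAll(":", "_")}.car`, car); // content-addressed archive
}

exportRepo("https://bsky.social", "did:plc:example123");
```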

Update: Technical readers can learn more about downloading and extracting data in this post on the atproto developer blog.

You can read our privacy policy here.


Security

How do I reset my password?

Click “Forgot” on the login screen. You will receive an email with a code to reset your password.

What if I don’t receive the password reset email?

Double-check your account email in your settings and add noreply@bsky.social to your list of allowed senders.

How do I change my account email?

You can update and verify your account email in Settings.

Will you implement two-factor authentication (2FA)?

Yes, 2FA is on our short-term development roadmap.

Bluesky, the AT Protocol, and Federation

What is the difference between Bluesky and the AT Protocol?

Bluesky, the public benefit company, is developing two products: the AT Protocol and the Bluesky microblogging app. The Bluesky app is meant to demonstrate the capabilities of the underlying protocol. The AT Protocol is built to support an entire ecosystem of social apps that goes beyond microblogging.

You can read more about the differences between Bluesky and the AT Protocol in our general FAQ here.

How does federation affect me as a user of the Bluesky app?

We are prioritizing the user experience and want to make Bluesky as user-friendly as possible. No matter which server you join, you can see posts from people on other servers and take your data with you if you choose to switch servers.

Is Bluesky built on a blockchain? Does it use cryptocurrency?

No and no.

Does Bluesky support Handshake (HNS) domains?

No, and there are no plans to.

Miscellaneous

How do I send feedback?

In the mobile app, open the left-hand menu and click “Feedback”. In the web app, there is a “Send feedback” link on the right side of the screen.

You can also email support@bsky.app with support requests.

What is a post on Bluesky called?

The official term is “post”.

How do I embed a post?

There are two ways to embed a Bluesky post. You can click the three-dot menu directly on the post you want to embed to get the code snippet.

You can also visit embed.bsky.app and paste the post URL to get the code snippet.

How can I find friends or mutuals from other social networks?

Third-party developers maintain tools for finding friends from other social networks. Some of these projects are listed here. Please generate an App Password via Settings > Advanced > App Passwords to log in to any third-party apps.
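As an illustration of how a third-party tool typically uses an App Password, the sketch below creates a session over the AT Protocol’s XRPC interface. com.atproto.server.createSession is the public login endpoint; the handle and password here are placeholders, and a real app should store the returned tokens securely rather than logging them.

```typescript
// Minimal sketch: logging in to a third-party atproto tool with a
// handle plus an App Password (never your main account password).
async function loginWithAppPassword(
  handle: string,
  appPassword: string,
): Promise<string> {
  const res = await fetch(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ identifier: handle, password: appPassword }),
    },
  );
  if (!res.ok) throw new Error(`login failed: HTTP ${res.status}`);
  const session = await res.json();
  return session.accessJwt; // bearer token for subsequent XRPC calls
}

loginWithAppPassword("alice.bsky.social", "xxxx-xxxx-xxxx-xxxx");
```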

Is there a dark mode?

Yes. You can change the display settings to light or dark mode, or to match your system settings, via Settings > Appearance.

The answers here are subject to change. We will update this guide regularly as we continue to roll out more features. Thank you for joining Bluesky!

Tuesday, 09. April 2024

KuppingerCole

Navigating Security Silos: Identity as a New Security Perimeter


Companies are grappling with countless challenges in the realm of identity security. These challenges range from dealing with the dynamic nature of identities, the rise of insider threats, the ever-evolving threat landscapes, handling the complexity of identity ecosystems to insufficient visibility into identity posture. This webinar explores the fundamental role of Identity Threat Detection & Response (ITDR) and Identity Security Posture Management in fortifying defenses against these challenges.

Join identity and security experts from KuppingerCole Analysts and Sharelock.ai as they discuss moving beyond conventional security measures. In the ever-evolving landscape of cybersecurity, a mature Identity Security Posture is the key to resilience. To establish one, organizations need emerging technologies such as ITDR and Identity Security Posture Management, which offer a proactive and comprehensive defence.

Mike Neuenschwander, VP and Head of Research Strategy at KuppingerCole Analysts, will focus on organizations' current security status and the challenges they encounter. He will emphasize the importance of developing a mature Identity Security Posture to address shortcomings in conventional security measures.

Andrea Rossi, Senior Identity & Cybersecurity expert, President and Investor at Sharelock.ai, will discuss robust security measures, from Security Information and Event Management (SIEM) to Identity and Access Management (IAM/IAG) systems, eXtended Detection and Response (XDR), and essential perimeter defences like antivirus and firewalls. He will offer attendees practical insights into improving their organization's security posture.




Indicio

Faster Decentralized Identity Services Now Available for Europe

The post Faster Decentralized Identity Services Now Available for Europe appeared first on Indicio.
Indicio recently released its European Central Cloud Scale Mediator to provide lower latency and better service to local users. Here’s what you need to know to build an effective decentralized identity solution in Europe.

By Tim Spring

Faster mediation, happier customers

Indicio recently unveiled its dedicated European cloud scale mediator. Now, customers across Europe benefit from lower latency and faster service when sending messages through a decentralized network.

For those not familiar, a mediator plays a key role in delivering messages in a decentralized network. You can think of it almost like a post office: the mediator receives messages and can find and deliver them to the correct recipient. 

This is important because in a decentralized network there is no direct connection between you and the party you are trying to communicate with. Having a faster mediator allows a decentralized identity solution to process more messages and provide a better experience to the end user. 
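To make the post-office analogy concrete, here is a minimal sketch of what a mediator does: it accepts messages addressed to recipients that may be offline and hands them over when the recipient checks in. This illustrates the routing pattern only; it is not Indicio’s implementation, and the real DIDComm mediation protocol adds encryption, routing keys, and a formal pickup protocol on top.

```typescript
// Minimal sketch of mediator-style message routing (illustrative only).
type Message = { to: string; from: string; body: string };

class Mediator {
  private inboxes = new Map<string, Message[]>();

  // A recipient registers so the mediator accepts mail on its behalf.
  register(did: string): void {
    if (!this.inboxes.has(did)) this.inboxes.set(did, []);
  }

  // Senders deliver to the mediator, not to the recipient directly.
  deliver(msg: Message): boolean {
    const inbox = this.inboxes.get(msg.to);
    if (!inbox) return false; // unknown recipient
    inbox.push(msg);
    return true;
  }

  // The recipient's agent later picks up whatever has queued.
  pickup(did: string): Message[] {
    const queued = this.inboxes.get(did) ?? [];
    this.inboxes.set(did, []);
    return queued;
  }
}

const mediator = new Mediator();
mediator.register("did:example:holder");
mediator.deliver({
  to: "did:example:holder",
  from: "did:example:issuer",
  body: "credential offer",
});
console.log(mediator.pickup("did:example:holder")); // one queued message
```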

The European Cloud Scale Mediator is part of Indicio’s commitment to helping customers in Europe build powerful and fast identity solutions. Interest in the technology has been growing as the European Union looks to allow for easier travel and better identity management for its citizens. 

European Identity Standards

If you are looking to build identity technology or processes in Europe, there are a number of regulations and standards to keep in mind. The two most important are the “electronic Identification, Authentication and Trust Services” (eIDAS) regulation and the OpenID standards. If you’re not familiar with them, here’s a quick overview.

eIDAS 

The goal of the eIDAS Regulation is to ensure that electronic interactions are safer, faster, and more efficient, regardless of the European country in which they take place. The net result of the regulation is a single framework for electronic identification (eID) and trust services, making it more straightforward to deliver services across the European Union.

eIDAS2 (New!)

eIDAS was a good start, but as the technology has evolved, the European Union has recognized issues that the original regulation didn’t address. Namely, eIDAS does not cover how certificates or professional qualifications (for example, medical or attorney licenses) are issued and used, making these electronic credentials complicated to implement and use across Europe. More worryingly for the individual, it does not give the end user control over the data exchanged during the verification process.

eIDAS2 proposes creating a European digital identity that citizens control through a European Digital Identity Wallet (EDIW) and that relying parties across the EU can use to verify a citizen’s identity.

OpenID

The OpenID Foundation was created to “lead the global community in creating identity standards that are secure, interoperable, and privacy-preserving.” This group is a non-profit standards body; you aren’t technically required to comply with its specifications, but building along its guidelines will add interoperability to your project and allow more people to make use of it more easily.

OpenID for VC or OID4VC

The OpenID Foundation also provides specifications specifically for verifiable credentials, covering how they are issued, presented, and stored. You can learn more at the link above.
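To give a flavor of what these specifications define, here is a sketch of a credential offer in the shape used by recent OpenID for Verifiable Credential Issuance (OID4VCI) drafts. Field names follow those drafts and may differ between versions; the issuer URL and credential identifier are placeholders.

```typescript
// Sketch of an OID4VCI credential offer (field names per recent drafts;
// issuer URL and credential identifier are illustrative placeholders).
const credentialOffer = {
  credential_issuer: "https://issuer.example.eu",
  credential_configuration_ids: ["eu.example.university_degree"],
  grants: {
    "urn:ietf:params:oauth:grant-type:pre-authorized_code": {
      "pre-authorized_code": "one-time-code-from-issuer",
    },
  },
};

// A wallet typically receives this by scanning a QR code that encodes
// openid-credential-offer://?credential_offer=<url-encoded JSON>,
// then calls the issuer's token and credential endpoints.
console.log(JSON.stringify(credentialOffer, null, 2));
```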

Indicio makes building easy

Indicio offers a full package of support to our European customers. Not only do we have all the pieces to help you put together the decentralized identity solution that best meets your needs, but we also make it our priority to offer solutions that accommodate current and emerging protocols and standards.

To learn more about the European Cloud Scale Mediator or discuss a decentralized identity project that you have in mind please get in touch with our team here.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Faster Decentralized Identity Services Now Available for Europe appeared first on Indicio.


auth0

Calling a Protected API from an iOS Swift App

A step-by-step guide to leveraging OAuth 2.0 when accessing protected APIs via an iOS app built with Swift and integrated with Auth0.

Elliptic

Practical implementation of FATF Recommendation 15 for VASPs: Leveraging on-chain analytics for crypto compliance


Compliance officers are essential in implementing anti-money laundering (AML) and counter-terrorism finance (CFT) measures, particularly in the ever-evolving digital asset landscape. The Financial Action Task Force (FATF)’s Recommendation 15 focuses on the AML/CFT measures necessary for managing the risks of new technologies, including digital asset compliance. Although Recommendation 15 forms the cornerstone of global efforts to address financial crime risks in the crypto space, implementation has been noted as a challenge. There’s far more that crypto compliance professionals need to know. 


KuppingerCole

Identity Management Trends: Looking Back at EIC 2023 and Ahead to EIC 2024


by Martin Kuppinger

Only a bit more than two months to go until the global Digital Identity and IAM community gathers again at the European Identity and Cloud Conference (EIC) in Berlin. From June 4th to 7th, we will spend four days packed with interesting sessions. More than 230 sessions, more than 250 speakers – this again will become a great event and the place to be.

When looking back at EIC 2023, I remember the closing keynote, where I talked about three main topics and trends I had observed during that edition of EIC:

Decentralized Identity becoming a reality: My observation has been that decentralized identity started to shift from a discussion about early protocol developments and concepts to real-world impact on enterprise IAM and consumer business.

AI and Identity: Intensely discussed as a challenge we need to tackle.

Policy-based Access Controls: Not a new thing, but coming back with strong momentum, for instance in the development of modern digital services. Back in the spotlight.

So, where do we stand with this?

With the recent approval of eIDAS 2.0 by the European Parliament, which involves the EUDI Wallet (EU Digital Identity Wallet), the momentum for decentralized identity has received a massive boost. It is the topic in Digital Identity and IAM today, with many sessions around it at EIC.

AI and Identity has also got its own track now. For a good reason. It is such an important topic with so many facets: the identity of AI and autonomous components powered by AI, AI empowering IAM, and so on. We started the discussion in 2023 and will continue it in 2024.

Policy-based Access Controls are still evolving. We see more and more traction, both in our research and in our advisory conversations. More and more organizations are looking at how to make PBAC a reality.

Looking forward to EIC 2024: What can we expect from the next edition? Let me try a prediction of what I will cover on June 7th in the closing keynote:

Decentralized Identity again: With the momentum in the EU and beyond, it becomes increasingly clear what we already can do and where we need to join forces to meet the needs of businesses, consumers, citizens, and, last but not least, governments.

AI and Identity or “AIdentity”: I expect the conversations to shift increasingly from discussing the challenge (as in 2023) to discussing the solutions.

Identity Security: There is Digital Identity. There is Cybersecurity. There is Identity Security, which is about the identity impact on cybersecurity. With the ever-increasing threats, this is a topic that will be covered in many sessions.

While you may argue that two out of three will be the same as in 2023, this is not entirely true. On one hand, this demonstrates what the mega-trends are. On the other, we will look at the next level of evolution for these areas. What does it need for the perfect EUDI wallet that everyone wants to use? How will we deal with the identities of autonomous agents or bots acting on our behalf? So many questions. There will be many answers at EIC, but also a lot of food for thought for 2024 and beyond.

But with more than 230 sessions covering such a broad range of topics, from running your IAM well and modernizing it towards an Identity Fabric to Decentralized Identity and the future of CIAM (Consumer IAM), it is hard to predict which topics will end up the hottest, discussed not only in sessions but also in the breaks, at the evening events, and on all the other occasions EIC provides.

See you in Berlin. Don’t miss booking your individual time with the KuppingerCole Analysts and Advisors early. Looking forward to meeting you in person.


Dock

KYC Onboarding: 10 strategies to improve KYC onboarding


Product professionals face the challenge of optimizing the KYC (Know Your Customer) onboarding process to improve conversion rates and efficiency while maintaining strict compliance with regulatory requirements.

If you fail to find this balance, customer acquisition—and, ultimately, revenue—will be affected.

Fortunately, how your company conducts KYC onboarding can seamlessly integrate compliance and user experience.

Full article: https://www.dock.io/post/kyc-onboarding


liminal (was OWI)

Mobile Identity: Charting the Future of Digital Security


In this episode of State of Identity, host Cameron D’Ambrosi welcomes Uku Tomikas, CEO of Messente Communications, for an in-depth discussion on the role of mobile communications within the digital identity landscape. Discover how mobile devices became central to our digital lives as literal authenticators and symbolic representations of our identity. Learn how Messente navigates the changing landscape of mobile identity, combating fraud and enhancing security with innovative technology while uncovering key takeaways on the future of authentication, the impact of SMS OTPs, and the potential of subscriber data in identity verification.

The post Mobile Identity: Charting the Future of Digital Security appeared first on Liminal.co.


OWI - State of Identity

Mobile Identity: Charting the Future of Digital Security


In this episode of State of Identity, host Cameron D’Ambrosi welcomes Uku Tomikas, CEO of Messente Communications, for an in-depth discussion on the role of mobile communications within the digital identity landscape. Discover how mobile devices became central to our digital lives as literal authenticators and symbolic representations of our identity. Learn how Messente navigates the changing landscape of mobile identity, combating fraud and enhancing security with innovative technology while uncovering key takeaways on the future of authentication, the impact of SMS OTPs, and the potential of subscriber data in identity verification.

 


KuppingerCole

May 28, 2024: Identity Sprawl: The New Scourge of IAM

In today's digital landscape, businesses grapple with the pervasive challenge of identity sprawl, a phenomenon that threatens the integrity of Identity and Access Management (IAM) systems. The proliferation of cloud applications and digital resources has led to fragmented user accounts and access points, posing significant security risks and compliance challenges.

Tokeny Solutions

The SILC Group Partners with Tokeny to Pilot Alternative Assets Through Tokenization

The post The SILC Group Partners with Tokeny to Pilot Alternative Assets Through Tokenization appeared first on Tokeny.

9th of April, Luxembourg – The SILC Group (SILC), a leading alternative assets solutions provider based in Australia with more than $2 billion in funds under supervision, announced today it will partner with Tokeny, the leading institutional tokenization solution provider. Together, they aim to realize SILC’s ambitious digitalization vision by upgrading alternative assets onto blockchain through tokenization. 

The collaboration begins with a pilot project aimed at tokenizing a test fund using Tokeny’s unique tokenization infrastructure. The pilot will assess the potential of blockchain to ultimately replace the various legacy centralized fund-administration systems that SILC uses to unite investors and capital seekers within the alternative assets industry.

By combining both parties’ deep expertise, they intend to offer compliant tokenization of alternative assets in Australia and across the region, delivering institutional-grade solutions for the issuance and lifecycle management of real-world asset tokens.

The announcement marks a transformative step within the alternative assets industry as institutional involvement in blockchain ramps up and significant players enter the market. The traditional way of issuing financial products is time-consuming, multi-tiered, and involves many steps using cumbersome systems and processes. Through blockchain technology, parties can benefit from a single, global infrastructure that compliantly improves transaction speeds, enables automation, and is accessible 24/7, 365 days a year.

By using the ERC-3643 token smart contract standard, SILC will have the ability to automate compliance validation processes and control real-world asset tokens while preserving typical features of a blockchain like immutability and interoperability.
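As a sketch of what that control looks like from an integrator’s side, the snippet below checks a recipient against an ERC-3643 token’s identity registry before transferring. The identityRegistry() and isVerified() functions come from the published ERC-3643 interfaces; the addresses and RPC URL are placeholders, and a production integration would use the full T-REX contract suite rather than these minimal ABI fragments.

```typescript
// Minimal sketch (ethers v6): verify a recipient's on-chain identity
// before moving an ERC-3643 security token. Addresses are placeholders.
import { Contract, JsonRpcProvider, Wallet, parseUnits } from "ethers";

const provider = new JsonRpcProvider("https://rpc.example.org");
const signer = new Wallet("0x<private-key-placeholder>", provider);

const token = new Contract(
  "0xTokenAddressPlaceholder",
  [
    "function identityRegistry() view returns (address)",
    "function transfer(address to, uint256 amount) returns (bool)",
  ],
  signer,
);

async function compliantTransfer(to: string, amount: string): Promise<void> {
  const registry = new Contract(
    await token.identityRegistry(),
    ["function isVerified(address user) view returns (bool)"],
    provider,
  );
  // The token enforces this on-chain anyway; checking first simply
  // avoids paying gas for a transfer that would revert.
  if (!(await registry.isVerified(to))) {
    throw new Error("recipient has no verified on-chain identity");
  }
  await token.transfer(to, parseUnits(amount, 18));
}

compliantTransfer("0xRecipientPlaceholder", "100");
```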

“Blockchain technology offers a potential paradigm shift in the efficiency of capital markets, with The SILC Group seeking to pass these efficiency and service improvement gains along to our clients. We are excited to be working with Tokeny on this pilot as we explore ways to further support our clients and enhance risk management activities, as well as increase the velocity and scalability of the solutions we provide.”

Koby Jones, CEO of The SILC Group

“Alternative assets are one of the most suitable assets to be tokenized to make it transparent, accessible, and transferable, which has been historically hard to do. Our collaboration with The SILC Group underscores the growing recognition among regulated institutions of tokenization’s tremendous potential. It’s no longer a question of if tokenization will occur, but rather, when it will fundamentally transform the financial landscape.”

Luc Falempin, CEO of Tokeny

About The SILC Group

The SILC Group is an alternative assets solutions specialist servicing the unique needs of investment managers, asset sponsors and wholesale investors through a distinct portfolio, digital and capital solutions. Since launching in 2012, The SILC Group has become a leading alternative provider of independent wholesale trustee, security trustee, fund administrator, registry, facility agency and licensing services. The SILC Group works alongside sophisticated clients to understand their business, project or asset funding requirements to determine the appropriate solutions to support their future growth plans.

About Tokeny

Tokeny provides a compliance infrastructure for digital assets. It allows financial actors operating in private markets to compliantly and seamlessly issue, transfer, and manage real-world assets using distributed ledger technology. By applying trust, compliance, and control on a hyper-efficient infrastructure, Tokeny enables market participants to unlock significant advancements in the management and liquidity of financial instruments. The company is backed by strategic investors such as Apex Group and Inveniam.


The post The SILC Group Partners with Tokeny to Pilot Alternative Assets Through Tokenization appeared first on Tokeny.

Monday, 08. April 2024

Shyft Network

Almost 70% of all FATF-Assessed Countries Have Implemented the Crypto Travel Rule

- Over two-thirds of countries assessed have enacted or passed the FATF Travel Rule.
- Only 5% of jurisdictions surveyed have explicitly prohibited the use of virtual assets (VAs) and VASPs.
- 68% of jurisdictions have registered or licensed VASPs in practice.

Last month, the Financial Action Task Force (FATF) published a report detailing its objectives and key findings on the status of implementation of Recommendation 15 by FATF Members and Jurisdictions.

The Findings

The report revealed that almost 70% of its member jurisdictions had implemented the FATF Travel Rule.

In North America, the USA and Canada are among those that have fully embraced the Travel Rule, putting in place the needed systems and checks, such as active registration of VASPs, supervisory inspections, and enforcement actions. Mexico is still on the path to full implementation, highlighting the varied progress within the same region.

Europe shows a similarly varied picture, but with many countries demonstrating strong adherence to the FATF recommendations. Nations like Austria, France, and Germany have successfully integrated the rule into their systems, whereas others are still adjusting and refining their approaches to meet the requirements.

Asia shows a vibrant mix of Crypto Travel Rule adoption levels, with countries like Singapore and Japan having taken significant steps towards compliance, including the enactment of necessary legislation and the licensing of VASPs. Meanwhile, other countries like Indonesia and Malaysia are making progress but are not yet fully compliant.

In Latin America, Argentina, Brazil, and Colombia are working towards aligning their regulations with the Travel Rule, with varying degrees of progress. The picture is similar in the Middle East and Africa, where the UAE demonstrates strong progress, whereas countries like Egypt and South Africa are grappling with the challenges of regulatory adaptation and enforcement.

Not the First Time

This is not the first time that the FATF has issued such a report. In mid-2023, a FATF report noted that only a few countries had fully implemented the Travel Rule, underscoring the urgency of doing so. It also shed light on the challenges countries and Virtual Asset Service Providers (VASPs) face in complying with the rule.

The 2023 FATF report not only urged countries to implement the Travel Rule but also pointed out that many existing Travel Rule solutions were not fully capturing and sharing data quickly enough and often failed to cover all types of digital assets. Additionally, these solutions lacked interoperability, making it harder and more costly for VASPs to comply with the Travel Rule.

The Solution

In this evolving regulatory landscape, Shyft Veriscope’s innovative approach aligns with the FATF’s guidelines and offers a robust solution where others may fall short. Furthermore, the recently released User Signing enables VASPs to request cryptographic proof directly from users’ non-custodial wallets, fortifying the self-custody process and enabling seamless Travel Rule compliance for withdrawal transactions.
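To illustrate the general idea behind such wallet-based proofs, here is a minimal sketch using an Ethereum-style signed challenge with the ethers library. The challenge format is hypothetical, and this is a generic pattern, not Veriscope’s actual User Signing protocol:

```typescript
// A minimal sketch of a wallet-ownership proof: the VASP issues a one-time
// challenge, the user's non-custodial wallet signs it locally, and the VASP
// recovers the signer address. Illustrative only -- not Veriscope's protocol.
import { Wallet, verifyMessage } from "ethers";
import { randomUUID } from "node:crypto";

async function main() {
  // VASP side: a challenge tied to the withdrawal request (format invented).
  const challenge = `travel-rule-proof:${randomUUID()}`;

  // User side: sign with the wallet key; the key never leaves the device.
  const userWallet = Wallet.createRandom();
  const signature = await userWallet.signMessage(challenge);

  // VASP side: recover the signer and compare to the claimed address.
  const signer = verifyMessage(challenge, signature);
  console.log("ownership proven:", signer === userWallet.address);
}

main();
```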

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion. To keep up-to-date on all things crypto regulations, sign up for our newsletter and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Almost 70% of all FATF-Assessed Countries Have Implemented the Crypto Travel Rule was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spruce Systems

AI Is The Final Blow For An ID System Whose Time Has Passed

This article was first published in Forbes on March 28, 2024.

Last month, the world got a preview of a looming catastrophe: the use of artificial intelligence (AI) to bypass antiquated identity and security systems. The news outlet 404 Media reported the discovery of an “underground” service called OnlyFake that created and sold fake IDs for 26 countries through Telegram, and one of 404’s reporters used one of OnlyFake’s IDs to bypass the “KYC,” or “know your customer,” process of crypto exchange OKX.

There’s nothing terribly new there, except that OnlyFake claims it uses AI to create the bogus documents. 404 wasn’t able to confirm OnlyFake’s claim to use AI, but OnlyFake’s deep-discount pricing may suggest the claims are real.

Either way, this should be a wake-up call: It’s only a question of when, not if, AI tools will be used at scale to bypass identity controls online.

New AI-Enabled Tools for Online Identity Fraud

The scariest thing about AI-generated fake IDs is how quickly and cheaply they can be produced. The OnlyFake team was reportedly selling AI-generated fake driver’s licenses and passports for $15, claiming they could produce hundreds of IDs simultaneously from Excel data, totaling up to 20,000 fakes per day.

A flood of cheap, convincing fake physical IDs would leave bars, smoke shops and liquor stores inundated with fake-wielding teenagers. But there would be some chance at detection, thanks to anti-fraud features, like holograms, UV images, and microtext, now common on physical ID cards.

But OnlyFake’s products are tailored for use online, making them even more dangerous. When a physical ID is used online, the holograms and other physical anti-fraud measures are rendered useless. OnlyFake even generates fake backdrops to make the images look like photos of IDs snapped with a cell phone.

One tentative method of making online identity more secure is video verification, but new technologies like OpenAI’s Sora are already undermining that method. Deepfakes are frighteningly effective even in live, one-on-one situations, as when a finance staffer was tricked out of $25 million by ‘deepfake’ versions of their own colleagues.

With critical services moving online en masse, digital fraud is becoming even more professionalized and widespread than the offline version.

The Numbers Don’t Add Up, But They Don’t Have To

You might wonder how those generative fakes work without real driver’s licenses or passport numbers. If you submit an AI-generated driver’s license number for verification at a crypto exchange or other financial institution, the identity database would immediately flag it as a fake, right?

Well, not exactly. Police and other state entities can almost always access ID records directly, but those systems don’t give third parties easy access to the underlying databases, partly out of privacy concerns. As a result, many verification systems simply can’t ask the issuing agency whether a driver’s license or ID is valid, which is how 404 Media was able to use an AI-generated card to fool OKX.

A KYC provider might instead rely on third-party data brokers for valid matches, or on pattern-based alphanumeric verification: determining whether an ID number is valid by checking whether it matches the pattern of letters and numbers used by issuers.

This would make such systems particularly vulnerable to AI fakes since detecting and reproducing patterns is where generative AI shines.
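As a toy illustration, assume a hypothetical issuer whose license numbers are one letter followed by seven digits. Any purely syntactic check like the one below accepts every string that fits the pattern, which is exactly what a generative model can learn to produce:

```typescript
// A naive pattern-based check. The format is hypothetical; real issuers use
// their own schemes. The point is that the check accepts *every* string
// matching the pattern -- exactly the kind of structure a generative model
// can learn and reproduce at scale.
const LICENSE_PATTERN = /^[A-Z]\d{7}$/; // one letter + seven digits

function looksValid(licenseNumber: string): boolean {
  return LICENSE_PATTERN.test(licenseNumber);
}

console.log(looksValid("D1234567")); // true, yet this number was never issued
```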

The OnlyFake operation is just one example of a growing fraud problem that exploits flaws in our identity systems. The U.S. estimated losses of $100 billion to $135 billion from pandemic unemployment insurance fraud, which is often perpetrated with false identities. Even scarier, there has been a rise in fake doctors, whether selling fake treatments online or practicing in American hospitals, enabled by identity fraud.

We can do better.

How Do We Fight AI Identity Fraud?

It’s clearly time to develop a new kind of identification credential: a digital ID built for the internet and resistant to AI mimicry. An array of formats and standards is currently being adopted for this new kind of digital ID, such as mDLs (mobile driver’s licenses) and digital verifiable credentials.

At the core of these digital credentials are counterparts to the holograms and other measures that let a bartender verify your physical ID. That includes cryptographic security schemes, similar to what the White House is reportedly considering to distinguish official statements from deepfakes. These cryptographic attestations rely on keys drawn from a space of roughly 10 to the 77th power possible values, making them computationally infeasible for an AI to mimic.
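For a concrete sense of how such an attestation works, here is a minimal sketch using an Ed25519 signature from Node’s built-in crypto module. The credential fields are illustrative, not a specific mDL or verifiable-credential format:

```typescript
// Issue and verify a signed credential. Ed25519 signatures live in a key
// space on the order of 2^256 (~10^77), which is why guessing a valid
// signature is computationally infeasible. Payload fields are illustrative.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issuer (e.g., a DMV) holds the private key; verifiers need only the public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const credential = Buffer.from(JSON.stringify({ over21: true }));
const signature = sign(null, credential, privateKey);

// A verifier (e.g., the liquor store's reader) can check this offline.
console.log("authentic:", verify(null, credential, publicKey, signature));
```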

However, new approaches are not without new risks. While digital ID systems may promise to prevent fraud and provide convenience, they must be implemented carefully to enshrine privacy and security throughout our infrastructure. When implemented without consideration for the necessary policy and data protection frameworks, they may introduce challenges such as surveillance, unwanted storage of personal information, reduced accessibility or even increased fraud.

Fortunately, many mitigations exist. For example, these digital IDs can be bound to physical devices using security chips known as secure elements, which can require the same device to be present whenever the ID is used. This makes digital IDs much harder to steal than copying a file or leaking a digital key from cloud storage. The technology can also be paired with privacy and accessibility laws to ensure safe and simple usage.

This new kind of ID makes it easier for the user to choose what data they reveal. Imagine taking a printed ID into a liquor store and being able to verify to the clerk that you’re over 21—without sharing your address or even your specific birthday. That flexibility alone would greatly increase privacy and security for individuals and society as a whole.

Privacy-preserving technology would also make it safer to verify a driver’s license number directly with the issuer, potentially rendering OnlyFake’s AI-generated fake ID numbers useless.

We’ve already sacrificed too much—in safety and in plain old dollars—by sticking with physical IDs in a digital world. Transitioning to a modernized, privacy-preserving, digital-first system would be a major blow to fraudsters, money launderers and even terrorists worldwide. It’s time.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Verida

Verida Storage Node Tokenomics: Trout Creek Advisory Selected as Cryptoeconomic Design Partner


The Verida Network is a private decentralised database network developing important infrastructure elements for the emerging web3 ecosystem. Private storage infrastructure is an important growth area for web3, and the Verida network is creating a decentralized and hyper-efficient tokenized data economy for private data.

Last year Verida issued a Request for Proposals (RFP) to progress the development and implementation of the Verida Network tokenomic modelling. The RFP called for independent parties to bid on the delivery of analysis, design, modelling, and implementation recommendations for the economic aspects of the protocol.

A very competitive bidder process resulted in 11 bids being received from vendors globally. After an extensive evaluation process, we are excited to announce that Trout Creek Advisory was successful in their bid. They will be working closely with the Verida team as we open network access to storage providers and launch the VDA token and its utility.

Trout Creek Advisory is a full service cryptoeconomic design and strategy consultancy serving web3 native and institutional clients around the globe. At the forefront of cryptoeconomic and token ecosystem design since 2016, its team has both closely observed and actively shaped the evolution of thinking about distributed incentive systems across different industry epochs, narratives, and cycles.

“We’ve followed the technical progress of the Verida team for several years, and are excited to see them reach this stage. We’re delighted to now be able to leverage our own expertise towards the development of their token ecosystem, and to create a sustainable structure that will help ensure the protocol’s growth and most effectively enable private, distributed storage infrastructure for the broader community,” said Brant Downes, Trout Creek Co-Founder.

“Verida’s Tokenomics RFP process resulted in many high-quality submissions. Ultimately we chose Trout Creek given their strong proposal and the deep engagement they demonstrated throughout the process. They identified and addressed the specific needs of Verida’s cryptoeconomics, and through the evaluation process emerged as best aligned to engage for this service,” said Ryan Kris, COO & Co-Founder.

We would like to thank Dave Costenaro from Build Well who managed the bid process in conjunction with the Verida team. Dave previously worked at the CryptoEconLab at Protocol Labs, focusing on token incentives, tokenomics, mechanism design, and network monitoring for Filecoin.

We deeply thank all the teams who took time to submit bids for the RFP. The quality of submissions was extremely high, and demonstrates the growing maturity of work being conducted on token design in the crypto industry. We look forward to working with Trout Creek and will be sharing more updates as we progress the tokenomics research.

About Verida

Verida is a pioneering decentralized data network and self-custody wallet that empowers users with control over their digital identity and data. With cutting-edge technology such as zero-knowledge proofs and verifiable credentials, Verida offers secure, self-sovereign storage solutions and innovative applications for a wide range of industries. With a thriving community and a commitment to transparency and security, Verida is leading the charge towards a more decentralized and user-centric digital future.

Verida Missions | X/Twitter | Discord | Telegram | LinkedIn | LinkTree

Verida Storage Node Tokenomics: Trout Creek Advisory Selected as Cryptoeconomic Design Partner was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Cyber Risk Frameworks in 2024


by Osman Celik

The landscape of cybersecurity is continually evolving, with new threats and technologies reshaping the way organizations protect their digital assets. To understand the significance of these changes, it is crucial to understand the evolving cyber threat landscape, which acts as the driving force behind cyber risk framework improvements. In this Advisory Note, we explore the latest revisions and updates to prominent cyber risk frameworks, including NIST CSF 2.0, the ISO/IEC 27000 series, SOC 2, CIS, PCI-DSS 4.0, and CSA CCM. Investigating these frameworks and their adaptations enables practitioners to gain valuable insights into the emerging practices and standards that are essential to mitigating risk and ensuring the security of sensitive data.

Metadium

Keepin Wallet Application Termination


Dear Community,

As of September 30, 2024, the integrated personal information wallet Keepin app will be discontinued. Keepin was provided for the convenience of META deposits/withdrawals and ID creation and management on the Metadium blockchain. With app support ending, we recommend extracting and storing your recovery code (Mnemonic) to maintain and protect your assets. Additionally, downloads of the Keepin app will be discontinued as of April 30, 2024, so if you hold META assets, be sure to back up your recovery code (Mnemonic).

[How to back up the Mnemonic in the Keepin app]

1. On the Keepin main screen, click [More] in the bottom right.

2. Click [Check recovery information].

3. Click [Mnemonic].

4. Enter your PIN number.

5. Copy and store the confirmed recovery code (Mnemonic); with it, you can recover your META assets.

Keeping the recovery code is essential to recovering your saved information and Meta ID.

Metadium does not retain any information about Keepin users. This means that if the user loses both the recovery code and private key, Metadium cannot recover the META balance or information. Please note that once service support ends, the recovery code will no longer be available.

Write down and store the recovery code and private key in a safe place, and do not share them with others.

If you want to use Metadium assets held in your existing Keepin wallet, you must connect MetaMask to the Metadium mainnet and transfer the assets.

[How to connect Metadium mainnet to MetaMask]

MetaMask is an Ethereum-based software wallet used by people around the world.

MetaMask is connected to the Ethereum mainnet by default, and you can add and change Metadium mainnet networks. You can add the Metadium network from the network selection menu at the top or the settings page and connect it to your MetaMask wallet.

1. From the Networks menu at the top of the page, select “Ethereum Mainnet” and click “Add Network.”

2. Use the information below to fill out the form (a programmatic alternative is sketched after this list):

- Network name: META Mainnet

- New RPC URL: https://api.metadium.com/prod

- Chain ID: 11

- Currency Symbol: META

- Block Explorer URL: https://explorer.metadium.com/

3. Metadium mainnet has been added to your networks
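For developers, the same network can also be added programmatically from a dapp page using MetaMask’s standard wallet_addEthereumChain request (EIP-3085). This sketch reuses the values above and assumes the native currency uses the standard 18 decimals required by that API:

```typescript
// Add the Metadium mainnet to MetaMask programmatically. Chain ID 11 is 0xb
// in hex; nativeCurrency.decimals must be 18 per the wallet_addEthereumChain spec.
declare const window: {
  ethereum: { request(args: { method: string; params?: unknown[] }): Promise<unknown> };
};

await window.ethereum.request({
  method: "wallet_addEthereumChain",
  params: [{
    chainId: "0xb", // 11
    chainName: "META Mainnet",
    rpcUrls: ["https://api.metadium.com/prod"],
    nativeCurrency: { name: "META", symbol: "META", decimals: 18 },
    blockExplorerUrls: ["https://explorer.metadium.com/"],
  }],
});
```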

We sincerely thank everyone who has used the Keepin app so far, and we ask for your understanding as we are no longer able to provide the service. We are committed to working even harder to deliver an improved app experience.

Metadium Team


Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Keepin Wallet Application Termination was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

ForgeRock Software Release 7.5 | Ping Identity

In today’s competitive environment, businesses heavily rely on a robust identity and access management (IAM) platform to attract and maintain customers and ensure protection for the organization and its customers from cyberthreats. Central to this strategy is the provision of a comprehensive suite of IAM tools and resources. These resources are critical for the seamless creation and management of secure and compliant applications. The accessibility of these tools from a single provider simplifies management for administrators and empowers developers to efficiently build secure solutions.

Together as one combined Ping Identity, we are committed to supporting, developing, and innovating the core platforms for Ping and ForgeRock to deliver the most robust IAM offering available to benefit our customers. In support of this mission, it is with great enthusiasm that Ping Identity unveils the ForgeRock Software version 7.5. This release includes a host of innovative features designed to empower our self-managed software customers. It enables organizations to integrate and leverage more of the Ping capabilities within the combined portfolio of IAM services, elevates security and compliance measures, and enhances the experiences of developers and administrators. The ForgeRock Software Version 7.5 release includes:

ForgeRock Access Management 7.5

ForgeRock Identity Management 7.5

ForgeRock Directory Services 7.5 

ForgeRock Identity Gateway 2024.3 


Types of Bank Fraud And How to Prevent Them (With Examples)

Bank fraud is becoming more prevalent, with sophisticated attacks resulting in both financial and reputational damage. One study reports that over 70% of financial institutions lost at least $500,000 to fraudulent activity in 2022. The hardest-hit institutions were fintech companies and regional banks.

On top of that, the financial services industry is becoming increasingly regulated, particularly when it comes to verifying customer identities and incorporating anti-money laundering protocols.

Establishing mutual trust between customers and financial institutions goes a long way in preventing bank fraud. With the right practices in place, users enjoy a frictionless experience while financial institutions can prevent multiple types of consumer fraud using increased identity verification and monitoring — all while staying compliant with federal regulations.


Verida

Top Three Data Privacy Issues Facing AI Today


Written by Chris Were (Verida CEO & Co-Founder) and originally published on DailyHodl.com, this post is Part 1 of a Privacy / AI series. See Part 2: How Web3 and DePIN Solves AI’s Data Privacy Problems.

AI (artificial intelligence) has caused frenzied excitement among consumers and businesses alike — driven by a passionate belief that LLMs (large language models) and tools like ChatGPT will transform the way we study, work and live.

But just like in the internet’s early days, users are jumping in without considering how their personal data is used — and the impact this could have on their privacy.

There have already been countless examples of data breaches within the AI space. In March 2023, OpenAI temporarily took ChatGPT offline after a ‘significant’ error meant users were able to see the conversation histories of strangers.

That same bug meant the payment information of subscribers — including names, email addresses and partial credit card numbers — were also in the public domain.

In September 2023, a staggering 38 terabytes of Microsoft data was inadvertently leaked by an employee, with cybersecurity experts warning this could have allowed attackers to infiltrate AI models with malicious code.

Researchers have also been able to manipulate AI systems into disclosing confidential records.

In just a few hours, a group called Robust Intelligence was able to solicit personally identifiable information from Nvidia software and bypass safeguards designed to prevent the system from discussing certain topics.

Lessons were learned in all of these scenarios, but each breach powerfully illustrates the challenges that need to be overcome for AI to become a reliable and trusted force in our lives.

Gemini, Google’s chatbot, even admits that all conversations are processed by human reviewers — underlining the lack of transparency in its system.

“Don’t enter anything that you wouldn’t want to be reviewed or used,” an alert warns users.

AI is rapidly moving beyond a tool that students use for their homework or tourists rely on for recommendations during a trip to Rome.

It’s increasingly being depended on for sensitive discussions — and fed everything from medical questions to our work schedules.

Because of this, it’s important to take a step back and reflect on the top three data privacy issues facing AI today, and why they matter to all of us.

1. Prompts aren’t private

Tools like ChatGPT memorize past conversations in order to refer back to them later. While this can improve the user experience and help train LLMs, it comes with risk.

If a system is successfully hacked, there’s a real danger of prompts being exposed in a public forum.

Potentially embarrassing details from a user’s history could be leaked, as well as commercially sensitive information when AI is being deployed for work purposes.

As we’ve seen from Google, all submissions can also end up being scrutinized by its development team.

Samsung took action on this in May 2023 when it banned employees from using generative AI tools altogether. That came after an employee uploaded confidential source code to ChatGPT.

The tech giant was concerned that this information would be difficult to retrieve and delete, meaning IP (intellectual property) could end up being distributed to the public at large.

Apple, Verizon and JPMorgan have taken similar action, with reports suggesting Amazon launched a crackdown after responses from ChatGPT bore similarities to its own internal data.

As you can see, the concerns extend beyond what would happen if there’s a data breach but to the prospect that information entered into AI systems could be repurposed and distributed to a wider audience.

Companies like OpenAI are already facing multiple lawsuits amid allegations that their chatbots were trained using copyrighted material.

2. Custom AI models trained by organizations aren’t private

This brings us neatly to our next point — while individuals and corporations can establish their custom LLM models based on their own data sources, they won’t be fully private if they exist within the confines of a platform like ChatGPT.

There’s ultimately no way of knowing whether inputs are being used to train these massive systems — or whether personal information could end up being used in future models.

Like a jigsaw, data points from multiple sources can be brought together to form a comprehensive and worryingly detailed insight into someone’s identity and background.

Major platforms may also fail to offer detailed explanations of how this data is stored and processed, with an inability to opt out of features that a user is uncomfortable with.

Beyond responding to a user’s prompts, AI systems increasingly have the ability to read between the lines and deduce everything from a person’s location to their personality.

In the event of a data breach, dire consequences are possible. Incredibly sophisticated phishing attacks could be orchestrated — and users targeted with information they had confidentially fed into an AI system.

Other potential scenarios include this data being used to assume someone’s identity, whether that’s through applications to open bank accounts or deepfake videos.

Consumers need to remain vigilant even if they don’t use AI themselves. AI is increasingly being used to power surveillance systems and enhance facial recognition technology in public places.

If such infrastructure isn’t established in a truly private environment, the civil liberties and privacy of countless citizens could be infringed without their knowledge.

3. Private data is used to train AI systems

There are concerns that major AI systems have gleaned their intelligence by poring over countless web pages.

Estimates suggest 300 billion words were used to train ChatGPT — that’s 570 gigabytes of data — with books and Wikipedia entries among the datasets.

Algorithms have also been known to depend on social media pages and online comments.

With some of these sources, you could argue that the owners of this information would have had a reasonable expectation of privacy.

But here’s the thing — many of the tools and apps we interact with every day are already heavily influenced by AI — and react to our behaviors.

The Face ID on your iPhone uses AI to track subtle changes in your appearance.

TikTok and Facebook’s AI-powered algorithms make content recommendations based on the clips and posts you’ve viewed in the past.

Voice assistants like Alexa and Siri depend heavily on machine learning, too.

A dizzying constellation of AI startups is out there, and each has a specific purpose. However, some are more transparent than others about how user data is gathered, stored and applied.

This is especially important as AI makes an impact in the field of healthcare — from medical imaging and diagnoses to record-keeping and pharmaceuticals.

Lessons need to be learned from the internet businesses caught up in privacy scandals over recent years.

Flo, a women’s health app, was accused by regulators of sharing intimate details about its users with the likes of Facebook and Google in the 2010s.

Where do we go from here?

AI is going to have an indelible impact on all of our lives in the years to come. LLMs are getting better with every passing day, and new use cases continue to emerge.

However, there’s a real risk that regulators will struggle to keep up as the industry moves at breakneck speed.

And that means consumers need to start securing their own data and monitoring how it is used.

Decentralization can play a vital role here and prevent large volumes of data from falling into the hands of major platforms.

DePINs (decentralized physical infrastructure networks) have the potential to ensure everyday users experience the full benefits of AI without their privacy being compromised.

Not only can encrypted prompts deliver far more personalized outcomes, but privacy-preserving LLMs would ensure users have full control of their data at all times — and protection against it being misused.

Chris Were is the CEO of Verida, a decentralized, self-sovereign data network empowering individuals to control their digital identity and personal data. Chris is an Australian-based technology entrepreneur who has spent more than 20 years devoted to developing innovative software solutions.

Top Three Data Privacy Issues Facing AI Today was originally published in Verida on Medium, where people are continuing the conversation by highlighting and responding to this story.


TBD

DID DHT: Ready For Primetime

Learn how the DID:DHT method was created and why it's the default method of Web5 and tbDEX.

Digital identity shapes every facet of our online interactions. The quest for a system that balances decentralization, scalability, security, and user experience has been relentless. Today, I'm thrilled to share that TBD has birthed a new solution: the DID DHT method. This leap forward is not just a technical achievement; it's a foundational component for the more inclusive and trust-based digital world we want for tomorrow.

The specification is nearing its first major version, and our team has produced client software in all of our open source SDKs in more than five languages. We have also built and deployed a publicly-available developer gateway, which has already registered many thousands of DIDs, with staging and production-ready gateways coming soon.

Some of you may already be familiar with DID DHT and why we’ve chosen to make it our default DID method, but if not, or if you’re curious to find out more, read on to learn more about how the method was created, and how we’ve ended up here today.

What Makes a Good DID Method?

Our vision for a superior DID method hinged on several critical features:

Sufficient Decentralization: a foundational principle to mitigate censorship and enhance user autonomy.

Scalability and Accessibility: making digital identity accessible to billions, without a prohibitive cost barrier.

Comprehensive Feature Set: supporting multiple cryptographic key types, services, and other core DID properties.

Reliable and Verifiable History: enabling trust through transparent historical data.

Global Discoverability: facilitating easy access and independent verification of digital identifiers.

The Evolution of Our DID Strategy

Historically, our software has supported did:key, did:web, did:ion, and some other methods within certain segments of our stack. Recognizing the impracticality of a "one-size-fits-all" approach, we embraced a multi-method strategy. Today that strategy incorporates three key DID methods: did:jwk, did:web, and did:dht, each catering to specific scenarios with their unique strengths.

Within our SDKs, did:jwk replaces did:key; it is a simple and widely adopted method that uses standard JSON Web Keys (JWKs), avoiding the complexity of did:key’s multiformat-based approach. DID Web is an obvious choice for entities with existing brands, as trust can be easily linked to existing domains without the use of any special or complex technology. DID DHT is a new method that we view as a replacement for ION; it has strong decentralization characteristics, a robust feature set, and a simpler architecture.

Leaping Beyond ION

The biggest change we’ve made is going from ION to DID DHT.

ION is a Sidetree-based DID method that operates as an L2 on the Bitcoin blockchain. ION is a fully-featured DID method, and one of the few that supports root key rotation and discoverability of all DIDs with complete historical state. However, three main reasons led us to move away from ION: architectural complexity, centralization risk, and a sub-optimal user experience.

While ION presented a robust DID method with admirable features, its complexity, centralization risks, and user experience challenges prompted us to explore alternatives. DID DHT stands out as our choice for a new direction, offering simplicity, enhanced decentralization, and a user-friendly approach without compromising on the core features essential for a comprehensive DID method.

Enter DID DHT

DID DHT is built upon Pkarr, a community-created project. Pkarr stands for Public Key Addressable Resource Records and acts as a bridge between our DID method and the Mainline Decentralized Hash Table, overlaying DNS record semantics and providing a space-efficient encoding mechanism for DID Documents. The choice of Mainline DHT, with its over 20 million active nodes and a 15-year successful run, guarantees exceptional decentralization. DID DHT works out of the box with Mainline nodes, but it can also leverage Pkarr nodes and DID DHT gateways for additional functionality.

DID DHT trades off use of a blockchain for immediate decentralization, fast publishing and resolution, and trustless verification. That’s right — the DID Document’s records are signed by its own key before entering the DHT — there’s no need to trust nodes, as you can verify payloads yourself client-side. Similar to ION, DID DHT documents support multiple keys, services, type indexing, and other DID Document properties.

DID DHT, however, is not without its limitations. The main three are: the need to republish records to nodes regularly to prevent them from expiring from the DHT; reliance on a non-rotatable Ed25519 key called the “identity key,” as required by the DHT and BEP44; and historical DID state not being shared between nodes. Our ongoing development and the community-driven enhancements aim to address these challenges, refining DID DHT’s architecture for broader applicability and reliability.
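The identity key limitation follows from how the identifier is constructed: as I read the draft spec, the method-specific ID is simply the z-base-32 encoding of the raw 32-byte Ed25519 public key, so the DID and the key are inseparable. Here is a minimal sketch of that derivation (record signing and publication are omitted):

```typescript
// Derive a did:dht identifier from a raw Ed25519 public key by z-base-32
// encoding it (Zooko's base32 alphabet). Because the identifier *is* the
// key, the identity key cannot be rotated without changing the DID.
const ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769";

function zbase32(bytes: Uint8Array): string {
  let acc = 0, bits = 0, out = "";
  for (const b of bytes) {
    acc = (acc << 8) | b;
    bits += 8;
    while (bits >= 5) {
      out += ZB32[(acc >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  if (bits > 0) out += ZB32[(acc << (5 - bits)) & 31]; // pad the final group
  return out;
}

const rawPublicKey = new Uint8Array(32); // stand-in for a real public key
console.log(`did:dht:${zbase32(rawPublicKey)}`); // 52-character identifier
```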

DID DHT Opportunities

One of the most interesting opportunities we have identified for DID DHT is interoperability with existing single-key methods like did:key and did:jwk. Essentially, you can continue to use single-key methods as you do today, with an optional resolution step to the DHT to extend the functionality of these methods, like adding additional keys or service(s) to a DID Document. We have begun to define this interoperability in the method’s registry.

Another interesting opportunity for DID DHT is with the W3C DID working group, which is currently going through a rechartering effort to focus on interoperability and DID Resolution. Depending on how the charter ends up, there could be an opportunity to promote the DID DHT method as one that is broadly interoperable, decentralized, and does not necessitate the use of a blockchain — a common critique of DIDs by community members in the past.

We have additional ideas such as associating human-readable names with DIDs, tying DIDs into trust and discoverability services, and supporting gossip between DID DHT gateways. We encourage you to join us on GitHub to continue the discussion.

Looking Forward

The introduction of DID DHT represents a significant milestone in our journey toward a more decentralized, accessible, and secure digital identity landscape. We believe DID DHT has the characteristics to make it widely useful, and widely adopted. Try it out for yourself, and let us know what you think.

As we continue to refine and expand its capabilities, we invite the community to join us in this endeavor, contributing insights, feedback, and innovations to shepherd DID DHT toward becoming your default DID method.

Sunday, 07. April 2024

KuppingerCole

Analyst Chat #209: Behind the Screens - A Day in the Life of a Tech Analyst


In this episode Matthias welcomes Osman Celik, a research analyst with KuppingerCole Analysts, to uncover the daily life and career dynamics within the tech analysis industry. They take a glimpse into Osman’s day-to-day activities, exploring the challenges and highlights of being a tech analyst. They discuss essential pathways for entering the tech analysis field, including the qualifications and experiences that bolster a candidate’s profile.

Osman offers deep insights into the critical skills and attributes necessary for success in this role, addressing common misconceptions and highlighting the aspects that make the job fascinating. Furthermore, the conversation navigates through the evolving landscape of tech analysis, providing listeners with strategic advice for nurturing a long-term career in this ever-changing sector.



Friday, 05. April 2024

Spherical Cow Consulting

The Evolving Landscape of Non-Human Identity

This blog entry explores the insane world of non-human identity, a subject as complicated as the world’s many cloud computing environments. My journey from the early days of digital identity management to the revelations at IETF 119 serves as the backdrop, and I share what I’m learning based on those experiences. The post zips through the labyrinth of authorization challenges that processes and APIs face across cloud environments.

I’ve recently started to explore the world that is non-human identity. Coming out of IETF 119, where some of the best (and most terrifying) hallway conversations were about physical and software supply chains, I realized this was a space that didn’t look like what I’d experienced early in my digital identity career.

When I was first responsible for the group managing digital identity for a university, our issues with non-human identity centered around access cards. These were provisioned in the same system that provisioned employee badges: the HR system. Having entities that were not people in a people-driven system always involved annoying policy contortions. It was like fitting a square peg into a round hole.

That kind of non-human identity has nothing to do with what I learned at IETF 119. Personally, I blame cloud computing.

Cloud Computing

Depending on the person you’re talking to, cloud computing is often seen as the answer to a company’s computing cost and efficiency problems. Rather than paying for a data center, equipment, network capacity, and all the staff that goes with it, the company can buy only the computing power it needs, and the rest magically disappears. IT staff tend to look at cloud computing another way: the same headaches, just more complicated to manage because they are on someone else’s computers.

However, cost and some dimensions of efficiency drive business decisions, and cloud computing has only become more attractive. Companies regularly purchase cloud computing services from a variety of vendors: a cloud here at Microsoft Azure, a cloud there at Amazon Web Services for database computing, and another at Google for storage. It looks fantastic on paper (or at least on a monitor), but it has introduced big identity problems as a result.

Authorization in the Cloud

Processes and APIs are not people. They don’t get hired and fired over the course of years like people do. They may never be associated with a person at all. And yet, they usually have a start and an end. They need to be constrained to access or accomplish only the specific things they are supposed to do. They may delegate specific tasks to other processes. They need authorization and access control at a speed and scale that makes human authorization look like a walk in the park.

If all authorizations happen in one environment, it’s not too bad. Everything gets its data from the same sources and in the format it expects. It might be like those old key cards managed via a people system, even though they aren’t themselves people, but it is a simple enough model.

Authorization in Many Clouds

However, if the authorization has to happen across environments, things get hairy. The format of the code may change. The source of truth may vary depending on where the process started. There may be hundreds, thousands, millions more of these processes and APIs than there are people in the company. And these processes are almost entirely independent of any human involvement.

This new kind of non-human identity operates in ways human identities don’t. Batch processing is a great example, since the process does not necessarily act on behalf of a user. Training an AI model is batch processing that runs for a week with no human involved. Batch transactions in the bank, such as payroll, run unsupervised and aren’t handled by a person. Furthermore, a human may be flagged by a computer’s security system as showing strange behavior when they are logging in from both Brisbane and Chicago simultaneously. An application, in all its glory, may suddenly expand to be in data centers around the world because it’s dealing with a Taylor Swift concert sale. What would be anomalous for a person is just another day in cloud computing.

Sorry, HR badge system, you’re just not suited for managing authorization here.

DevOps, IT Admins, and IAM Practitioners

Developing and deploying code in a cloud environment is generally in the hands of the DevOps and IT teams. DevOps traditionally move quickly to develop and manage the applications a company needs to deliver its products or services. IT teams deal with the applications that have been developed and deployed. The staff in these groups often specialize in one cloud computing environment or another; it’s not easy to be a generalist in this space.

DevOps is often the fastest in terms of getting new code in place, and the IT admin tries to deal with the symptoms of what’s been developed and deployed. Neither group tends to think in terms of identity management; most IAM teams are focused on people. This is a problem.

Identity administrators are beginning to understand there is an authorization problem, but solutions are sparse. DevOps teams also realize they have an identity problem (ask your favorite DevOps person how much fun it is to manage API permissions without leaking a password or private key). But the DevOps team is not going to go to the IAM people and say, “Create all these millions of identities and manage them for us, kthxbai.” For one thing, it wouldn’t occur to them. For another, the IAM staff would have a nervous breakdown.

Standards to the Rescue!

And here’s where the hallway conversations at IETF 119 enter the story. The whole reason I learned about the authorization-in-the-cloud problem was because of discussions around two working groups and an almost-working group:

Supply Chain Integrity, Transparency, and Trust (scitt)

Workload Identity in Multi System Environments (wimse)

Secure Patterns for Internet CrEdentials (spice)

SCITT

The big picture here is the software supply chain. A software supply chain is the collection of components, libraries, tools, and processes used to develop, build, and publish software. Software is very rarely one monolithic thing. Instead, it’s made up of lots of different components. Some may be open-source libraries, and some may be proprietary to the company selling them.

Just like a physical bill of materials is required when importing or exporting physical goods, there is also a software bill of materials (SBOM) that is supposed to list all the components of a software package. Now, wouldn’t it be fantastic if, based on a computer-readable, standardized format of an SBOM, a computer could decide in real-time whether a particular software package was safe to run based on a quick check for any severe security vulnerabilities associated with any of the components listed in the SBOM?

It’s an entirely different way of looking at authorization, and that’s what scitt is working on.
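Here is a minimal sketch of what such a real-time gate could look like, assuming a simplified, CycloneDX-like SBOM shape and a stand-in vulnerability feed; both are illustrative placeholders, not the formats scitt is defining:

```typescript
// Gate execution on an SBOM check: refuse to run a package if any listed
// component matches a known severe vulnerability. The SBOM shape and the
// vulnerability feed are simplified stand-ins for this sketch.
interface Component { name: string; version: string }
interface Sbom { components: Component[] }

const severeVulns = new Set(["log4j-core@2.14.1"]); // stand-in feed

function safeToRun(sbom: Sbom): boolean {
  return sbom.components.every(
    (c) => !severeVulns.has(`${c.name}@${c.version}`),
  );
}

const sbom: Sbom = { components: [{ name: "log4j-core", version: "2.14.1" }] };
console.log("safe to run:", safeToRun(sbom)); // false -> block execution
```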

WIMSE

Workload identity is a term that’s still struggling to find a common, industry-wide definition (not an unusual problem in the IAM space). I do like Microsoft’s definition, though: “an identity you assign to a software workload (such as an application, service, script, or container) to authenticate and access other services and resources.”

I mentioned earlier that there can be a ridiculous number of applications and services running across multiple cloud environments. DevOps gets to develop and deploy those, but IT admins need to keep track of all the signals from all the services to make sure everything is running as expected and uninfluenced by hackers. There needs to be a standardized format for the signals all these workloads will send, regardless of any particular cloud environment.

Enter WIMSE. WIMSE is standardizing secure identity presentation for workload components to enable least-privilege access and to obtain signals from audit trails, so that IT admins get visibility into and exercise control over workload identities. To make it more challenging, this must be platform agnostic, because no one is deploying single-platform environments.
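To make that concrete, here is an illustrative shape for a workload identity assertion. WIMSE had not finalized a format at the time of writing, so every field here is an assumption; the spiffe://-style identifier borrows the existing SPIFFE convention:

```typescript
// An illustrative claims set for a workload identity token: the subject is
// a workload, not a person; the token is short-lived and scoped to one
// audience so least-privilege access and audit trails are possible.
interface WorkloadIdentityClaims {
  sub: string; // the workload itself
  iss: string; // the platform that attested to the workload
  aud: string; // the service the workload wants to call
  exp: number; // expiry -- short-lived by design
}

const claims: WorkloadIdentityClaims = {
  sub: "spiffe://example.org/payments/batch-payroll",
  iss: "https://attestation.example.org",
  aud: "https://ledger.internal.example.org",
  exp: Math.floor(Date.now() / 1000) + 300, // five minutes
};

console.log(JSON.stringify(claims, null, 2));
```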

SPICE

Sometimes, processes and APIs have nothing to do with humans. But sometimes, they do. They might run as a person in order to do something on that person’s behalf. In those cases, it would make life a lot easier to have a credential format that is lightweight enough to support the untold number of workload identities out there AND the human identities that might exist in the same complex ecosystem.

Here is where the proposed working group, spice, sits. Personally, I think it might have the hardest job of the three. While standardizing a common format makes a lot of sense, we can’t ignore that with human identities, issues like privacy, identity verification and proofing, and revocation are an incredibly big deal. Those same issues, however, either don’t apply or don’t apply in the same way for workload identities. If you insist on constraining the credential to be entirely human in its concerns, it’s too burdensome for the millions of apps and processes to handle at the speeds necessary. If you don’t constrain the credentials, security problems may creep in if the developers misuse the credentials.

So, of course, this is the group I volunteered to co-chair. I’m insane.

Wrap Up

Non-human identity is identity at an unprecedented scale. It’s a whole new world because there are many more workload instances than users. The same tooling and standards for human identity are not designed to operate at this new scale or velocity.

I have a lot more to learn in this space, and one person I follow (literally, I will chase him down in hallways) who knows a LOT about this stuff is Pieter Kasselman (Microsoft). He and several others are engaged within the IETF and the broader world to make sense of this complicated and anxiety-inducing space. If you work in IAM and you default to only thinking about the people in your organization, I’m afraid you need to start thinking much more broadly about your field. If you need a place to start, come to Identiverse 2024 or IETF 120 and hang out with me as we all learn about the non-human identity landscape. 

I love to receive comments and suggestions on how to improve my posts! Feel free to comment here, on social media, or whatever platform you’re using to read my posts! And if you have questions, go check out Heatherbot and chat with AI-me

The post The Evolving Landscape of Non-Human Identity appeared first on Spherical Cow Consulting.


Northern Block

Empowerment Tech: Wallets, Data Vaults and Personal Agents (with Jamie Smith)


🎥 Watch this Episode on YouTube 🎥
🎧   Listen to this Episode On Spotify   🎧
🎧   Listen to this Episode On Apple Podcasts   🎧

About Podcast Episode

Are you tired of feeling like a passive bystander in your digital interactions? What if there was a way to take control and shape your online experiences to work for you? In this thought-provoking episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Jamie Smith, Founder and CEO of customerfutures.com, to explore the exciting world of empowerment tech.

Empowerment tech promises to put power back into individuals’ hands, allowing them to take an active role in their digital lives. Jamie delves into empowerment tech, which encompasses tools like digital wallets, verifiable credentials, and personal AI agents designed to help customers get things done on their terms.

Some of the valuable topics discussed include:

Understanding the alignment of incentives between businesses and customers

Exploring the role of regulators in promoting empowerment tech

Uncovering the potential of Open Banking and Open Finance

Envisioning the future of personal AI agents and their impact on customer experiences

Take advantage of this opportunity to gain insights into the cutting-edge world of empowerment tech and how it could revolutionize how we interact with businesses and services. Tune in now!

 

Key Insights

Empowerment tech puts the individual at the center, enabling them to make better decisions and get things done on their terms.

Aligning incentives between businesses and customers is crucial for creating sustainable and valuable relationships.

Regulators play a vital role in promoting empowerment tech by shaping the environment for individuals to be active participants.

Open Banking and Open Finance are potential trigger points for empowerment tech, enabling individuals to control and share their financial data securely.

Personal AI agents trained on an individual’s data can provide personalized recommendations and insights, creating immense value.

Strategies

Implementing digital wallets and verifiable credentials as foundational tools for empowerment tech.

Leveraging small language models (SLMs) tailored to an individual’s data and needs.

Building trust through ethical design patterns and transparent data practices.

Exploring new commercial models and incentive structures that align with empowerment tech principles.

Process

Evaluating the alignment of incentives between businesses and customers to identify potential friction points.

Designing digital experiences that prioritize the individual’s needs and goals.

Implementing governance frameworks to define reasonable data-sharing practices for different transactions.

Establishing trust through transparent onboarding processes and clear communication of data practices.

Chapters:

00:02 Defining Empowerment Tech

02:01 Aligning Incentives Between Customers and Businesses 

04:41 The Role of Regulators in Promoting Empowerment Tech 

07:57 The Potential of Open Banking and Open Finance

09:39 The Rise of Personal AI Agents 

16:21 Wallets, Credentials, and the Future of Digital Interactions 

21:50 Platforms, Protocols, and the Economics of Empowerment Tech 

28:47 Rethinking User Interfaces and Device-Centric Experiences

35:22 Generational Shifts and the Future of Digital Relationships 

41:16 Building Trust Through Design and Ethics

Additional resources:

Episode Transcript

Customer Futures

‘Open banking’ may soon be in Canada. Here’s what it means — and how it would save you money

Five Failed Blockchains: Why Trade Needs Protocols, Not Platforms by Timothy Ruff

Platform Revolution: How Networked Markets are Transforming the Economy – and How to Make Them Work For You

Projects by IF

About Guest

Jamie Smith is the Founder and CEO of Customer Futures Ltd, a leading advisory firm helping businesses navigate the opportunities of disruptive and customer-empowering digital technologies. With over 15 years of experience in digital identity, privacy, and personal AI, Jamie is a recognized expert in the empowerment tech movement. He is passionate about creating new value with personal data and empowering consumers with innovative digital tools. Jamie regularly shares his insights on the future of digital experiences through his weekly Customer Futures Newsletter.

Website: customerfutures.com

LinkedIn: linkedin.com/in/jamiedsmith

The post Empowerment Tech: Wallets, Data Vaults and Personal Agents (with Jamie Smith) appeared first on Northern Block | Self Sovereign Identity Solution Provider.


auth0

A Customer Identity Migration Journey

Upgrading made easy: from in-house authentication to modern login flows, flexible user profiles, and the convenience and security of passkeys.

Indicio

Senior Software Engineer (Remote)


Senior Software Engineer (Remote)

Job Description

We are the world’s leading verifiable data technology. We bring complete solutions that fit into an organization’s existing technology stack, delivering secure, trustworthy, verifiable information. Our Indicio Proven® flagship product removes complexity and reduces fraud. With Indicio Proven® you can build seamless processes to deliver best-in-class verifiable data products and services.

As a rapidly growing start up we need team members who can work in a fast paced environment, produce high quality work on time, work without supervision, show initiative, innovate, and be laser focused on results. You will create lasting impact and see the results of your work immediately. 

The ideal candidate will have experience in designing and coding software and user interfaces for decentralized identity applications using Node.js, Express.js, and React.js.

We have weekly sprints, daily standups, occasional pair programming sessions, and weekly game sessions. We have optional opportunities for mentoring others, community outreach, and team leadership. This is a full-time US-based position with company benefits including:

Subsidized healthcare
Matching 401k
Unlimited PTO
14 federal paid days off

Indicio is a fully remote team (our Maryland colleagues have a co-working space) and our clients are located around the world. Working remotely requires you to be self-motivated, a demonstrated team-player, and have outstanding communication skills. 

We do not conduct live coding interviews, but we do like to talk about your favorite projects and may ask for code samples if you are shortlisted.

Responsibilities

Understand requirements, design and scope features, and generate stories, tasks, and estimates
Work with other team members to coordinate work and schedules
Write high-quality software and tests
Assist our testing team to document features and create testing procedures
Spend time handling Jira and navigating Slack

Required Skills

Expert in JavaScript
Deep experience with Node.js, Express.js, and React.js
Expert in using git, docker, bash
5+ years relevant work experience
Must live in and be legally able to work in the US. We cannot sponsor work visas at this time.
Understanding of basic cryptography principles (hashing, symmetric and asymmetric encryption, signatures, etc.)

Nice-to-Haves (Not Required)

Understanding of basic blockchain principles, verifiable credentials, and/or SSI
Experience contributing to open source software projects
Experience working in an agile team
Working understanding of WebSockets
Experience with RESTful APIs
Functional skills with curl / Postman
Well-formed opinions on state management
Comfortable using Linux/Unix environments
Utilization of TDD methodologies

We highly encourage candidates of all backgrounds to apply to work with us – we recruit based on more than just official qualifications, including non-technical experience, initiative, and curiosity. We aim to create a welcoming, diverse, inclusive, and equitable environment for all.

As a Public Benefit Corporation, a women-owned business, and a WOSB-certified company, Indicio is committed to advancing decentralized identity as a public good that enables all people to control their online identities and share their data by consent.

Apply today!

The post Senior Software Engineer (Remote) appeared first on Indicio.


1Kosmos BlockID

Behind Fingerprint Biometrics: How It Works and Why It Matters

As society becomes more reliant on technology, the need to protect confidential data grows. One innovative way organizations are keeping information safe is through fingerprint biometrics. In this article, we will explore the science of fingerprint biometrics and highlight its potential for security. We will analyze how security biometrics can be utilized and how this technology … Continued

As society becomes more reliant on technology, the need to protect confidential data grows. One innovative way organizations are keeping information safe is through fingerprint biometrics. In this article, we will explore the science of fingerprint biometrics and highlight its potential for security. We will analyze how security biometrics can be utilized and how this technology shapes our present and future security landscapes.

Key Takeaways

Fingerprint Uniqueness: The patterns of an individual’s fingerprints are uniquely influenced by both genetic and environmental factors. They serve as an effective and dependable identification method.
Scanner Diversity: Different fingerprint scanners (optical, capacitive, ultrasonic, and thermal) address diverse security requirements. These scanners differ in cost, accuracy, durability, and spoofing resistance.
Biometrics Future: Despite the powerful security advantages of fingerprint biometrics, issues like potential data theft and privacy violations demand continuous technological evolution and robust legal safeguards. Future prospects for the field include 3D fingerprint imaging, AI integration, and advanced anti-spoofing techniques.

What Are Fingerprint Biometrics?

Fingerprint biometrics is the systematic study and application of unique physical attributes inherent in an individual’s fingerprints. Representing a more dependable identification method than traditional passwords or identity cards, fingerprint biometrics eliminates the issues of misplacement, forgetfulness, or theft. The distinctive nature of each person’s fingerprint ensures a robust barrier against unauthorized access to secure data.

The Science of Uniqueness: Fingerprint Origins

Fingerprints are nature’s signature of a person’s identity. Using fingerprints as a biometric identification tool dates back to ancient Babylon and has roots in our evolutionary biology. The friction ridges on our fingertips that comprise these prints have been crucial to human survival, helping us grip and touch objects.

Genetics and Environmental Factors: The Roots of Fingerprint Uniqueness

Fingerprints are formed during the embryonic stage and remain unaltered throughout an individual’s life. No two individuals, not even identical twins, share the same fingerprint. The basis for this uniqueness can be traced back to the genetic and environmental factors that influence the development of fingerprints.

Aspects of fingerprints that are analyzed include:

Patterns: The general pattern or type of fingerprint (arch, loop, or whorl) is inherited through genetics.
Minutiae: The precise details of the ridges, known as minutiae, are influenced by random and unpredictable factors such as pressure, blood flow, and position in the womb during development.
Ridges: Each ridge in a fingerprint contains several minutiae points, which can be bifurcations (where one ridge splits into two) or ridge endings.

The distribution and layout of minutiae points vary in every individual, contributing to the specific characteristics of each fingerprint. It is these characteristics that biometric systems analyze when comparing and matching fingerprints.
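To make the comparison of minutiae concrete, here is a minimal sketch of how a matcher might represent and score minutiae points; the field names, tolerances, and scoring rule are illustrative assumptions rather than any particular vendor’s format:

```python
from dataclasses import dataclass
from math import hypot

@dataclass(frozen=True)
class Minutia:
    """A single minutia point: position, ridge orientation, and type."""
    x: float        # position in pixels
    y: float
    angle: float    # ridge orientation in radians
    kind: str       # "bifurcation" or "ridge_ending"

def match_score(template: list[Minutia], candidate: list[Minutia],
                dist_tol: float = 10.0, angle_tol: float = 0.26) -> float:
    """Return the fraction of template minutiae with a compatible
    counterpart in the candidate (same type, nearby, similar orientation)."""
    if not template:
        return 0.0
    matched = 0
    for t in template:
        for c in candidate:
            if (t.kind == c.kind
                    and hypot(t.x - c.x, t.y - c.y) <= dist_tol
                    and abs(t.angle - c.angle) <= angle_tol):
                matched += 1
                break
    return matched / len(template)
```

A production matcher would first align the two point sets for rotation and translation and would weigh global ridge structure as well; this sketch assumes the prints are already aligned.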

Behind the Screen: How Fingerprint Biometrics Work

Fingerprint recognition is achieved through three steps.

A fingerprint scanner captures the fingerprint, converting the physical pattern into a digital format. The automated recognition system then processes this image to extract distinctive features, forming a unique pattern-matching template. Finally, the fingerprint recognition system matches this template against stored identification or identity verification templates.
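Continuing the sketch above, the three steps might be wired together as follows. The in-memory store and the fake_extract helper are stand-ins for a real scanner driver and feature extractor, reusing the Minutia and match_score definitions from the previous sketch:

```python
# Minimal enroll/verify flow around an in-memory template store.
# fake_extract stands in for a real feature extractor, which would
# operate on captured scanner images rather than pre-computed tuples.

TEMPLATE_STORE: dict[str, list[Minutia]] = {}

def fake_extract(points) -> list[Minutia]:
    """Stand-in extractor: wraps pre-computed (x, y, angle, kind) tuples."""
    return [Minutia(x, y, a, k) for x, y, a, k in points]

def enroll(user_id: str, points) -> None:
    TEMPLATE_STORE[user_id] = fake_extract(points)        # steps 1 and 2

def verify(user_id: str, points, threshold: float = 0.6) -> bool:
    live = fake_extract(points)                           # steps 1 and 2
    return match_score(TEMPLATE_STORE[user_id], live) >= threshold  # step 3

enroll("alice", [(10, 12, 0.50, "ridge_ending"), (40, 44, 1.10, "bifurcation")])
print(verify("alice", [(11, 12, 0.52, "ridge_ending"),
                       (41, 45, 1.08, "bifurcation")]))   # True
```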

Depending on the exact type of fingerprint scanner a business uses, the scanner may use optical, capacitive, ultrasonic, or thermal technologies. Each fingerprint technology has its strengths and weaknesses and will vary in cost, accuracy, and durability.

While the efficacy of biometric scanners is unquestionable, questions about their safety often arise. Potential challenges of fingerprint and facial recognition systems include false acceptance or rejection and biometric data theft.

Although rare, a false acceptance can lead to unauthorized access, while a false rejection can deny access to legitimate users. Furthermore, if biometric data is compromised, the repercussions can be severe, given that fingerprints, unlike passwords, cannot be changed.

However, continuous technological advancements aim to mitigate these risks. Enhanced encryption techniques, anti-spoofing measures, identity verification, and continuous authentication are ways technology addresses these concerns. Together, these enhancements can improve the reliability and security of fingerprint biometrics.

In-depth Look at Fingerprint Scanners: Optical vs. Capacitive vs. Ultrasonic vs. Thermal

There are various types of fingerprint scanners, each with strengths and weaknesses.

Optical scanners

Optical scanners are the most traditional type. They take a digital fingerprint picture using a light source and a photodiode (a device that turns light into electrical current). Optical scanners are simple to use but can be easily fooled with a good-quality fingerprint image.

Capacitive scanners

Commonly found in smartphones, capacitive scanners use electrical current to sense and map the ridges and valleys of a fingerprint. They offer higher resolution and security than optical scanners but can be sensitive to temperature and electrostatic discharge.

Ultrasonic scanners

Ultrasonic scanners are considered more secure than optical and capacitive scanners. They use high-frequency sound waves to penetrate the epidermal layer of the skin. This allows them to capture both the surface and sub-surface features of the skin. This information helps form a 3D image of the fingerprint and makes the scanner less prone to spoofing.

Thermal scanners

This type of scanner is the least common of the four. Thermal scanners detect minutiae based on the temperature differences of the contact surface. However, their high costs and sensitivity to ambient temperature make them less popular choices.

Protecting Biometric Identities: Emerging Methods

As different biometric authentication technologies become more prevalent, safeguarding these identifiers from data breaches has become increasingly crucial. Biometric data, once compromised, cannot be reset or altered like a traditional password, making its protection paramount.

One of the most cost-effective methods for protecting biometric identities is liveness detection. This technology differentiates a live biometric sample from a synthetic or forged one. Such systems can distinguish a live finger from a spoof by analyzing properties of live skin, such as sweat pores and micro-texture.

By detecting bodily responses or using AI to analyze input data for anomalies, liveness detection can add another layer of security to our biometric identification systems.
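As a rough, rule-based illustration of the idea, a liveness check might require every measured skin property to fall within the range expected of live skin. The signal names and ranges below are invented for illustration; real systems typically use trained classifiers over raw sensor data:

```python
def is_live_sample(signals: dict[str, float]) -> bool:
    """Toy liveness check: every measured skin property must fall inside
    the range expected of live skin. Ranges are illustrative, not tuned."""
    live_ranges = {
        "pore_density":   (0.20, 1.00),  # pores per unit area, normalized
        "texture_var":    (0.10, 0.90),  # micro-texture variance
        "moisture_level": (0.15, 0.95),  # sweat/moisture reading
    }
    return all(lo <= signals.get(name, -1.0) <= hi
               for name, (lo, hi) in live_ranges.items())

print(is_live_sample({"pore_density": 0.5, "texture_var": 0.4,
                      "moisture_level": 0.6}))   # True: plausible live skin
print(is_live_sample({"pore_density": 0.0, "texture_var": 0.02,
                      "moisture_level": 0.05}))  # False: looks like silicone
```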

Decentralized storage methods, such as blockchain technology, are another avenue for safeguarding biometric data. Instead of storing the data in a central database, it’s dispersed across multiple nodes. These numerous locations make it nearly impossible for hackers to access the entire dataset. While this technology is promising, it’s still nascent and faces scalability and energy efficiency issues.

Potential Issues with Fingerprint Biometrics and Solutions

Fingerprint biometrics has its challenges; a common issue users face is the quality of the scanned fingerprint. Poor-quality images can deny legitimate users access to facilities, databases, or other secure locations.

Factors that can affect a person’s fingerprint quality include:

A person’s age
Substances on an individual’s hand
Skin conditions like dermatitis
Manual labor

Furthermore, some systems can be fooled by artificial fingerprints made from various materials like silicone or gelatin, a practice known as spoofing.

Multi-factor authentication, which requires more than one form of identification, is an increasingly used method to enhance security.

Securing Biometric Data: Ethical and Legal Considerations

While biometric authentication offers many significant benefits, it presents unprecedented privacy and data security challenges. Biometric data, unlike other forms of personal data, is intimately tied to our physical bodies. This makes its theft or misuse potentially more invasive and damaging.

The legal landscape for biometric data is still evolving. In many jurisdictions, existing privacy laws may not be sufficient to cover biometric data, leaving gaps in protection. Stricter regulation and enforcement may be necessary to ensure that all biometric information is collected, stored, and used in a manner that respects individual privacy.

Biometric data security isn’t just about preventing unauthorized access to biometric identifiers. It also involves ensuring that the biometric data, once collected, isn’t used for purposes beyond what was initially agreed. This could include selling the data to third parties or using it for surveillance.

Fingerprint Biometrics in Action: Real-world Applications and Impact

Fingerprint biometrics extends beyond personal devices and is a cornerstone of modern security across various sectors.

Fingerprints provide irrefutable evidence for law enforcement and forensics teams by helping identify and track suspects. Moreover, businesses and institutions leverage fingerprint biometrics for physical access control, ensuring that only authorized personnel can enter certain premises.

The advent of smartphones equipped with fingerprint sensors has improved the customer experience and fortified personal device security. Users can unlock their phones, authenticate payments, and secure apps by simply touching a sensor. This biometric authentication offers convenient access control and security while remaining cost-effective.

Smart ID cards incorporating fingerprint biometrics are increasingly used in various sectors.

Not surprisingly, government and military operations make frequent use of this type of biometric security. However, an automated fingerprint identification system can also be employed in the healthcare industry to allow individuals to gain access to restricted areas. Employees in the education sector can use fingerprint biometrics to enter schools and universities. The corporate world can use this technology to prevent identity theft. Financial systems also integrate fingerprint biometrics, adding a layer of protection over transactions and access to financial services. It helps reduce fraud and ensure customer trust, making it a valuable tool in banking and financial security.

Real-world case studies illustrate successful implementations of fingerprint identification and other security biometrics, including visa applications and the distribution of government benefits. Whether securing a company’s computer systems and premises or identifying criminals, fingerprint biometrics has proven its value and substantially impacted security.

Beyond the Horizon: Future Trends and Innovations in Fingerprint Biometrics

Fingerprint biometrics, like all technologies, continues to evolve. Several trends and innovations promise to enhance the capabilities and applicability of this technology.

The most recent advancements include 3D fingerprint imaging, which provides a more detailed fingerprint representation that enhances accuracy. Anti-spoofing techniques are also being developed to combat attempts to trick fingerprint sensors with fake fingerprints.

Integrating artificial intelligence (AI) and machine learning offers immense possibilities. These technologies can help improve fingerprint recognition algorithms, making them more accurate and adaptable.

Despite these advancements, it’s essential to acknowledge that striking a balance between security and privacy with fingerprint technologies remains challenging. As biometric techniques evolve, unfortunately, so will privacy concerns.

Diversifying Biometric Security: Face and Iris Recognition

While fingerprints are a common biometric security method, they aren’t the only one available. For instance, facial recognition technology and iris scanning add more layers of protection.

Facial recognition

Facial recognition technology uses machine learning algorithms to identify individuals based on their facial features. This technology has seen an increase in use in recent years, especially in surveillance systems and mobile devices. Despite concerns about privacy and misuse, facial recognition is undeniably a powerful tool in biometrics and homeland security.

Iris scanning

Another form of biometric identification is iris scanning. This technique scans a person’s irises using infrared technology and analyzes their individual patterns. Iris scans offer a higher level of security due to the iris’s complex structure, which remains stable throughout an individual’s life. However, it can be more expensive and more complicated to implement than other forms of biometrics.

Integrating these methods with fingerprint biometrics can create a multi-modal biometric system, providing businesses with more reliable and robust security.
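As a sketch of how such fusion might work, per-modality match scores can be combined into a single accept/reject decision; the weights and threshold here are arbitrary assumptions:

```python
def fused_decision(scores: dict[str, float],
                   weights: dict[str, float] | None = None,
                   threshold: float = 0.6) -> bool:
    """Combine per-modality match scores (each in [0, 1]) into one
    accept/reject decision via a weighted sum (score-level fusion)."""
    weights = weights or {"fingerprint": 0.5, "face": 0.3, "iris": 0.2}
    fused = sum(weights[m] * scores.get(m, 0.0) for m in weights)
    return fused >= threshold

# A strong fingerprint match can compensate for a weaker iris score:
print(fused_decision({"fingerprint": 0.9, "face": 0.7, "iris": 0.5}))  # True
```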

Implementing Fingerprint Biometrics with BlockID

Biometric authentication like fingerprint biometrics is key to combating threats. BlockID’s advanced architecture aligns with these principles, transitioning from traditional device-centric to individual-centric authentication, thus reducing risks.

Here’s how BlockID achieves this:

Biometric-based Authentication: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
Identity Proofing: BlockID provides tamper-evident and trustworthy digital verification of identity – anywhere, anytime and on any device with over 99% accuracy.
Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture, and the encrypted data is only accessible by the user.
Distributed Ledger: 1Kosmos protects personally identifiable information in a private and permissioned blockchain and encrypts digital identities, which are only accessible by the user. The distributed properties ensure there are no databases to breach or honeypots for hackers to target.
Interoperability: BlockID can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
Industry Certifications: Certified to and exceeding the requirements of the NIST 800-63-3, FIDO2, UK DIATF, and iBeta DEA EPCS specifications.

With its unique and advanced capabilities, fingerprint biometrics is leading the way in enhancing security across diverse industries. It represents an innovative solution that can significantly strengthen your cybersecurity strategy. If you’re considering integrating more biometric measures into your cybersecurity toolkit, BlockID supports various kinds of biometrics out of the box. Schedule a call with our team today for a demonstration of BlockID.

The post Behind Fingerprint Biometrics: How It Works and Why It Matters appeared first on 1Kosmos.


KuppingerCole

Web Application Firewalls

by Osman Celik This report provides up-to-date insights into the Web Application Firewall (WAF) market. We examine the market segment, vendor service functionality, relative market share, and innovation to help you to find the solution that best meets your organization's needs.

by Osman Celik

This report provides up-to-date insights into the Web Application Firewall (WAF) market. We examine the market segment, vendor service functionality, relative market share, and innovation to help you to find the solution that best meets your organization's needs.

IDnow

Open for business: Understanding gambling regulation in Peru.

Although Peru has a long-standing relationship with gambling and is one of a few South American countries to have allowed local and international companies to operate online casinos and betting sites, recent regulations have changed the game. Here’s what domestic and international operators need to know. Hot on the heels of Colombia, Argentina (Buenos Aires, […]
Although Peru has a long-standing relationship with gambling and is one of a few South American countries to have allowed local and international companies to operate online casinos and betting sites, recent regulations have changed the game. Here’s what domestic and international operators need to know.

Hot on the heels of Colombia, Argentina (Buenos Aires, Mendoza and Córdoba), and, most recently, Brazil, Peru has decided to introduce new regulations and gaming licenses for local and international operators. 

Prior to October 2023, although online gambling in Peru was permitted by the constitution, it operated within a rather relaxed regulatory framework. 

“The Peruvian market has grown exponentially in the last few years and from the point of view of the customers, the operators and the government, it was mandatory to be on the regulated side of the industry. Recent changes will attract investors, generate an increase in government collection, protect players through responsible gaming and reduce illegal operators,” said Nicholas Osterling, Founder of Peru gaming platform, Playzonbet.

Brief history of gambling in Peru.

Peru’s gambling industry has undergone significant changes since it legalized land-based casinos in 1979. Key milestones over the past four decades have included tax regulations and ethical guidelines for casinos.  

Online gambling regulations emerged later, with the first online casino license issued in 2008. Unlike other South American countries, Peru has never explicitly banned offshore or local online casinos, as long as they met the established domestic standards for any company. This relatively open market approach has attracted numerous international and local brands.

Clearing the regulatory path for operators.

MINCETUR, Peru’s Ministry of Foreign Trade and Tourism, is the national administrative authority in charge of regulating, implementing, and overseeing all aspects of online gaming and sports betting in Peru.

Responsibilities include:  

Issuing licenses to qualified operators.
Monitoring operator activities for compliance with regulations.
Enforcing fines, sanctions, or criminal proceedings for non-compliance.
Fostering a safe gambling environment especially for players. 

Within MINCETUR, the Directorate General of Casino Games and Gaming Machines (DGJCMT) is an important executive body that is particularly instrumental in ensuring player protection, improving game quality, and enforcing regulations. 

In August 2022, Peru passed Law No. 31557 and its amendment, Law No. 31806, which established a comprehensive legal framework for most gambling activities. These laws were further clarified in 2023 by Supreme Decree No. 005-2023, which provided detailed regulations for online sports betting and other real-money gaming services. 

Under these new regulations, international online casino operators must obtain a license from the DGJCMT to operate legally. The decree also outlines the process for obtaining and maintaining a license. Penalties for non-compliance include hefty fines and possible exclusion from the market. 

MINCETUR set a pre-registration phase from February 13 – March 13 for domestic and international gambling operators already active in the market. During this phase, remote gaming and sports betting operators, certification laboratories, and service providers could register their interest for preliminary consideration of their applications. Juan Carlos Mathews, Peru’s Minister of Foreign Trade and Tourism, confirmed that 145 requests had been received from both national and international companies and for both casino and sports betting business units. Although this stage is now closed, newcomers to the Peru market can continue to apply.

Objectives of the new Peru gaming license.

The new Peru gambling license places safety and consumer protection at the forefront of its objectives. With a focus on ensuring a secure environment for players, promoting responsible gambling practices, and formalizing online gaming and sports betting activities, these regulations aim to create a robust and transparent framework for the Peruvian gambling industry. 

Overall, the Peruvian government is opting for a more simplified approach compared to the complex structure of the Brazilian gambling license.

Secure gaming environment for consumers.

The regulations prioritize security by implementing measures to protect consumers. Age verification requirements and participation restrictions ensure that only adults engage in gambling activities, while strict measures will be in place to prevent money laundering and fraud, fostering a safe and secure environment for players. A Know Your Customer (KYC) process must also be in place to verify the age, identity, and nationality of players.

Registration and verification.

To create an account on a gaming platform, players must register with the following data (a minimal validation sketch follows the list):

i. Full name(s) and surname(s);

ii. Type of identification document;

iii. Identification document number;

iv. Date of birth;

v. Nationality;

vi. Address (address, district, province, and department); and

vii. Statement regarding their status as a Politically Exposed Person (PEP).
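A minimal sketch of what checking such a registration record might look like in code; the field names and rules are illustrative assumptions drawn from the list above, not MINCETUR’s actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Registration:
    full_name: str
    doc_type: str        # e.g., "DNI" or "passport"
    doc_number: str
    date_of_birth: date
    nationality: str
    address: str         # address, district, province, and department
    is_pep: bool         # Politically Exposed Person declaration

def validate(reg: Registration, today: date) -> list[str]:
    """Return a list of problems; an empty list means the record passes
    the basic checks (required fields present, player is 18 or older)."""
    problems = []
    for field in ("full_name", "doc_type", "doc_number",
                  "nationality", "address"):
        if not getattr(reg, field).strip():
            problems.append(f"missing {field}")
    # Approximate age; a production check would compare calendar dates.
    age = (today - reg.date_of_birth).days // 365
    if age < 18:
        problems.append("player is under the legal age of 18")
    return problems
```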

Promotion of responsible gaming. 

The new Peruvian regulations promote responsible gaming practices by encouraging operators to implement self-exclusion tools and support programs for players struggling with problem gambling. By raising awareness of the potential risks, the regulations aim to mitigate harm and promote responsible behavior within the industry. Only individuals of legal age (18 years) can register and access a user and gaming account. Gaming accounts will be blocked when verification of the person’s identity is unsuccessful or it is determined that the player is registered on any exclusion list.

Preparing for compliance. 

During the application process, certification laboratories, remote sports betting and gaming operators, and service providers will be asked to enter their details on the MINCETUR website.  

Although the website is only available in Spanish, international operators are advised to devote the necessary resources to enter information correctly. Accurate information provided during pre-registration is critical to avoid delays and ensure the smooth processing of license applications. 

Operators must also ensure they have the necessary technical infrastructure in place, such as robust KYC checks.

Operators who do not have a license or fail to comply will face:

Significant fines.
Revocation of licenses.
Potential criminal charges.

Fines for non-compliance in the Peru gambling market.

Operators who fail to obtain a license while continuing to offer remote gaming could face fines of up to 990,000 Sol, which amounts to approximately £207,000.  

If a licensee fails to verify the identity, age, and nationality of players as required by MINCETUR regulations, a fine of 50 to 150 Peruvian tax units (UIT) will be imposed, which amounts to roughly £53,500 to £160,600. It is therefore crucial for operators to implement solid KYC procedures.
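For context on how these figures arise, the sketch below parameterizes the conversion. The UIT value (assumed here at the 2024 rate of 5,150 Sol) and the Sol-to-pound exchange rate are both assumptions that change over time:

```python
def fine_range_gbp(uit_low: int, uit_high: int,
                   uit_value_sol: float = 5_150.0,  # assumed 2024 UIT value
                   sol_per_gbp: float = 4.8) -> tuple[float, float]:
    """Convert a fine expressed in Peruvian tax units (UIT) to pounds."""
    def to_gbp(uit: int) -> float:
        return uit * uit_value_sol / sol_per_gbp
    return to_gbp(uit_low), to_gbp(uit_high)

low, high = fine_range_gbp(50, 150)
print(f"£{low:,.0f} to £{high:,.0f}")  # roughly £53,600 to £160,900
```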

Article 243-C of the new law also imposes prison sentences of up to four years for those found to be operating online casino games or sports betting without a proper license. 

Operating in the Peruvian market without a license may result in exclusion from the market and possible prosecution. 

Peru aims to generate more revenue from the new Peruvian gaming license by introducing a special gaming tax. The tax rate is set at 12% of the net profits from online gambling activities, to be paid by licensed operators, both domestic and foreign alike. 

The government estimates that the new regulations will generate tax revenues of approximately 162 million Sol (£33.9 million) per year, putting the total size of the Peru gambling market at around £1 billion. 

Full enforcement of the new regulations, including licensing requirements and potential penalties for non-compliance, began on April 1, 2024. 

All companies, both domestic and foreign, operating in the Peruvian online gaming market must comply with the new regulations. Failure to obtain a license while continuing operations will result in fines, exclusions, and criminal charges. 

To be eligible for a Peruvian gambling license, operators must adhere to security protocols, implement KYC processes to verify player identity and age, and have sound responsible gaming policies in place. Provisions of around £1.2 million must also be in place to prevent money laundering and financial fraud.

The future of gambling in Peru.

Implementing a licensing regime will obviously have an impact on the profitability of gaming operators in the market. However, the online gaming market in Peru is also expected to grow at a minimum rate of 6.4% per annum.  

A regulated online gambling market ensures a clear legal framework for operators to conduct their business and reduces potential legal risks. Increased consumer confidence also leads to higher revenues. 

“Peru is a very traditional market when it comes to sports betting. We Peruvians love soccer, and we are very passionate about betting. The industry here has had several years of growth and now that it will become a regulated market it will be even more attractive for vendors and operators that are only interested in these types of jurisdictions,” added Nicholas. 

With a stable economy and a growing middle class among its 33 million inhabitants, Peru is considered one of the most attractive markets for international gaming operators. 

As Peru emerges as a key player in the Latin American gaming market, operators must quickly adapt to the new regulatory landscape to ensure sustainable growth and success in this dynamic industry.

Learn more about how to succeed in the South American gambling market by implementing robust KYC processes.
Read our interview with Brazilian lawyer, Neil Montgomery for insights into the pros and cons of regulation, the importance of KYC, and why Brazilian gambling operators may have the upper hand on their stronger foreign counterparts.
Or perhaps you’re interested in expanding to Brazil, then read our ‘Unpacking the complexities of the Brazilian gambling license structure’ blog.

By

Ronaldo Kos,
Head of Latam Gaming at IDnow
Connect with Ronaldo on LinkedIn


Ocean Protocol

DF83 Completes and DF84 Launches

Predictoor DF83 rewards available. Passive DF & Volume DF are pending ASI merger vote. DF84 runs Apr 4 — Apr 11, 2024 Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions via Predictoor. Ocean Protocol is joining with Fetch and SingularityNET to form the Superintelligence Alliance, with a unified token
Predictoor DF83 rewards available. Passive DF & Volume DF are pending ASI merger vote. DF84 runs Apr 4 — Apr 11, 2024

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions via Predictoor.

Ocean Protocol is joining with Fetch and SingularityNET to form the Superintelligence Alliance, with a unified token $ASI. This is pending a vote of “yes” from the Fetch and SingularityNET communities, a process that will take several weeks. This Mar 27, 2024 article describes the key mechanisms.
There are important implications for veOCEAN and Data Farming. The article “Superintelligence Alliance Updates to Data Farming and veOCEAN” elaborates.

Data Farming Round 83 (DF83) has completed. Passive DF & Volume DF rewards are on pause, pending the ASI merger votes. Predictoor DF claims run continuously.

DF84 is live today, April 4. It concludes on Apr 11.

Here is the reward structure for DF84:

Predictoor DF is like before, with 37,500 OCEAN rewards and 20,000 ROSE rewards.
The rewards for Passive DF and Volume DF are on pause, pending the ASI merger votes.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

Data Farming is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions.

DF83 Completes and DF84 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Oracle Cloud Guard from CSPM to CNAPP

by Mike Small When an organization uses a cloud service, it must make sure that it does this in a way that is secure and complies with their obligations. Oracle Cloud Infrastructure (OCI) provides a broad set of integrated cloud security services to help its customers achieve these objectives. Oracle continuously innovates to improve these services and Oracle Cloud Guard has now been enhanced to

by Mike Small

When an organization uses a cloud service, it must make sure that it does this in a way that is secure and complies with their obligations. Oracle Cloud Infrastructure (OCI) provides a broad set of integrated cloud security services to help its customers achieve these objectives. Oracle continuously innovates to improve these services and Oracle Cloud Guard has now been enhanced to provide a complete Cloud Native Application Protection Platform (CNAPP) for OCI.

Complementary User Entity Controls

To meet their security and compliance obligations when using OCI, the tenant must implement the appropriate controls. The American Institute of CPAs® (AICPA) provides attestations of the security and compliance of cloud services. OCI has a Service Organization Controls (SOC) 2 Type 2 attestation that affirms that controls relevant to the AICPA Trust Services Security and Availability Principles are implemented effectively within OCI. This includes a consideration of the Complementary User Entity Controls (CUECs) that the OCI tenant is expected to implement, as well as the capabilities provided by OCI to support these.

Figure 1: Complementary User Entity Controls

OCI offers a full stack of cybersecurity capabilities to help the tenant prevent, protect against, monitor, and mitigate cyber threats, control access, and encrypt data. These include Oracle Cloud Guard, which was first launched in 2020 and enhanced in 2022 to provide Cloud Security Posture Management (CSPM) for OCI. This detects misconfigurations, insecure activity, and threat activities and provides visibility to triage and resolve cloud security issues.

Oracle Cloud Guard CSPM, together with the other OCI security services, helps the OCI tenant to demonstrate how their CUECs meet their security and compliance objectives.

Oracle Cloud Guard for CSPM

Oracle Cloud Guard is an OCI service that helps OCI tenants to maintain a strong security posture on Oracle Cloud. The tenant can use the service to examine their OCI resources for security weaknesses related to their OCI configuration and to monitor their OCI administrators for risky activities. When Cloud Guard detects weaknesses, it can identify appropriate corrective actions and assist in or automate implementing these.

Figure 2: OCI CSPM Storage Bucket Risks Example

Cloud Guard detects security problems within a tenant OCI environment by ingesting activity and configuration data about their resources in each region, processing it based on detector rules, and correlating the problems at the reporting region level. Identified problems can be used to produce dashboards and metrics and may also trigger one or more inbuilt responders to help resolve the problem. 
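Conceptually, a detector rule is a named predicate evaluated over ingested resource configuration. The sketch below illustrates that idea only; the rules are invented, and real Cloud Guard detector recipes are configured within OCI rather than written as user code:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Problem:
    resource_id: str
    rule_name: str
    severity: str

# Invented detector rules: each is a named predicate over a resource's
# configuration. Real Cloud Guard recipes are configured in OCI itself.
RULES: list[tuple[str, str, Callable[[dict], bool]]] = [
    ("public-bucket", "CRITICAL",
     lambda r: r.get("type") == "bucket" and r.get("public_access", False)),
    ("admin-without-mfa", "HIGH",
     lambda r: r.get("type") == "user" and r.get("is_admin", False)
               and not r.get("mfa_enabled", False)),
]

def detect(resources: Iterable[dict]) -> list[Problem]:
    """Run every rule over every ingested resource and collect problems."""
    return [Problem(r["id"], name, severity)
            for r in resources
            for name, severity, predicate in RULES
            if predicate(r)]

print(detect([{"id": "b1", "type": "bucket", "public_access": True}]))
```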

Oracle Cloud Guard works together with Oracle Security Zones to provide an always-on security posture. With Security Zones and Cloud Guard the OCI tenant can define policy compliance requirements for groups of resources. Security Zones and Cloud Guard can then enforce these policies to automatically correct and log any violations.  

Cloud Security Posture Management is a valuable tool for organizations to ensure that they use OCI in a secure and compliant manner. OCI provides a very comprehensive range of capabilities for the tenant to secure their use of the services.  Oracle Cloud Guard CSPM is one of these and is backed by the expertise and experience of Oracle’s technical teams. 

Cloud Guard for CNAPP

The distinctive feature of CNAPP is the integration of several capabilities that were previously offered as standalone products. These most often include Cloud Security Posture Management (CSPM) for identifying vulnerabilities and misconfigurations in cloud infrastructures, Cloud Workload Protection Platforms (CWPP) that deal with runtime protection of workloads deployed in the cloud (such as virtual machines, containers, and Kubernetes, as well as databases and APIs), and Cloud Infrastructure Entitlement Management (CIEM) for centralized management of rights and permissions across (multi-) cloud environments. Cloud Service Network Security (CSNS) is sometimes included as well, combining such capabilities as web application firewalls, secure web gateways, and DDoS protection. OCI Security Services include many of these capabilities. 

Cloud Guard has provided CSPM capabilities since its launch in 2020.  It has now been enhanced to offer further cloud native application security capabilities. 

Cloud Guard Log Insights Detector

Cloud Guard Log Insights Detector, which is not yet generally available, provides a flexible way to capture specific events from logs available in the OCI Logging service. It allows customers to mine all their logs, augmenting out-of-the-box controls to cover all resources and services.

It continuously monitors audit, service, and custom logs from Oracle IaaS, PaaS, and SaaS across all subscribed regions, and can be used to detect malicious events that may indicate a threat or a risk that needs to be investigated based on user-defined queries. Data from all services (like VCN flow logs, Object Storage or WAF), the OCI event audit trail and custom application logs can be accessed in every region, and results be centralized for consolidated alerting. 

Figure 3: OCI Log Insights Detector

Cloud Guard Instance Security

This provides controls to manage risks and exposures at the compute server, microservices instance / container level. It detects suspicious runtime activities within OCI VMs based on MITRE ATT&CK and creates alerts in real time. It also monitors the integrity of critical system and application files. It comes with a range of predefined detection recipes, based on Oracle’s knowledge and OCI recommended best practices. These can be supplemented with ad hoc and custom scheduled queries.

Cloud Guard Container Security

Cloud-Native Applications are built using a microservices architecture based on containers. Microservices, containers, and Kubernetes have become synonymous with modern DevOps methodologies, continuous delivery, and deployment automation and are seen as a breakthrough in the way to develop and manage cloud-native applications and services.

Figure 4: Examples of container-related risks.

However, this approach brings new security challenges and attempts to repurpose existing security tools to protect containerized and microservice-based applications have proven to be inadequate due to their inability to adapt to the scale and ephemeral nature of containers. Static security products that focus on identifying vulnerabilities and malware in container images, while serving a useful purpose, do not address the full range of potential risks.

Oracle Kubernetes Engine (OKE) is an OCI platform for running Kubernetes workloads. Oracle Cloud Guard has been extended to include Kubernetes Security Posture Management (KSPM) for OKE. This helps to protect the DevOps pipeline processes and containers throughout their lifecycle from security vulnerabilities.  It includes out-of-the-box configuration policies based on Oracle best practices.  The rules also align with industry accepted best practices like CIS benchmarks and regulatory frameworks like US FedRAMP.

From CSPM to CNAPP

Since its inception in 2020 Oracle Cloud Guard has enabled OCI tenants to measure their security posture for OCI. These new capabilities extend Cloud Guard beyond CSPM to proactively manage the security of cloud native applications developed and deployed in OCI. This supports Oracle’s vision to make OCI the best platform for enterprises to develop and deploy secure and compliant applications.  Organizations using OCI should review these new capabilities and adopt them where appropriate.


Managing Cloud Data Migration Risks

by Mike Small Data is the most valuable asset of the modern organization but protecting and controlling it when migrating to cloud services is a major challenge. This report provides an overview of how the Protegrity Data Platform can help organizations to meet these challenges.

by Mike Small

Data is the most valuable asset of the modern organization but protecting and controlling it when migrating to cloud services is a major challenge. This report provides an overview of how the Protegrity Data Platform can help organizations to meet these challenges.

Metadium

Metadium 2024 Q1 Activity Report

Dear Metadium Community, In the first quarter of 2024, Metadium made significant advances in Decentralized Identity (DID). Here’s a summary of Metadium’s key achievements and developments from the past quarter. The first quarter of 2024 saw a total of 9,011,158 transactions and 56,953 DID wallets created. As part of our new Metadium plan, we updated our website.

Dear Metadium Community,

In the first quarter of 2024, Metadium made significant advances in Decentralized Identity (DID). Here’s a summary of Metadium’s key achievements and developments from the past quarter.

Summary

The first quarter of 2024 saw a total of 9,011,158 transactions and 56,953 DID wallets created.
As part of our new Metadium plan, we updated our website. It represents the growth of Metadium and the progress towards the 2024 vision map.
The 2024 vision map was released, outlining the goals for the new Metadium.
We unified non-circulating wallets for better management of Metadium wallets.

Technology

Q1 Monthly Transactions

During the first quarter of 2024, there were 9,011,158 transactions and 56,953 DID wallets created (as of April 1st).

Website Update

We are happy to announce a new look for our website. This update is part of our new Metadium plan, representing our growth and progress toward our 2024 vision map.

Find more information here.

Metadium 2024 Vision Map

The 2024 vision map envisions improvements to the blockchain network, governance changes, and expanded usability of DID.

Metadium aims to create new markets through its evolving technology and will continue to break down the boundaries between Web2 and Web3 and create new value-added service models.

Find more information here.

Non-circulating wallet unification plan

We’ve announced and completed the plan to unify Metadium’s non-circulating wallets.

Find more information here.

Metadium is committed to fostering growth and innovation within the blockchain ecosystem. Through continuous research and developmental efforts and enhanced collaboration with partners within the ecosystem, we aim to drive progress forward.

We appreciate the commitment and understanding of the Metadium community. We will continue with our commitment to work hard and bring blockchain technology closer to people’s daily lives.

Metadium Team

Hello, this is the Metadium team.

In the first quarter of 2024, Metadium continued to make important advances in the DID field. Below is a summary report of Metadium’s key achievements and developments over the past winter.

Summary

From January through March 2024, a total of 9,011,158 transactions and 56,953 DID wallets were created.
As part of the new Metadium plan, the website was updated. It reflects Metadium’s growth and progress toward the 2024 vision map.
The Metadium 2024 vision map, containing the goals of the new Metadium, was announced.
Non-circulating wallets were consolidated for smoother management of Metadium wallets.

Technology

Q1 Monthly Transactions

From January through March 2024, a total of 9,011,158 transactions and 56,953 DID wallets were created (as of April 1st).

Metadium Website Update

The Metadium website has a new look. This homepage update was carried out as part of the new Metadium plan and reflects Metadium’s growth and progress toward the 2024 vision map.

Find more information here.

Metadium 2024 Vision Map

Through its 2024 vision map, Metadium is planning improvements to the blockchain network, changes to governance, and expanded usability of DID.
Metadium aims to create new markets through its evolving technology and will continue working to break down the boundaries between Web2 and Web3 and to create new high-value-added service models.

Find more information here.

Non-circulating wallet unification plan

For smoother management of Metadium wallets, we announced and completed the plan to unify Metadium’s non-circulating wallets.

Find more information here.

Metadium is constantly working to foster growth and innovation in the blockchain ecosystem. We aim to drive progress through continuous research and development and strengthened collaboration with partners within the ecosystem.

Metadium promises to remain dedicated to advancing the blockchain ecosystem, and we thank all Metadians for their support.

- The Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Metadium 2024 Q1 Activity Report was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 04. April 2024

KuppingerCole

Modern IAM builds on Policy Based Access

The idea of policy-based access management and providing just-in-time access by authorizing requests at runtime is not new. It has seen several peaks, from mainframe-based approaches for resource access management to XACML (eXtensible Access Control Markup Language) and, more recently, OPA (Open Policy Agent). Adoption is growing, specifically by developers building new digital services. Demand is

The idea of policy-based access management and providing just-in-time access by authorizing requests at runtime is not new. It has seen several peaks, from mainframe-based approaches for resource access management to XACML (eXtensible Access Control Markup Language) and, more recently, OPA (Open Policy Agent). Adoption is growing, specifically by developers building new digital services. Demand is also massive among IAM and cybersecurity people who want to get rid of static access entitlements and standing privileges. The market for PBAM is still very heterogeneous but evolving fast. In the Leadership Compass PBAM, we’ve analyzed the various solutions in this market.
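To illustrate what authorizing at request time, rather than through standing entitlements, looks like, here is a minimal policy-evaluation sketch. The attributes and the policy itself are invented for illustration; a real deployment would typically use an engine such as OPA, with policies written in Rego rather than application code:

```python
from typing import Callable

# A policy maps request attributes to an allow/deny decision at runtime,
# instead of relying on entitlements provisioned ahead of time.
Policy = Callable[[dict], bool]

def finance_reports_policy(req: dict) -> bool:
    """Allow finance staff to read reports during business hours only."""
    return (req.get("resource") == "finance-report"
            and req.get("action") == "read"
            and req.get("department") == "finance"
            and 9 <= req.get("hour", -1) < 18)

def authorize(req: dict, policies: list[Policy]) -> bool:
    """Grant access if any policy allows the request (deny by default)."""
    return any(policy(req) for policy in policies)

request = {"user": "alice", "department": "finance",
           "resource": "finance-report", "action": "read", "hour": 10}
print(authorize(request, [finance_reports_policy]))  # True
```

The point of the pattern is that access is decided when the request arrives, so revoking it requires no deprovisioning: the next evaluation simply returns deny.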

In this webinar, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will look at the status and future of PBAM and the various types of solutions that are available in the market. He will look at the overall ratings for this market segment and provide concrete recommendations on how to best select the vendor, but also will discuss strategic approaches for PBAM.

Join this webinar to learn:

Why we need PBAM.
Which approaches to PBAM are available in the market.
What an enterprise-wide approach/strategy for PBAM should look like.
The Leaders in the PBAM market.


Fission

Farewell from Fission

Fission is winding down active operations. The team is wrapping things up and ending employment and contractor status by the end of May 2024. Fission has been a venture investment funded company since 2019. Our last round, led by Protocol Labs, was raised as part of joining the Protocol Labs network as a “blue team”, focused on protocol research and implementation. The hypothesis was to get paid

Fission is winding down active operations. The team is wrapping things up and ending employment and contractor status by the end of May 2024.

Fission has been a venture-funded company since 2019. Our last round, led by Protocol Labs, was raised as part of joining the Protocol Labs network as a “blue team”, focused on protocol research and implementation. The hypothesis was to get paid by the network, including through Filecoin (FIL) grants for alignment with the network.

In Q4 of 2023, it was clear that our hypothesis of getting paid directly for protocol research wasn’t going to work.

We did a round of layoffs and focused on productizing our compute stack, powered by the IPVM protocol, as the Everywhere Computer. The team has shipped this as a working decentralized, content-addressable compute system in the past six months.

Unfortunately, we have not been able to find further venture fundraising that is a match for us.

What about the projects that Fission works on?

The majority of Fission’s code is available under open source licenses. The protocols we’ve worked on have been developed in working groups, with a community of other collaborators, and have their home base in shared Github organizations:

UCAN: capability-based decentralized auth using DIDs https://github.com/ucan-wg
WNFS: encrypted file system https://github.com/wnfs-wg
IPVM: content-addressable compute https://github.com/ipvm-wg

Various people on the team, as well as other people and organizations, continue to build on the open source code we’ve developed.

Fission does have active publishing and DNS infrastructure that we'll be winding down, and will be reaching out to people about timing for that.

Thanks

Thank you to everyone for your support, interest, and collaboration over the years. Many of us have been involved in protocols and open source implementations for a long time, and have seen them have impact far after they were first created. We're proud of the identity, data, and compute stack we developed, and hope to see them have continued growth across different ecosystems.



YeshID

The Identity Management Struggle: Overpromised, Underdelivered, and How to Fix It

In the world of identity management, the struggle is real. Identity management involves controlling and managing user identities, access rights, and privileges within an organization. At YeshID, we’ve seen it... The post The Identity Management Struggle: Overpromised, Underdelivered, and How to Fix It appeared first on YeshID.

In the world of identity management, the struggle is real. Identity management involves controlling and managing user identities, access rights, and privileges within an organization. At YeshID, we’ve seen it all: from Google App Scripts built inside Sheets to Notion databases, full of outdated and fragmented documentation. We’ve seen people who have unexpectedly inherited the identity management job and people spending all their time reacting to HR onboarding and offboarding surprises. We’ve seen managed service providers with creative solutions that too often fall short. And we’ve seen IAM vendors overpromising integration and seamless system management and delivering upgrade prices and uncontrolled manual processes.

It’s like a tangled web of issues that can leave you feeling trapped and overwhelmed. The result? A complex set of challenges that can hinder productivity, security, and growth:

Workflow Issues

Redundant Workflows: You have workflows dedicated to verifying automation, manually handling unautomatable tasks, and fine-tuning access in each app, including sending requests and reminders to app owners and the time-consuming quarterly access reviews.
Workflow Dependencies: Intertwined workflows make it hard to untangle them, leading to a domino effect when changes are made.
Bottlenecks and Delays: Manual steps and the need to chase approvals slow down processes, causing frustration and reduced efficiency.

Data Management and Accuracy

Data Inconsistency: Manual intervention and multiple workflows increase the likelihood of data inconsistencies, such as discrepancies in user information across different systems, leading to confusion and potential security risks.
Email Address Standardization: Maintaining a consistent email address format (e.g., firstName.lastName@) can help with organization, but ensuring conventions are followed can be complex, especially as the organization grows.

Security

Secure Access: Enforcing secure access practices is non-negotiable, but it’s an uphill battle, including:
MFA: Multi-Factor Authentication adds protection against compromised credentials, but getting everyone to comply can be a challenge.
Secure Recovery Paths: Ensuring account recovery methods aren’t easily exploitable is crucial, but often overlooked, leaving potential gaps in security.
Principle of Least Privilege: Limiting user permissions to only what’s necessary for their roles is a best practice, but permissions can creep up over time, leading to excessive access rights and failing compliance audits.
Regular Updates and Patching: Keeping systems updated and patched is essential to address vulnerabilities and maintain a secure environment.

Compliance

Compliance Concerns: Meticulously designing workflows to collect evidence that satisfies compliance and regulatory requirements is time-consuming and often confusing.

Operational and Growth Challenges

Knowledge Silos: Manual processes mean knowledge is held by a few individuals, creating vulnerabilities and making it hard for others to step in when needed, hindering business continuity.
Audit Difficulties: A mix of automated and manual workflows without proper documentation makes audits challenging and prone to errors, increasing the risk of non-compliance.
Difficulty Scaling: As the organization grows, the complexity of fragmented processes hinders growth potential, making it difficult to onboard new employees and manage access rights efficiently.
Complex Offboarding: Workflows must ensure proper, gradual account removal to balance security, archiving, business continuity, and legal compliance concerns.
Mandatory Training: Tracking mandatory training like security awareness within the first month of employment is an ongoing struggle.
Group and OU Assignments: Correctly placing users in groups and organizational units is key for managing permissions, but automating this requires careful alignment between automation rules and the company’s organizational structure, which can be challenging to maintain.

Recommendations: Untangling the Web

YeshID’s YeshList gives you a way to untangle the process web by organizing centrally, distributing the workload, and coordinating actions.

Implement Company-Wide Accountability

Establish a regular cadence for an access review campaign to ensure permissions are regularly reviewed and updated.
Create a simple form for managers to review access for their team members, making it easy for them to participate in the process.
Use a ticketing system or workflow tool to track requests and ensure accountability, providing visibility into the status of each request.

Embrace Role-Based Access Control (RBAC)

Design granular roles based on common job functions to streamline access granting, reducing the need for individual access requests.
Track roles in a spreadsheet, including Role Name, Description, Permissions Included, and Role Owner, to maintain a clear overview of available roles and their associated permissions.
Upgrade to Google Groups for decentralized role ownership, employee-initiated join requests, and automation possibilities, empowering teams to manage their own access needs.
Use RBAC to speed up audits by shifting focus from individual permissions to role appropriateness, simplifying the audit process.
Tool Examples: Google Workspace allows custom roles, but other identity management solutions may offer more robust RBAC capabilities.

Conduct Regular Application-Level Access Reviews

Periodically review user access within each critical application to close potential security gaps and ensure that access rights align with job requirements.
Restrict access to applications using your company’s domain to improve security and prevent unauthorized access from external accounts.
Utilize tools like Steampipe or CloudQuery to automate the integration of application access lists with your employee directory, enabling regular comparisons and alerts for discrepancies, saving time and reducing manual effort.

Invest in Centralized Workflow Management

Consolidate Workflows: Map existing processes, find overlaps, and merge them within a centralized tool.
Prioritize High-Impact Automation First: Target repetitive, time-consuming tasks to get the most value.

Prioritize Data Standardization and Integrity

Define clear rules for email addresses, naming, and data entry, and enforce them during account creation to maintain data consistency across systems (a minimal sketch follows these recommendations).
Implement input validation to catch inconsistencies early, preventing data quality issues from propagating throughout the organization.
Schedule data hygiene checks to identify and correct discrepancies between systems.
Use a tool or script for account creation to ensure consistency.

Strengthen Security with Key Enhancements

Mandate MFA for all accounts.
Review Recovery Methods: Favor authenticator apps or hardware keys over less secure methods.
Regularly review user access levels and enforce least privilege principles.
Use your company’s Identity Provider (IdP) for authentication whenever possible to centralize access control and simplify user management.

Make Compliance a Focus, Not an Afterthought

Document Workflows Thoroughly: Include decision points and rationale for auditing purposes.
Build requirements for proof of compliance directly into your automated workflows.

Tackle Operational Challenges Head-On

Reduce errors with in-workflow guidance, providing clear instructions and prompts to guide users through complex processes.
Cross-train IT team members to reduce single points of failure.
Develop templates for recurring processes to streamline efforts and ensure consistency.
Democratize Identity Management

Empower employees and managers to resolve access requests whenever possible through:

- Automated Approval Workflows: Set up workflows with pre-defined rules to grant access based on criteria.
- Manager Approvals: Delegate access request approvals to direct managers for their teams.
- Self-Service Access Management: Consider a self-service portal for employees to request and manage basic access needs.
- Empowered Employees and Managers: Enable employees and managers to add or remove employee accounts for specific apps as needed.

The Light at the End of the Tunnel

As you evaluate solutions, keep these factors in mind:

- Cost-Effectiveness: Prioritize solutions with free tiers or flexible pricing models.
- Ease of Use: Choose tools with intuitive interfaces to encourage adoption.
- Scalability: Ensure solutions can grow with your company.

Identity management is a critical aspect of any organization’s security and operational efficiency. By recognizing the common challenges and implementing the recommendations outlined in this post, you can untangle the web of identity management struggles and create a more streamlined, secure, and efficient process.

YeshID Orchestration is here to help you on this journey, bringing Identity and Automation closer together for a more consolidated and simpler solution. Don’t let identity management hold you back any longer – take control and unlock the full potential of your organization today. Try it for free today!

The post The Identity Management Struggle: Overpromised, Underdelivered, and How to Fix It appeared first on YeshID.


Entrust

Using Data Analytics to Minimize Rejects in High-Volume Card Production


In the fast-paced and high-stakes industry of high-volume card production, minimizing rejects is crucial not only for operational efficiency but also for maintaining a competitive edge. To achieve this, best-in-class manufacturers are turning to data analytics as a powerful tool to identify, analyze, and address the root causes of rejects. Data analytics is revolutionizing the smart card manufacturing landscape and helping businesses enhance their quality control processes.

The Power of Data Analytics in Manufacturing

Data analytics involves the use of advanced algorithms and statistical methods to analyze large sets of data, extracting meaningful insights and patterns. In the context of high-volume card production, data analytics provides manufacturers with a comprehensive understanding of the entire manufacturing process from start to finish. This insight allows for informed decision-making and targeted improvements in areas prone to defects or rejects.

One of the primary benefits of data analytics in minimizing rejects is its ability to identify and highlight patterns and anomalies in the manufacturing process. By analyzing historical data and trends, manufacturers can pinpoint specific stages or conditions that lead to a higher likelihood of rejects. This proactive approach enables preemptive measures to be implemented, reducing the occurrence of defects before they become a serious issue.

Data analytics also facilitates real-time monitoring of the manufacturing process. With sensors on the equipment, manufacturers can collect and analyze data in real-time, allowing for immediate identification of anomalies or deviations from established quality standards. This enables swift corrective actions, minimizing the overall number of card rejects.

Integrating Data Analytics into Quality Control Processes

To fully leverage the potential of data analytics in minimizing rejects, manufacturers must integrate it seamlessly into their quality control processes. This involves:

- Data Collection Infrastructure – Establishing a robust infrastructure for data collection, including sensors and monitoring software across the production line.
- Data Processing and Analysis – Implementing advanced data processing and analysis tools to derive actionable insights from the collected data.
- Real-Time Reporting – Setting up real-time reporting mechanisms to enable immediate response to deviations from quality standards. This ensures that corrective actions can be taken swiftly, minimizing the impact on production efficiency.
- Continuous Improvement – Creating a culture of continuous improvement by regularly reviewing and updating the data analytics system based on evolving manufacturing conditions and emerging trends in smart card technology.
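
As an illustration of the real-time reporting step, the sketch below shows one possible approach: compare a station's rolling reject rate against a fixed threshold and raise an alert when it drifts high. The window size, threshold, and data feed are assumptions, not details of any particular analytics product.

```python
# Illustrative sketch only: flag a station whose rolling reject rate drifts
# above a threshold. Window size, threshold, and data feed are assumptions.
from collections import deque

class RejectRateMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.02):
        self.results = deque(maxlen=window)  # most recent pass/fail outcomes
        self.threshold = threshold           # alert above a 2% reject rate

    def record(self, rejected: bool) -> None:
        self.results.append(1 if rejected else 0)

    def alert(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # wait for a full window before judging
        return sum(self.results) / len(self.results) > self.threshold

monitor = RejectRateMonitor()
# For each card result streamed from a station sensor:
monitor.record(rejected=False)
if monitor.alert():
    print("Reject rate above baseline - review this station's error codes")
```
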

Transformative Impact of Data Analytics − A Recent Opportunity  

Recently, a large financial client solicited our assistance in analyzing their operational health and overall quality. They simply wanted to track, report, and plan for operational efficiency where minimal measures were currently in place at their issuance facility. They needed clear insight and root cause diagnostics to assess the what, when, and how of production inefficiencies hindering their operational plan. In addition, they needed a solution that helped them maintain complete control of their end-to-end issuance production, from supplies and rejects to idle time and availability.

After a thorough analysis using Entrust’s Adaptive Issuance Production Analytics Solution (PAS), it was determined that the customer’s biggest Overall Equipment Effectiveness (OEE) impact areas were machine utilization (Availability) and the number of reject cards (Quality). The analysis provided our client with a recommended action plan, including anticipated improvements based on their operational environment, specific machine layout, and configurations. Both outcomes were pivotal in increasing overall quality through deeper data interrogations.
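
For context, OEE itself is standard manufacturing arithmetic: the product of availability, performance, and quality. The sketch below shows that calculation with invented numbers; it does not reflect PAS internals or the client's actual figures.

```python
# Standard OEE formula: Availability x Performance x Quality.
def oee(availability: float, performance: float, quality: float) -> float:
    return availability * performance * quality

# Invented example: a line running 85% of planned time, at 95% of rated
# speed, with a 3% reject rate.
print(f"OEE = {oee(0.85, 0.95, 1.0 - 0.03):.1%}")  # ~78.3%
```

This is why the two findings below matter: availability and quality multiply, so a gain in either lifts the whole OEE score.
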

Outcome #1 − Availability

In this engagement, our digital intelligence identified compelling trend information in two areas. The first was a noticeable gap in the amount of idle time between machines, which led to further investigation into the operators themselves at each station. Enabling the “idle time tracking” feature gave a complete picture of all operator activities between runs and during pause time, revealing a sizable disparity from machine to machine. This helped the client address critical labor differences and immediately laid the foundation to drive a continuous improvement plan, initiating best practices across the production floor.

Outcome #2 − Quality

The second finding determined that a significant percentage of rejects were all traced to a limited number of error codes. Similar to the first outcome, the client was able to investigate through a focused, root-cause analysis, driving their investigation quickly to assess and pinpoint the failures. The result was a significantly improved reject rate. Without a focused analytics-based assessment of their environment, this client was left to guess how and why inefficiencies were happening. The dynamic PAS dashboard was instrumental in identifying these inefficiencies and leading improvement plans for a more stable, healthy, and efficient operation.

In the dynamic landscape of high-volume card production, where precision and efficiency are paramount to the bottom line, leveraging data analytics is no longer a nice-to-have, but rather a necessity. Manufacturers that embrace data-driven approaches to quality control can minimize card rejects, enhance operational efficiency, and ultimately deliver superior smart card products to the market. As technology continues to advance, the integration of data analytics into manufacturing processes will play an increasingly pivotal role in shaping the future of high-volume card production. By harnessing the power of data, manufacturers can stay ahead of the competition and ensure that every smart card produced meets the highest standards of quality.

Learn more about the Entrust Adaptive Issuance™ Production Analytics Solution and how it can aid in operational efficiency using digital intelligence, data analytics, and advanced technologies essential for smart card manufacturing.

The post Using Data Analytics to Minimize Rejects in High-Volume Card Production appeared first on Entrust Blog.


Civic

Upgrading to a Better Digital ID System


Full names, email addresses, mailing address, phone numbers, dates of birth, Social Security numbers, account numbers and phone passcodes may have all been compromised in a recent data breach that affected more than 70 million people. It’s the kind of devastating data breach that makes you wonder why digital identity is so broken. And, it’s […]

The post Upgrading to a Better Digital ID System appeared first on Civic Technologies, Inc..


KuppingerCole

May 22, 2024: A Bridge to the Future of Identity: Navigating the Current Landscape and Emerging Trends

In an era defined by digital transformation, the landscape of identity and access management (IAM) is evolving at an unprecedented pace, posing both challenges and opportunities for organizations worldwide. This webinar serves as a comprehensive exploration of the current state of the identity industry, diving into key issues such as security, compliance, and customer experience. Modern technology offers innovative solutions to address the complexities of identity management.

Wednesday, 03. April 2024

KuppingerCole

Road to EIC: Leveraging Reusable Identities in Your Organization

In the realm of customer onboarding, the prevailing challenges are manifold. Traditional methods entail redundant data collection and authentication hurdles, contributing to inefficiencies and frustrations for both customers and businesses. Moreover, siloed systems exacerbate the issue, leading to fragmented user experiences that impede smooth onboarding processes and hinder operational agility.

In today's digital landscape, the need for streamlined onboarding is paramount. Decentralized Identity standards present a solution by enabling reusable identities. This approach not only enhances security but also simplifies the onboarding journey, offering a seamless and efficient experience for both customers and businesses.

Join us for this “Road to EIC” virtual fireside chat where we:

- Discuss how Decentralized Identity standards optimize customer onboarding.
- Explore the business benefits of streamlined processes and enhanced security.
- Learn why reusable identities do not break your business systems and processes.
- Discuss implications for customer empowerment and digital transformation.
- Learn strategies for leveraging reusable identities in your organization's ecosystem.


Anonym

Will Quantum Computers Break the Internet? 4 Things to Know


The short answer is yes. The long answer is they will, but quick action could ease the damage. 

Quantum computers harness the laws of quantum mechanics to quickly solve problems too complex for classical computers.  

“Complex problems” are ones with so many variables interacting in complicated ways that no classical computer at any scale could solve them—at least not within tens of thousands or even millions of years. 

IBM gives the example of a classical computer being able to sort through a big database of molecules, but struggling to simulate how those molecules behave. 

It’s these complex problems that the world would enormously benefit from solving and that quantum computers are being developed to handle.  

Use cases for quantum computing are emerging across industries for simulations (e.g. simulating molecular structures in drug discovery or climate modelling) and optimization (e.g. optimizing shipping routes and flight paths, enhancing machine learning algorithms, or developing advanced materials).  

But the quantum age won’t be all upside. The quantum threats you might have read about are real. Here are 4 things you must know: 
 

1. Quantum computers present a massive cyber threat to the world’s secured data 


Since quantum computers can solve complex computational problems far faster than any classical computer, the day will come when they will be sufficiently powerful and error-resistant to break conventional encryption algorithms (RSA, DSS, Diffie-Hellman, TLS/SSL, etc.) and expose the world’s vast stores of secured data.  

The future technology will use what’s known as Shor’s algorithm and other quantum algorithms to break all public key systems that rely on integer factorization and related hard mathematical problems, rendering these conventional encryption algorithms obsolete and putting at risk global communications, stored data, and networks.  
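
To see why this matters, consider textbook RSA with toy numbers: the public key is safe only while the modulus is hard to factor. Anyone who recovers the two secret primes can derive the private exponent and decrypt everything; Shor's algorithm is what would make that factoring step fast on a large quantum computer. A minimal sketch:

```python
# Toy RSA (numbers far too small to be secure) showing that factoring the
# public modulus n hands an attacker the private key.
p, q = 61, 53            # secret primes; an attacker normally sees only n, e
n = p * q                # 3233, the public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # trivially computed once p and q are known
d = pow(e, -1, phi)      # private exponent recovered from the factorization

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # factoring n => full decryption
```
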

All protected financial transactions, trade secrets, health information, critical infrastructure networks, classified databases, blockchain technology, satellite communications, supply chain information, defence and national security data, and more, will be vulnerable to attack. 

This video explains exactly how quantum computers will one day “break the internet”: 

 
2. The cyber threat from quantum computing is now urgent  

If a recent Chinese report proves accurate, “Q-Day”—the point at which large quantum computers will break the world’s encryption algorithms—could come as soon as 2025. Until now, estimates had put it 5 to 20 years away. 

Bad actors, as well as nation-states such as Russia and China, are already intercepting and stockpiling data for “steal now, decrypt later” (SNDL) attacks by future quantum computers, and experts are urging organizations to pay attention and prepare now.  

The financial sector is particularly vulnerable and will require rapid development of robust quantum communication and data protection regimes. The transition will take time and, with Q-Day already on the immediate horizon, experts agree there’s no time to waste.  


3.  The world is already mobilizing against quantum threats  

Governments and industry have had decades to plan their defence against the encryption-busting potential of quantum computers, and now things are heating up. 

Alibaba, Amazon, IBM, Google, and Microsoft have already launched commercial quantum-computing cloud services, and in December 2023 IBM launched its next iteration of a quantum computer, IBM Quantum System Two, the most powerful known example of the technology (but still not there in terms of the power required to crack current encryption techniques). 

Importantly, the US National Institute of Standards and Technology (NIST) will this year release four post-quantum cryptographic (PQC) standards “to protect against future, potentially adversarial, cryptanalytically-relevant quantum computer (CRQC) capabilities. A CRQC would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used to protect information systems today.”  

The goal is to develop cryptographic systems that are secure against both quantum and classical computers and can interoperate with existing communications protocols and networks. 

4. Organizations are being urged to prepare now 

In late August 2023, the US Government published its quantum readiness guide with advice for organizations from the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the National Institute of Standards and Technology (NIST) about how to proactively develop and build capabilities to secure critical information and systems from being compromised by quantum computers. 

The advice for organizations, particularly those supporting critical infrastructure, is in four parts: 

1. Establish a quantum readiness roadmap.
2. Engage with technology vendors to discuss post-quantum roadmaps.
3. Conduct an inventory to identify and understand cryptographic systems and assets.
4. Create migration plans that prioritize the most sensitive and critical assets.
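
For the inventory step, a starting point might look like the sketch below: it uses the Python cryptography package to report the public-key algorithm behind each PEM certificate passed on the command line, so quantum-vulnerable RSA and ECDSA usage can be located and prioritized. A real inventory would also need to cover protocols, libraries, and hardware.

```python
# Sketch: report which public-key algorithms your PEM certificates use.
# Certificate paths are supplied by the caller; expand to suit your estate.
import sys
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

for path in sys.argv[1:]:
    with open(path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"        # quantum-vulnerable
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"ECDSA/{key.curve.name}"    # quantum-vulnerable
    else:
        algo = type(key).__name__
    print(f"{path}: {algo}, expires {cert.not_valid_after:%Y-%m-%d}")
```
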

The US Government says it’s urging immediate action since “many of the cryptographic products, protocols, and services used today that rely on public key algorithms (e.g., Rivest-Shamir-Adleman [RSA], Elliptic Curve Diffie-Hellman [ECDH], and Elliptic Curve Digital Signature Algorithm [ECDSA]) will need to be updated, replaced, or significantly altered to employ quantum-resistant PQC algorithms, to protect against this future threat.” 

“Organizations are encouraged to proactively prepare for future migration to products implementing the post-quantum cryptographic standards.” 

Alongside the readiness guide is a game from the CISA, designed to help organizations across the critical infrastructure community identify actionable insights about the future and emerging risks, and proactively develop risk management strategies to implement now.  

The clear message in all the government advice and industry action is to prepare now so that your organization can make a seamless transition when quantum computing becomes reality.  

As Rob Joyce, Director of NSA Cybersecurity, says: “The transition to a secured quantum computing era is a long-term intensive community effort that will require extensive collaboration between government and industry. The key is to be on this journey today and not wait until the last minute.” 

One CSO writer sums it up this way: “This is not a light lift, it is indeed a heavy lift, yet a necessary lift. Sitting on the sidelines and waiting is not an option.” 

 
Is your organization already planning for quantum cryptography?  

Read the US Government’s readiness guide

The post Will Quantum Computers Break the Internet? 4 Things to Know appeared first on Anonyome Labs.


Microsoft Entra (Azure AD) Blog

Microsoft Entra resilience update: Workload identity authentication


Microsoft Entra is not only the identity system for users; it’s also the identity and access management (IAM) system for Azure-based services, all internal infrastructure services at Microsoft, and our customers’ workload identities. This is why our 99.99% service-level promise extends to workload identity authentication, and why we continue to improve our service’s resilience through a multilayered approach that includes the backup authentication system. 

 

In 2021, we introduced the backup authentication system as an industry-first innovation that automatically and transparently handles authentications for supported workloads when the primary Microsoft Entra ID service is degraded or unavailable. Through 2022 and 2023, we continued to expand the coverage of the backup service across clouds and application types. 

 

Today, we’ll build on our resilience blogpost series by going further in sharing how workload identities gain resilience from the regionally isolated authentication endpoints as well as from the backup authentication system. We’ll explore two complementary methods that best fit our regional-global infrastructure. One example of workload identity authentication is when an Azure virtual machine (VM) authenticates its identity to Azure Storage. Another example is when one of our customers’ workloads authenticates to application programming interfaces (APIs).  

 

Regionally isolated authentication endpoints 

 

Regionally isolated authentication endpoints provide region-isolated authentication services to an Azure region. All frequently used identities will authenticate successfully without dependencies on other Azure regions. Essentially, they are the primary endpoints for Azure infrastructure services as well as the primary endpoints for managed identities in Azure (Managed identities for Azure resources - Microsoft Entra ID | Microsoft Learn). Managed identities help prevent out-of-region failures by consolidating service dependencies and improve resilience by handling certificate expiry, rotation, and trust.  

 

This layer of protection and isolation does not need any configuration changes from Azure customers. Key Azure infrastructure services have already adopted it, and it’s integrated with the managed identities service to protect the customer workloads that depend on it. 

 

How regionally isolated authentication endpoints work 

 

Each Azure region is assigned a unique endpoint for workload identity authentication. The region is served by a regionally collocated, special instance of Microsoft Entra ID. The regional instance relies on caching metadata (for example, directory data that is needed to issue tokens locally) to respond efficiently and resiliently to the workload identity’s authentication requests. This lightweight design reduces dependencies on other services and improves resilience by allowing the entire authentication to be completed within a single region. Data in the local cache is proactively refreshed. 

 

The regional service depends on Microsoft Entra ID's global service to update and refill caches when it lacks the data it needs (a cache miss) or when it detects a change in the security posture for a supported service. If the regional service experiences an outage, requests are served seamlessly by Microsoft Entra ID’s global service, making the regional service interruption invisible to the customers.  
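
Conceptually, this regional/global interplay is a cache-aside pattern. The sketch below is only an illustration of that pattern, not Microsoft's implementation: authentication is served from regionally cached directory metadata, and a cache miss falls through to the global service and refills the cache. All names are hypothetical.

```python
# Illustrative cache-aside sketch of a regionally isolated endpoint.
# This is not Entra ID code; names and data shapes are invented.
class GlobalService:
    def fetch(self, identity: str) -> dict:
        return {"identity": identity, "metadata": "directory data for tokens"}

class RegionalAuthEndpoint:
    def __init__(self, global_service: GlobalService):
        self.global_service = global_service
        self.cache: dict[str, dict] = {}  # proactively refreshed in the real design

    def authenticate(self, identity: str) -> str:
        metadata = self.cache.get(identity)
        if metadata is None:                        # cache miss
            metadata = self.global_service.fetch(identity)
            self.cache[identity] = metadata         # refill for future requests
        return f"token-for-{metadata['identity']}"  # issued entirely in-region

endpoint = RegionalAuthEndpoint(GlobalService())
print(endpoint.authenticate("vm-workload-01"))  # first call fills the cache
```
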

 

Performant, resilient, and widely available 

 

The service has proven itself since 2020 and now serves six billion requests per day across the globe. The regional endpoints, working with global services, exceed a 99.99% SLA. The resilience of Azure infrastructure is further protected by workload-side caches kept by Azure client SDKs. Together, the regional and global services have managed to make most service degradations undetectable by dependent infrastructure services. Post-incident recovery is handled automatically. Regional isolation is supported by the public cloud and all Sovereign Clouds. 

 

Infrastructure authentication requests are processed by the same Azure datacenter that hosts the workloads along with their co-located dependencies. This means that endpoints that are isolated to a region also benefit from performance advantages. 

 

 

Backup authentication system to cover workload identities for infrastructure authentication 

 

For workload identity authentication that does not depend on managed identities, we’ll rely on the backup authentication system to add fault-tolerant resilience.  In our blogpost from November 2021, we explained the approach for user authentication which has been generally available for some time. The system operates in the Microsoft cloud but on separate and decorrelated systems and network paths from the primary Microsoft Entra ID system. This means that it can continue to operate in case of service, network, or capacity issues across many Microsoft Entra ID and dependent Azure services. We are now applying that successful approach to workload identities. 

 

Backup coverage of workload identities is currently rolling out systematically across Microsoft, starting with Microsoft 365’s largest internal infrastructure services in the first half of 2024. Microsoft Entra ID customer workload identities’ coverage will follow in the second half of 2025. 

 

 

Protecting your own workloads 

 

The benefits of both regionally isolated endpoints and the backup authentication system are natively built into our platform. To further optimize the benefits of current and future investments in resilience and security, we encourage developers to use the Microsoft Authentication Library (MSAL) and leverage managed identities whenever possible. 
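
In practice, that guidance amounts to a few lines of code. This sketch uses the azure-identity Python package (which builds on MSAL) to acquire a token for Azure Storage with a managed identity; it assumes the code runs on an Azure resource that has a managed identity assigned.

```python
# Acquire a token via the resource's managed identity; no secrets in code.
# Assumes an Azure VM/App Service with a managed identity assigned.
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()
token = credential.get_token("https://storage.azure.com/.default")  # Storage scope
print(f"token acquired, expires at {token.expires_on}")  # epoch seconds
```

Because the credential is issued by the platform, calls like this automatically ride on the regionally isolated endpoints and, as coverage rolls out, the backup authentication system.
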

 

What’s next? 

 

We want to assure our customers that our 99.99% uptime guarantee remains in place, along with our ongoing efforts to expand our backup coverage system and increase our automatic backup coverage to include all infrastructure authentication—even for third-party developers—in the next year. We’ll make sure to keep you updated on our progress, including planned improvements to our system capacity, performance, and coverage across all clouds.  

 

Thank you, 

Nadim Abdo  

CVP, Microsoft Identity Engineering  

 

 

Learn more about Microsoft Entra: 

- Related blog post: Advances in Azure AD resilience
- See recent Microsoft Entra blogs
- Dive into Microsoft Entra technical documentation
- Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
- Join the conversation on the Microsoft Entra discussion space
- Learn more about Microsoft Security

Tokeny Solutions

Introducing Leandexer: Simplifying Blockchain Data Interaction


Product Focus

Introducing Leandexer: Simplifying Blockchain Data Interaction

This content is taken from the monthly Product Focus newsletter in March 2024.

A few years ago, Tokeny encountered some challenges in maintaining and scaling its infrastructure as blockchain data indexing can be quite complex with advanced smart contracts. We were limited by existing indexing tools like The Graph and decided to invest massively in the development of our own indexer solution for our tokenization platform.

Issues such as unsynchronized blockchain data and disruptions to other operations during token and event indexing made those tools costly to maintain and hard to use. With our in-house indexer, these challenges were effectively resolved.

Opening Our Indexer to Third-Parties

This experience led us to realize that many other companies shared similar frustrations with existing indexing solutions, regardless of their size or industry. Recognizing the widespread need for a reliable indexer solution, we decided to launch Leandexer, a standalone version of our in-house indexer, available as a SaaS solution.

What is Leandexer?

Leandexer.com offers a blockchain indexer-as-a-service solution, providing live streams of blockchain data for both individuals and businesses, on any EVM-compatible blockchain. By offering a user-friendly platform and a robust API, it enables the setup of alert notifications from blockchains, such as events triggered by smart contracts. With its no-code approach, Leandexer simplifies blockchain interaction for everyone, regardless of technical expertise. Crucially, it guarantees uninterrupted access to up-to-date blockchain data feeds without downtime, all while ensuring low maintenance costs.

Having already processed over 3 billion blockchain events via the Tokeny platform in the past few years, Leandexer has proven its reliability and efficiency. We are now getting ready to make the solution available to developers, traders, and researchers.

How does Leandexer work?

1. Choose a Blockchain network: Any EVM blockchain network.
2. Input a Smart Contract address: Input the smart contract you want to track.
3. Select Events: Specify the events you want to monitor, like transfers, deposits, etc.
4. Activate Alert Channels: Select your preferred notification methods, such as webhooks, emails, or Slack.
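
Since Leandexer's own API isn't shown here, the sketch below illustrates the underlying mechanics with web3.py (v6-style API): poll a contract's Transfer events and forward each one to a webhook. The RPC URL, contract address, and webhook URL are placeholders.

```python
# Sketch of EVM event monitoring with web3.py, not Leandexer's SDK.
import time
import requests
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # any EVM network
TRANSFER_ABI = [{
    "anonymous": False, "name": "Transfer", "type": "event",
    "inputs": [
        {"indexed": True, "name": "from", "type": "address"},
        {"indexed": True, "name": "to", "type": "address"},
        {"indexed": False, "name": "value", "type": "uint256"},
    ],
}]
contract = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=TRANSFER_ABI,
)

event_filter = contract.events.Transfer.create_filter(fromBlock="latest")
while True:
    for event in event_filter.get_new_entries():  # new logs since last poll
        requests.post("https://hooks.example.org/alert", json=dict(event["args"]))
    time.sleep(5)
```

A hosted indexer's value is running this loop reliably at chain scale: handling reorgs, backfilling history, and keeping the feed alive without the consumer operating nodes.
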

We are now opening a private beta for selected partners. Don’t hesitate to contact us if you are interested in trying the system or would like to know more.


Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us


The post Introducing Leandexer: Simplifying Blockchain Data Interaction appeared first on Tokeny.


Microsoft Entra (Azure AD) Blog

Microsoft Entra adds identity skills to Copilot for Security


Today we announced that Microsoft Copilot for Security will be generally available worldwide on April 1. The following new Microsoft Entra skills will be available in the standalone Copilot for Security experience: User Details, Group Details, Sign-in Logs, Audit Logs, and Diagnostic Logs. User Risk Investigation, a skill embedded in Microsoft Entra, will also be available in public preview.  

  

These skills help identity admins protect against identity compromise through providing identity context and insights for security incidents and helping to resolve identity-related risks and sign-in issues. We’re excited to bring new identity capabilities to Copilot for Security and help identity and security operators protect at machine speed.  

 

Identity skills in Copilot for Security 

 

Let's take a closer look at what each of these new Entra Skills in Copilot for Security do to help identity professionals secure access, while easily integrating into any admin's daily workflow via natural language prompts: 

 

User details can quickly surface context on any user managed in Entra, such as username, location, job title, contact information, authentication methods, the account creation date, and account status. Admins can prompt Copilot with phrases like "tell me more about this user”, “list the active users created in the last 5 days”, “what authentication methods does this user have”, and “is this user’s account enabled” to pull up this kind of information in a matter of seconds. 

 

Group details can summarize details on any group managed in Entra.  Admins can ask Copilot questions like “who is the owner of group X?”, “tell me about the group that starts with XYZ”, and “how many members are in this group?” for immediate context. 

 

Sign-in logs can highlight information about sign-in logs and conditional access policies applied to your tenant to assist with identity investigations and troubleshooting. Admins must simply instruct their Copilot to “show me recent sign-ins for this user”, “show me the sign-ins from this IP address”, or “show me the failed sign-ins for this user.” 

   

Audit logs can help isolate anomalies associated with audit logs, including changes to roles and access privileges. Admins just have to ask Copilot to “show me audit logs for actions initiated by this user” or “show me the audit logs for this kind of event”. 

 

An identity admin, who has identified a risky user under the username ‘rashok’, asks Copilot for the March 5th audit logs for ‘rashok’ to discover what actions that user took while at a heightened risk of compromise.

 

Diagnostic logs can help assess the health and completeness of your tenant's policy configurations. This helps ensure sign-in and audit logs are correctly set up, and that there are no gaps in the log collection process. Admins can ask “what logs are being collected in my tenant” or “are audit logs enabled” to quickly remediate any gaps. 

 

Learn more in our documentation about these new Entra Skills in Copilot for Security.

Using Copilot in Entra for risk investigation 

 

To get a better picture of how Copilot for Security can increase the speed at which you respond to identity risks, let’s imagine a scenario in which a user is flagged for having a high-risk level due to several abnormal sign-in attempts. With the User Risk Investigation skill in Microsoft Entra, available in public preview with Copilot for Security, admins can get an analysis of the user risk level coupled with recommendations on how to mitigate an incident and resolve the situation:

 

An identity admin notices that a user has been flagged as high risk due to a series of abnormal sign-ins. With Copilot for Security, the admin can quickly investigate and resolve the risk by clicking on the user in question to receive an immediate summary of risk and instructions for remediation.

 

Copilot summarizes in natural language why the user risk level was elevated. Then, Copilot provides an actionable list of steps to help nullify the risk and close the alert. Finally, Copilot provides a series of recommendations an identity admin can take to automate the response to identity threats, minimizing exposure to a compromised identity. 

 

Learn more in our documentation about the User Risk Investigation skill in Microsoft Entra.

 

How to use Copilot for Security 

 

We are introducing a provisioned pay-as-you-go licensing model that makes Copilot for Security accessible to a wider range of organizations than any other solution on the market. With this flexible, consumption-based pricing model, you can get started quickly, then scale your usage and costs according to your needs and budget. Copilot for Security will be available for purchase April 1, 2024. Connect with your account representative now so your organization can be among the first to realize the incredible benefits. 

 

Copilot for Security helps security and IT teams transition into the age of AI and strengthen their skillsets. This is a huge milestone towards empowering organizations with generative AI tools, and we are so proud to work alongside our customers and partners to bring you a better way to secure identities and access for everyone, to everything. 

 

Sarah Scott, 

Principal Manager, Product Management 

 

 

Learn more about Microsoft Entra: 

- See recent Microsoft Entra blogs
- Dive into Microsoft Entra technical documentation
- Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
- Join the conversation on the Microsoft Entra discussion space
- Learn more about Microsoft Security

Elliptic

Crypto regulatory affairs: The US Treasury’s intense week of crypto-related sanctions actions

During the last week of March the US government had its busiest week ever when it comes to imposing financial sanctions involving cryptoasset activity. 


Microsoft Entra (Azure AD) Blog

Microsoft Entra: Top content creators to follow


You’re probably familiar with Microsoft Entra documentation and What's new / Release notes for Entra ID. And perhaps you’ve also explored training for Microsoft Entra, Microsoft Certification for identity and access management, or Microsoft Security on YouTube. 

 

Beyond these official channels, an incredible community of talented identity practitioners and passionate Microsoft employees are also sharing their knowledge so that you can get the most from Microsoft Entra. I hope you’ll review the list and comment if I missed any other good ones! 

 

Links below are to external sites and do not represent the opinions of Microsoft. 

 

Andy Malone 

Microsoft Entra videos from Andy Malone 

 

Microsoft MVP Andy Malone is a well-known technology instructor, consultant, and speaker, and in 2023 was awarded “best YouTube channel” by the European SharePoint, Office 365 & Azure Conference (ESPC). Last summer’s Goodbye Azure AD, Hello Entra ID was a big hit, and he’s continued the trend with Goodbye VPN! Hello Microsoft Global Secure Access, and Goodbye Passwords! Hello Passkeys. Just to prove his titles don’t all start with “goodbye”, I’ll also recommend Entra ID NEW Guest & External Access Features YOU Need to Know! 

 

Daniel Bradley 

Ourcloudnetwork.com 

 

In 2023, Daniel Bradley was awarded the Microsoft MVP award in the Security category. His blogs focus on programmatic management of Microsoft 365 and Microsoft Entra through PowerShell and Security. 

 

To sample his content, check out How to create and manage access reviews for group owners, How to force a password change in Microsoft 365 without password reset, or How to Apply Conditional Access to Protected Actions in Microsoft Entra. 

 

Daniel Chronlund 

danielchronlund.com 

 

Daniel Chronlund is a Microsoft Security MVP, Microsoft 365 security expert, and consultant. He writes about cloud security, Zero Trust implementation, Conditional Access, and similar topics, plus shares PowerShell scripts and Conditional Access automation tools. Around here, we’re big fans of passwordless and phishing-resistant multifactor authentication, so we’re especially keen on this post: “Unlocking” the Future: The Power of Passkeys in Online Security.   

 

Lukas Beren 

Cybersecurity World 

 

Lukas Beren works at Microsoft as a Senior Cybersecurity Consultant on the Detection and Response Team (DART).  

 

“DART’s mission is to respond to compromises and help our customers become cyber-resilient,” said Lukas. “So I’m quite passionate about cybersecurity, and I regularly use Microsoft Entra ID along with other Microsoft Security tools.” 

 

Recent blogs include Understanding Primary Refresh Tokens in Microsoft Entra ID, Understanding Entra ID device join types, and Password expiration for Entra ID synchronized accounts. 

 

John Savill’s Technical Training 

John Savill's Technical Training  

 

John Savill is a Chief Architect in Customer Support for Microsoft with a hobby of sharing his wealth of knowledge via whiteboarding technical concepts of Microsoft Entra, Azure, DevOps, PowerShell, and more. Recent Microsoft Entra topics include Conditional Access Filters and Templates, Microsoft Entra Internet Access, and Microsoft Entra ID Governance.  

 

When John co-starred in the Microsoft Entra breakout session at Ignite 2023, one commenter proclaimed, “John Savill is the GOAT” (that’s Greatest of All Time, of course, not the farm animal ;)).  

 

Merill Fernando 

Entra.News and Merill on LinkedIn 

 

Merill Fernando is part of the Microsoft Entra customer acceleration team, helping complex organizations deploy Entra successfully. Every week he curates Entra.News, a weekly newsletter of links to articles, blog posts, videos, and podcasts about Microsoft Entra from around the web.   

 

“I wanted a way to share the lessons I’ve learned, but I know not everyone has the luxury of reading long posts or detailed docs,” said Merill. “So I try to break down complex topics into short, easy to understand posts on social media.”  

 

Merill’s Microsoft Entra mind map is pretty famous in our virtual hallways as the best at-a-glance look at the product line capabilities. He’s also published helpful overviews of managing passwords with Microsoft Entra and How single sign-on works on Macs and iPhones. 

 

Microsoft Mechanics 

Microsoft Entra on Microsoft Mechanics 

 

Microsoft Mechanics is Microsoft's official video series for IT Pros, Solution Architects, Developers, and Tech Enthusiasts. Jeremy Chapman and his team host Microsoft engineers who show you how to get the most from the software, service, and hardware they built. Recent Microsoft Entra topics include Security Service Edge (SSE), migrating from Active Directory Federation Services to Microsoft Entra ID, a beginner’s tutorial for Microsoft Entra ID, and automating onboarding and offboarding tasks.     

 

Thomas Naunheim 

cloud-architekt.net/blog 

 

Thomas Naunheim is a Cyber Security Architect in Germany, a Microsoft MVP, and a frequent community speaker on Azure and Microsoft Entra. His recent blog series on Microsoft Entra Workload ID highlights the need for organizations to manage non-human (workload) identities at scale, and offers guidance on deployment, lifecycle management, monitoring, threat detection, and incident response. 

 

Tony Redmond 

Office365ITPros.com 

 

Tony Redmond is the lead author of the legendary Office 365 for IT Pros. His prolific blog includes recent gems How to Update Tenant Corporate Branding for the Entra ID Sign-in Screen with PowerShell, How to Use PowerShell to Retrieve Permissions for Entra ID Apps, and How to Report Expiring Credentials for Entra ID Apps. 

 

Shehan Perera 

emsroute.com 

 

Shehan Perera is a Microsoft MVP in Enterprise Mobility who is passionate about modern device management practices, identity and access management, and identity governance. Check out his recent infographic for how to migrate MFA and SSPR policies to the converged authentication methods policy. And his passion for identity governance really shines through in this deep dive to adopting Microsoft Entra ID Governance.  

 

Suryendu Bhattacharyya 

suryendub.github.io 

 

Suryendu Bhattacharyya earned the Microsoft Entra Community Champion badge in 2023 for passion and expertise in his technical knowledge of Microsoft Entra products. Check out his helpful how-to guides, including: Securing Legacy Applications with Entra Private Access and Conditional Access, Deploy Conditional Access Policies for a Zero Trust Architecture Framework, and Keep Your Dynamic Groups Compliant by Microsoft Graph Change Notifications and Azure Event Grid. 

 

Let us know if this list is helpful – and any sources I missed – in the comments. Thank you! 

 

Nichole Peterson 

 

 

Learn more about Microsoft Entra: 

- See recent Microsoft Entra blogs
- Dive into Microsoft Entra technical documentation
- Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
- Join the conversation on the Microsoft Entra discussion space and Twitter
- Learn more about Microsoft Security

Entrust

Don’t Leave the Door Open to Threat Actors

We came across this recent Joint Cybersecurity Advisory paper: “Threat Actor Leverages Compromised Account of... The post Don’t Leave the Door Open to Threat Actors appeared first on Entrust Blog.

We came across this recent Joint Cybersecurity Advisory paper: “Threat Actor Leverages Compromised Account of Former Employee to Access State Government Organization,” co-authored by the Cybersecurity & Infrastructure Security Agency (CISA) and the Multi-State Information Sharing & Analysis Center (MS-ISAC). The topic strikes a familiar chord, yet we both appreciate the thorough analysis provided by the authors to educate cybersecurity professionals on the details and mitigating factors. In our view, sharing real life experiences helps get the message across more impactfully than discussing abstract threat models and hypothetical attacks.

It makes you think … Do you know how quickly your organization responds to an employee or contractor leaving the organization? How unified are your HR and IT functions? Is your identity and access management (IAM) solution fit for the 21st century? With social engineering attacks such as phishing and man-in-the-middle (MitM) getting more sophisticated, do you have the tools in place to protect against them?

Let’s first look at the method used by the hacker described in this advisory paper and see what lessons we can learn from this attack.

Unidentified Threat Actor

The hack started with a government agency being alerted to a U.S. government employee’s credentials, host, and user information being offered for sale on the dark web. The incident response assessment determined that “an unidentified threat actor compromised network administrator credentials through the account of a former employee … to successfully authenticate to an internal virtual private network (VPN) access point … .”

The hacker then moved on to targeting the ex-employee’s on-prem environment, running several lightweight directory access protocol (LDAP) queries and then attempting to move laterally into their Azure environment.

The good news is the hacker didn’t appear to have progressed much further, presumably satisfied they had valid credentials that they could sell to other hackers to continue their nefarious acts.

The advisory paper references the MITRE ATT&CK® framework, which we’ve illustrated below.

These are the steps a threat actor would typically follow as they carry out an attack – starting at the 12 o’clock position (Reconnaissance), moving clockwise to Resource Development all the way to Impact.

NOTE: For more details about the typical stages of an attack and a comprehensive database of real threats used by adversaries, visit attack.mitre.org.

Figure 1: MITRE ATT&CK illustration of the threat actor’s modus operandi

USER1, referenced in the paper, likely followed these steps in sourcing the former employee’s credentials and then used them to access the network. Once on the network, they were able to locate a second set of credentials, labeled USER2. The advisory paper charts the progress of USER1 and USER2 through these stages as far as the Collection stage, where “the actor obtained USER2 account credentials from the virtualized SharePoint server managed by USER1.” As we mentioned, progress seems to have stalled and the paper states: “Analysis determined the threat actor did not move laterally from the compromised on-premises network to the Azure environment and did not compromise sensitive systems.”

Mitigations

What’s clear from the report is several simple errors facilitated this hack. Below, we’ve added some best practices to the MITRE ATT&CK illustration to show how to mitigate those errors.

The joint Cybersecurity Advisory paper is a reminder of how threat actors are poised and ready to exploit weaknesses in an organization’s security posture. Some straightforward security measures would’ve halted this attack before it had even started. However, we know that threat actors are evolving and implementing more sophisticated attacks. Organizations might not always leave the door open, but they might not have secured the latch and attached the door chain to bolster their security posture.

We Can Help You Lock the Door

Entrust offers a comprehensive portfolio of solutions that not only would have helped the organization that was the victim in this particular situation, but can also help other organizations protect against more sophisticated attacks being used by threat actors.

KeyControl manages keys, secrets, and certificates – including credentials. KeyControl proactively enforces security policies by whitelisting approved users and actions while also recording privileged user activity across virtual, cloud, and physical environments – creating a granular, immutable audit trail of those accessing the system.

Entrust CloudControl improves virtual infrastructure security and risk management with features such as role-based access control (RBAC), attribute-based access control (ABAC), and secondary approval (two-person rule). These are important, especially when overseeing virtualized environments on a large scale with a team of busy system administrators. CloudControl provides the necessary guardrails and control measures to ensure that your system admin team consistently applies policies across your VM estate while also mitigating against inadvertent misconfigurations.

Entrust Phishing-Resistant MFA delivers precisely as advertised. Identity continues to be the largest attack vector, with compromised credentials and phishing being the leading causes of breaches. The traditional password adds to the poor user experience and is easily compromised. Even conventional MFA methods such as SMS one-time password (OTP) and push authentication are easily bypassed by attackers.

Credential Management and Access Trends

When examining credential management and access, there are prominent trends in identity and IAM that are receiving significant attention in the office of the CEO and the boardroom.

One prominent trend is the increasing adoption of phishing-resistant passwordless adaptive biometrics authentication. This helps prevent fraud and secure high-value transactions with risk inputs that assess behavioral biometrics and look for indicators of compromise (IOCs) based on various threat intelligence feeds.

Another trend is using identity proofing to enhance security layers, seamless onboarding processes, and the integration of digital signing to provide a unified digital identity experience. Many companies are grappling with the complexities of managing multiple identity providers (IDPs) and associated processes, as well as challenges related to MFA fatigue and phishing attacks targeting OTPs via SMS or email – particularly through adversary in the middle (AiTM) attacks.

Then there’s the management of diverse cybersecurity platforms – including various IDPs, MFA solutions, identity proofing tools, and standalone digital signing platforms – that can lead to productivity bottlenecks and costly administration overheads. Employing certificate-based authentication, biometrics, and other passwordless authentication methods – combined with identity proofing and digital signing within an integrated identity solution – helps streamline operations, reduce costs, and enhance user adoption. Plus, it also helps mitigate potential vulnerabilities associated with disjointed platform connections across enterprise IT environments. It’s a lot for organizations to take on board.

Entrust phishing-resistant identity solutions provide a complete identity and access management platform and comprehensive certificate lifecycle management capabilities to help you implement high-assurance certificate-based authentication for your users and devices.

Lessons Learned

Whether your organization is on a Zero Trust journey or just looking to strengthen your security posture, the attack discussed in the Joint Cybersecurity Advisory paper that started this conversation is a reminder that the threats out there are real – and organizations need to have robust security processes and procedures in place to keep that door firmly closed.

Learn more about Entrust solutions for strong identities, protected data, and secure payments.

The post Don’t Leave the Door Open to Threat Actors appeared first on Entrust Blog.


Trinsic Podcast: Future of ID

Taylor Liggett - ID.me’s Strategy for Adoption, Monetization, and Brand for 100 Million Wallets and Beyond


In today’s episode we spoke with Taylor Liggett, Chief Growth Officer of ID.me, which is the largest reusable ID network in the United States and may be the largest private digital ID network in the world. With over 100 million user wallets and $150 million in revenue, ID.me has figured some things out about reusable ID adoption and monetization.

We talk about how reusable identity reduces the friction required to undergo a verification, and therefore expands the market. Taylor shares specific stats on conversion rates and completion times that are very interesting.

We cover a bunch of tactical topics, like:

- The education process needed to onboard relying parties
- How the go-to-market of a reusable ID product differs from a traditional transaction-based identity verification solution
- ID.me’s decision to prioritize web experiences over requiring a mobile wallet
- The business model ID.me uses to charge its customers

Taylor spoke to some of the common objections that people online and in the media tend to have with ID.me. He did a great job addressing ID.me's tie-in with government, their strategy to build consumer trust in their brand after experiencing both good and bad press, and how they’re thinking about the evolution of interoperability in the space.

You can learn more by visiting the ID.me website.

Listen to the full episode on Apple Podcasts, Spotify or find all ways to listen at trinsic.id/podcast.


KuppingerCole

Security Service Edge


by Mike Small

Digital transformation and cloud-delivered services have led to a tectonic shift in how applications and users are distributed. Protecting sensitive resources of the increasingly distributed enterprise with a large mobile workforce has become a challenge that siloed security tools are not able to address effectively. In addition to the growing number of potential threat vectors, the very scope of corporate cybersecurity has grown immensely in recent years. This has led to the challenges described below:

Ontology

Ontology Weekly Report (March 26th — April 1st, 2024)


This week at Ontology was filled with exciting developments, insightful discussions, and notable progress in our journey towards enhancing the Web3 ecosystem. Here’s a recap of our latest achievements:

🎉 Highlights

- Meet Ontonaut: We’re thrilled to introduce Ontonaut, our official mascot, who will be joining us on our journey to explore the vast universe of Ontology!

Latest Developments

- Digital Identity Insights: Geoff shared his expertise on digital identity in a new article, contributing to our ongoing discussion on the importance of decentralized identity solutions.
- Web3 Wonderings: Our latest session focused on Farcaster Frames, providing valuable insights into this innovative platform. Make sure to catch up with the recording if you missed the live discussion!
- Exploring EVMs: A new article detailing the intricacies of Ethereum Virtual Machines was published, offering a deep dive into their functionality and potential.

Development Progress

- Ontology EVM Trace Trading Function: Progressing steadily at 75%, we continue to enhance our capabilities within the EVM, aiming to bring innovative solutions to our ecosystem.
- ONT to ONTD Conversion Contract: Development is ongoing at 40%, working towards simplifying the conversion process for our users.
- ONT Leverage Staking Design: At 25%, this initiative is set to introduce new staking mechanisms, providing more flexibility and opportunities for ONT holders.

Product Development

- ONTO Welcomes Kita: We’re excited to announce that Kita is now listed on ONTO, further diversifying the range of options available to our users.

On-Chain Activity

- dApp Stability: The ecosystem continues to thrive with 177 dApps on MainNet, demonstrating the robust and dynamic nature of Ontology.
- Transaction Growth: This week saw an increase of 5,860 dApp-related transactions and 24,890 total transactions on MainNet, indicating active engagement and utilization within our network.

Community Growth

- Engaging Discussions: Our platforms, including Twitter and Telegram, have been buzzing with lively discussions on the latest developments and community interactions. We encourage everyone to join us and be part of our vibrant community.
- Telegram Discussion on DID: Led by Ontology Loyal Members, this week’s focus was “The Dawn of DID,” shedding light on the evolving landscape of digital identity and its implications.

Stay Connected

We invite our community members to stay engaged through our official channels. Your insights, participation, and feedback drive our continuous growth and innovation.

Ontology Official Website: https://ont.io/
Email: contact@ont.io
GitHub: https://github.com/ontio/
Telegram Group: https://t.me/OntologyNetwork

As we conclude another productive week, we extend our heartfelt gratitude to our community for their unwavering support and engagement. Together, we are shaping the future of Web3 and decentralized identity. Stay tuned for more updates, and let’s continue to innovate and grow together!

Ontology Weekly Report (March 26th — April 1st, 2024) was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Fission

Causal Islands: LA Community Edition

In 2023, we hosted the inaugural Causal Islands future of computing conference in Toronto. On Saturday, March 23rd, 2024, we had the first "Community Edition", a smaller grassroots one-day conference, with presentations as well as room for more unconference-style discussions, held in Los Angeles, California.

Causal Islands is about bringing together experts and enthusiasts from many different backgrounds and sharing and learning together. We are creative technologists, researchers, and builders, exploring what the future of computing can be.

The LA community edition had themes of building and running networks together, exploring the future through creativity and poetics of computing, tools for thought and other interfaces for human knowledge, emerging and rethinking social networks, generative AI as a humane tool for people, and the journey of building a more distributed web.

Join the DWeb Community

Thank you to Mai Ishikawa Sutton for being a co-organizer of the event, including support as a media partner through DWeb. We hope that many of you will join us at DWeb Camp 2024!

Continue the conversation in the Causal Islands Discord chat »

Sessions

Thank you to all of the presenters and attendees for convening these sessions together! As well as planned talks, we had facilitated discussions on Commons Networks, Decentralized Social, and AI, Truth, & Identity. Below is the list of sessions and presenters with linked presentation resources where available.

Networked Organizing for a Pluriversal Distributed Web

mai ishikawa sutton

This talk will explore the radical approaches and projects that prioritize solidarity and collective liberation in the creation and maintenance of digital network infrastructure.
Gnosis: The Community-Run Chain

John

Distinguished by its accessibility for network validation, Gnosis Chain offers a significant departure from Ethereum's 32 ETH requirement for home staking. A single GNO token is all that's needed to validate on Gnosis Chain, making it accessible for home-based validators using standard hardware.
Content Addressable Compute with the Everywhere Computer

Boris Mann

An overview of the principles behind the Everywhere Computer, deterministic functions, and verifiable, content addressed compute

https://everywhere.computer

The Future of Computing is Fiction

Julian Bleecker

This talk, "The Future of Computing is Fiction", presents a brief overview of Design Fiction and reveals how Design Fiction can serve as an approach for envisioning possible futures of computing. Through the creation of tangible artifacts that imply adjacent possible trajectories for computing's futures, Design Fiction allows us to materialize the intangible, make the non-sensical make sense, and imbue these possibilities with ambitions, desires, and dreams of what could be. Design Fiction allows us to represent challenges and technological trajectories in grounded form through powerful Design Fiction artifacts, such as advertisements, magazines, quick-start guides, FAQs, and news stories. In this talk I will briefly describe the methodologies of Design Fiction and the ways it has been used by organizations, teams, and brands to project into possible futures. I will showcase how speculative artifacts, such as industrial designs for new computing platforms or advertisements for future services, act as conduits for discussion and reflection on the evolution of computing.

https://www.nearfuturelaboratory.com

See more on YouTube

the poetics of computation

ivan zhao

in what ways is a poem a computer? how do the mechanics, the inner working of software, reflect the syntactic and beautiful nature of poetry? this talk dives into the rich history of programmers, poets, writers, and designers and how they've created new worlds with ideas, theories and examples.
Using TiddlyWiki For Personal Knowledge Curation

Gavin Gamboa

TiddlyWiki is an open-source software project initiated and maintained by Jeremy Ruston and volunteers since 2004. It is a local-first, future-proof tool for thought, task management system, storage and retrieval device, personal notebook, and so much more.

https://gavart.ist/offline/

Spatial Canvases: Towards an 'Integration Domain' for HCI

Orion Reed

Going beyond the app for interactive knowledge

Slides →

Everywhere Computer: decentralized compute

Boris Mann, Brooklyn Zelenka

Join the Fission team in a live, hands-on workshop about the Everywhere.Computer.

We'll be walking through how to set up and install a Homestar node, an IPVM protocol reference implementation written in Rust and optimized for running WebAssembly (Wasm) functions in workflows. Both data and compute functions are stored on content-addressed networks, loaded over the network, and the results are stored for future use.

https://everywhere.computer
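To make the core idea of the workshop concrete, here is a minimal sketch of content-addressed compute: a deterministic function plus its inputs hash to a stable address, and any result already stored under that address is reused instead of recomputed. This is an illustration only, not Homestar's or IPVM's actual API; the in-memory `Map` stands in for a content-addressed network.

```typescript
// Minimal sketch of content-addressed compute: hash (function name + inputs)
// into a key, and reuse any result already stored under that key.
import { createHash } from "node:crypto";

const resultStore = new Map<string, unknown>(); // stand-in for a content-addressed store

function contentAddress(fnName: string, args: unknown[]): string {
  return createHash("sha256")
    .update(JSON.stringify({ fnName, args }))
    .digest("hex");
}

function runDeterministic<T>(fnName: string, fn: (...a: any[]) => T, args: any[]): T {
  const key = contentAddress(fnName, args);
  if (resultStore.has(key)) return resultStore.get(key) as T; // identical work, cached result
  const result = fn(...args);
  resultStore.set(key, result);
  return result;
}

// The second call with identical inputs hits the store and does no recomputation.
runDeterministic("add", (a: number, b: number) => a + b, [2, 3]);
runDeterministic("add", (a: number, b: number) => a + b, [2, 3]);
```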

Towards a topological interchange format for executable notations, hypertext, and spatial canvases.

Chris Shank

In this talk we lay out a vision of an interchange format rooted in describing topological relationships and how it underpins distinctly different mediums such as executable notations, hypertext, and spatial canvases.

Presentation on TLDraw →

Our plan to build a Super App for "Everything"

Zhenya

Tech giants struggle to replicate WeChat's success due to regulatory, political, and privacy challenges. We're creating a developer-focused, open-source collaboration platform that ensures end-user data ownership, offline and realtime collaboration, and credible exit.
Intro to Open Canvas Working Group

Orion Reed

The Open Canvas Working Group is working to establish a robust file format to enable interoperability between infinite canvas tools.

The group is building atop Obsidian's JSON Canvas; participants so far include TLdraw, Excalidraw, Stately AI, KinopioClub, and DXOS.

See the announcement on Twitter.

Patterns in Data Provenance

Benedict Lau

A presentation on patterns I encountered as the Data Provenance practice lead at Hypha Worker Co-op, and how our team approached each scenario.

Slides →

Hallucinating Fast and Slow

Ryan Betts

A series of vignettes retracing one designer's 30-year random walk from QBasic button masher, through Geocities copy-paster, all the way to GPT-assisted functional 1x developer — and what that might say about our humane representations of thought, and the seeing spaces not yet achieved.
Farcaster Fever

Dylan Steck

An overview of the Farcaster protocol, a sufficiently decentralized social network, and its recent growth.

Slides →

Building The Distributed Web: Trying, failing, trying again

Hannah Howard

Best practices for building the distributed web in a way that actually works — and a sort of “lessons learned” from the last 5 years or so of not always succeeding. A look at why "the new internet" hasn't taken over yet, despite significant investment, and how we can get there still.

Tuesday, 02. April 2024

Indicio

Leading analyst — Indicio-SITA partnership ‘important in the evolution’ of decentralized identity

A new blog from Andras Cser, VP and Principal Analyst at Forrester Research, says standardized use cases, such as Indicio and SITA’s development of digital travel credentials, will drive adoption of “exciting new identity technology,” what he calls decentralized digital identity (DDID).

SITA recently announced its role as lead investor in Indicio’s Series A funding round, citing the co-innovation agreement as being key to its digital identity strategy. This means offering verifiable credential-based digital identity solutions that meet International Civil Aviation Organization (ICAO) standards for a Digital Travel Credential (DTC) to its 400-plus members and 2,500 worldwide customers, which SITA says is about 90% of the world’s airline business.

With the digital travel credential — or DTC — Indicio and SITA have applied decentralized identity technology to enable pre-authorized travel and seamless border crossing using verifiable credentials. The simplicity, speed, and security of the process applied to the often stressful experience of travel will not only drive the adoption of the technology in air travel but show the world how verifiable identity and data can be easily applied to make everything from passwordless login to banking and finance better, more secure, and faster.

“We agree with Cser,” says Heather Dahl, CEO of Indicio. “When you can solve one of the toughest security challenges — crossing a border — and solve it so that it becomes easy, frictionless, and seamless, you have the opportunity not only to scale the technology across the global travel industry and affect the lives of millions of people, but to show how this simplicity can be applied to any digital interaction that requires personal or high-value data. It is a very exciting technology, these are exciting times, and we’re going to change everything.”

Cser’s observation highlights Indicio as a leader in the development and deployment of decentralized digital identity software and solutions through its growing customer roster of global enterprises, governments, organizations, and financial institutions.

Decentralized identity and verifiable credentials are a new and transformational method for data sharing that allows information to be verified without intervention from a centralized third-party or through creating a direct integration between systems. This means data from disparate sources and systems can be easily and quickly shared directly to organizations and governments by end users to make informed decisions based on accurate, verifiable data.

The key to this, as Cser notes, is standardization: “standardized use cases will drive interoperability and usability and help grow DDID adoption.” Government contracts for IT infrastructure increasingly mandate open source and open standard-based technology over proprietary solutions because the former is easier to scale, easier to sustain, and less expensive. In practice, this means that a verifiable credential solution like the DTC can be easily adapted to other identity verification purposes because it is easy for anyone to access and use verification software combined with governance rules.

Indicio’s engineering team are key leaders of, contributors to, and maintainers of open-source projects at the Hyperledger Foundation and Decentralized Identity Foundation (DIF), and Indicio’s products and solutions align with the open standards of the World Wide Web Consortium (W3C) and Trust over IP Foundation (ToIP). With the DTC, Indicio and SITA not only followed ICAO’s standard for the DTC but also the open standards and open-source codebases that enable interoperability.

“Open standards are a decentralized identity superpower,” says Dahl, “and it is, in large part, due to the work of the open-standards Hyperledger Foundation that we have a complete technology solution that meets the needs of global enterprises and governments now and, equally important, will meet them in the future. Technology will evolve and we have to be ready for that, but we know the direction it will evolve towards: universally verifiable identity and data. It’s the only way forward that makes economic sense. That’s why we provide our customers with a universal solution — Indicio Proven®. It can meet current requirements for eIDAS, OpenID4VC, and mobile driver’s licenses but also allow the evolution, expansion, and innovation that will come — that is already coming — from business models using verifiable data.”

As 2024 continues, more global enterprises are learning about this exciting new technology and contracting with the Indicio Academy to help educate and train their workforce on the latest advancements and technologies encompassing decentralized identity.

Please reach out to our team for more information about Indicio, Indicio Proven, or the Indicio Academy.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Leading analyst — Indicio-SITA partnership ‘important in the evolution’ of decentralized identity appeared first on Indicio.


Microsoft Entra (Azure AD) Blog

Introducing new and upcoming Entra Recommendations to enhance security and productivity

Managing the myriad settings and resources within your tenant can be daunting. In an era of escalating security risks and an unprecedented global threat landscape, organizations seek trusted guidance to enhance their security posture. That’s why we introduced Microsoft Entra Recommendations to diligently monitor your tenant’s status, ensuring it remains secure and healthy. Moreover, these recommendations empower you to extract maximum value from the features offered by Microsoft Entra ID. Since the launch of Microsoft Entra Recommendations, thousands of customers have adopted these recommendations and resolved millions of impacted resources.

 

Today, we’re thrilled to announce the upcoming general availability of four recommendations, and another three recommendations in public preview. We’re also excited to share new updates on Identity Secure Score. These recommendations cover a wide spectrum, including credentials, application health, and broader security settings, equipping you to safeguard your digital estate effectively.

 

Presenting new and upcoming recommendations: Learn from our best practices

 

The following new and upcoming recommendations help improve the health and security of your applications:

  

- Remove unused credentials from applications: An application credential is used to get a token that grants access to a resource or another service. If an application credential is compromised, it could be used to access sensitive resources or allow a bad actor to move laterally, depending on the access granted to the application. Removing credentials not actively used by applications improves security posture and promotes application hygiene. It reduces the risk of application compromise and shrinks the attack surface available for credential misuse.
- Renew expiring service principal credentials: Renewing the service principal credential(s) before expiration ensures the application continues to function and reduces the possibility of downtime due to an expired credential.
- Renew expiring application credentials: Renewing the app credential(s) before expiration ensures the application continues to function and reduces the possibility of downtime due to an expired credential.
- Remove unused applications: Removing unused applications improves the security posture and promotes good application hygiene. It reduces the risk of application compromise by someone discovering an unused application and misusing it. Depending on the permissions granted to the application and the resources that it exposes, an application compromise could expose sensitive data in an organization.
- Migrate applications from the retiring Azure AD Graph APIs to Microsoft Graph: The Azure AD Graph service (graph.windows.net) was announced as deprecated in 2020 and is in a retirement cycle. It’s important that applications in your tenant, and applications supplied by vendors that are consented in your tenant (service principals), are updated to use Microsoft Graph APIs as soon as possible. This recommendation reports applications that have recently used Azure AD Graph APIs, along with more details about which Azure AD Graph APIs the applications are using.
- Migrate service principals from the retiring Azure AD Graph APIs to Microsoft Graph: The Azure AD Graph service (graph.windows.net) was announced as deprecated in 2020 and is in a retirement cycle. It’s important that service principals in your tenant, and service principals for applications supplied by vendors that are consented in your tenant, are updated to use Microsoft Graph APIs as soon as possible. This recommendation reports service principals that have recently used Azure AD Graph APIs, along with more details about which Azure AD Graph APIs the service principals are using.

 

You can find the generally available recommendations on the Microsoft Entra recommendations portal by looking for “Generally Available” under the column titled “Release Type.”
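For teams that prefer automation over the portal, the same recommendations are also exposed through the Microsoft Graph beta API (see the “List recommendations” documentation linked at the end of this post). Below is a minimal TypeScript sketch: the endpoint and property names follow the beta docs, but treat the exact response shape as an assumption, and note that token acquisition and the GRAPH_TOKEN environment variable are placeholders for illustration.

```typescript
// Sketch: list Microsoft Entra recommendations via the Microsoft Graph beta API.
// Assumes an access token with the DirectoryRecommendations.Read.All permission
// has already been acquired elsewhere.

interface Recommendation {
  id: string;
  displayName: string;
  status: string;   // e.g. "active", "completedBySystem", "dismissed"
  priority: string; // e.g. "high", "medium", "low"
}

async function listRecommendations(accessToken: string): Promise<Recommendation[]> {
  const res = await fetch("https://graph.microsoft.com/beta/directory/recommendations", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  const body = await res.json();
  return body.value as Recommendation[]; // Graph collections return items under "value"
}

// Example: log any active, high-priority recommendations for follow-up.
listRecommendations(process.env.GRAPH_TOKEN ?? "").then((recs) =>
  recs
    .filter((r) => r.status === "active" && r.priority === "high")
    .forEach((r) => console.log(`${r.displayName} (${r.id})`)),
);
```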

Changes to Secure Score: Your security analytics tool to enhance security

 

We’re happy to announce some new developments in Identity Secure Score, which functions as an indicator of how aligned you are with Microsoft’s recommendations for security. Each improvement action in Identity Secure Score is customized to your configuration, and you can easily see the security impact of your changes. We have an upcoming Secure Score recommendation in public preview to help you protect your organization from insider risk. Please see the details below:

 

Protect your tenant with Insider Risk policy: Implementing a Conditional Access policy that blocks access to resources for high-risk internal users is of high priority due to its critical role in proactively enhancing security, mitigating insider threats, and safeguarding sensitive data in real-time. Learn more about this feature here.

In addition to the new Secure Score recommendation, we have several other recommendations related to Secure Score. We strongly advise you to check your security-related recommendations if you haven't done so yet. Please see below for the current list of recommendations for secure score:  

 

- Enable password hash sync if hybrid: Password hash synchronization is one of the sign-in methods used to accomplish hybrid identity. Microsoft Entra Connect synchronizes a hash of the hash of a user’s password from an on-premises Active Directory instance to a cloud-based Microsoft Entra instance. Password hash synchronization helps by reducing the number of passwords your users need to maintain to just one. Enabling password hash synchronization also allows for leaked credential reporting.
- Protect all users with a user risk policy: With the user risk policy turned on, Microsoft Entra ID detects the probability that a user account has been compromised. As an administrator, you can configure a user risk Conditional Access policy to automatically respond to a specific user risk level.
- Protect all users with a sign-in risk policy: Turning on the sign-in risk policy ensures that suspicious sign-ins are challenged for multifactor authentication (MFA).
- Use least privileged administrative roles: Ensure that your administrators can accomplish their work with the least amount of privilege assigned to their account. Assigning users roles like Password Administrator or Exchange Online Administrator, instead of Global Administrator, reduces the likelihood of a privileged account being breached.
- Require multifactor authentication for administrative roles: Requiring MFA for administrative roles makes it harder for attackers to access accounts. Administrative roles have higher permissions than typical users. If any of those accounts are compromised, your entire organization is exposed.
- Ensure all users can complete MFA: Help protect devices and data that are accessible to these users with MFA. Adding more authentication methods, like the Microsoft Authenticator app or a phone number, increases the level of protection if another factor is compromised.
- Enable policy to block legacy authentication: Today, most compromising sign-in attempts come from legacy authentication. Older Office clients such as Office 2010 don’t support modern authentication and use legacy protocols such as IMAP, SMTP, and POP3. Legacy authentication doesn’t support MFA; even if an MFA policy is configured in your environment, bad actors can bypass these enforcements through legacy protocols. We recommend enabling a policy to block legacy authentication (see the sketch after this list).
- Designate more than one Global Admin: Having more than one Global Administrator helps if you’re unable to fulfill the needs or obligations of your organization. It’s important to have a delegate or an emergency access account that someone from your team can access if necessary. It also allows admins the ability to monitor each other for signs of a breach.
- Do not expire passwords: Research has found that when periodic password resets are enforced, passwords become less secure. Users tend to pick a weaker password and vary it slightly for each reset. If a user creates a strong password (long, complex, and without any pragmatic words present), it should remain as strong in the future as it is today. It is Microsoft’s official security position to not expire passwords periodically without a specific reason, and Microsoft recommends that cloud-only tenants set the password policy to never expire.
- Enable self-service password reset: With self-service password reset in Microsoft Entra ID, users no longer need to engage the helpdesk to reset passwords. This feature works well with Microsoft Entra dynamically banned passwords, which prevent easily guessable passwords from being used.
- Do not allow users to grant consent to unreliable applications: To reduce the risk of malicious applications attempting to trick users into granting them access to your organization’s data, we recommend that you allow user consent only for applications that have been published by a verified publisher.
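To make the legacy-authentication item above concrete, here is a hedged sketch of creating such a Conditional Access policy through Microsoft Graph. The field names follow the documented conditionalAccessPolicy schema; the token, the policy name, and the report-only-first rollout are illustrative assumptions, not part of the original post.

```typescript
// Sketch: a Conditional Access policy that blocks legacy authentication,
// created via the Microsoft Graph v1.0 conditionalAccess endpoint. Assumes an
// access token with the Policy.ReadWrite.ConditionalAccess permission.

const policy = {
  displayName: "Block legacy authentication (sketch)", // hypothetical name
  state: "enabledForReportingButNotEnforced", // report-only first, enforce later
  conditions: {
    users: { includeUsers: ["All"] },
    applications: { includeApplications: ["All"] },
    // Legacy protocols (IMAP, SMTP, POP3, older Office clients) surface as
    // the "exchangeActiveSync" and "other" client app types.
    clientAppTypes: ["exchangeActiveSync", "other"],
  },
  grantControls: { operator: "OR", builtInControls: ["block"] },
};

async function createPolicy(accessToken: string): Promise<void> {
  const res = await fetch(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(policy),
    },
  );
  if (!res.ok) throw new Error(`Policy creation failed: ${res.status}`);
}
```

Starting in report-only mode lets you observe which sign-ins would be blocked before switching the policy state to "enabled".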

 

You can find your Secure Score recommendations on the Microsoft Entra recommendations portal by adding a filter on “Category” and selecting “Identity Secure Score.”

We look forward to you leveraging these learnings and best practices in your organization. We’re constantly innovating and improving the customer experience to bring the right recommendations to the right people. In future releases, we’ll introduce new capabilities, such as email notifications to raise awareness of new recommendations and delegation to other roles, and we’ll provide more actionable recommendations so you can quickly resolve them and secure your organization.

 

Shobhit Sahay 

Learn more about Microsoft Entra: 

- What are Microsoft Entra recommendations? - Microsoft Entra ID | Microsoft Learn
- What is the identity secure score? - Microsoft Entra ID | Microsoft Learn
- How to use Microsoft Entra recommendations - Microsoft Entra ID | Microsoft Learn
- List recommendations - Microsoft Graph beta | Microsoft Learn
- List impactedResources - Microsoft Graph beta | Microsoft Learn
- See recent Microsoft Entra blogs
- Dive into Microsoft Entra technical documentation
- Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
- Join the conversation on the Microsoft Entra discussion space
- Learn more about Microsoft Security

Shyft Network

Guide to FATF Travel Rule Compliance in South Korea

South Korea has a one million won minimum threshold for the Crypto Travel Rule. Crypto businesses must register with the Korea Financial Intelligence Unit and comply with AML regulations to operate in the country. The country has enacted several laws for crypto transaction transparency and asset protection.

In South Korea, the Financial Services Commission (FSC) serves as the primary regulatory authority, overseeing the sector and ensuring compliance with anti-money laundering (AML) and combating the financing of terrorism (CFT) obligations.

In this article, we will delve into the specifics behind the regulations, starting with the background of the FATF Travel Rule in South Korea.

History of the Crypto Travel Rule

In 2021, South Korea’s Financial Services Commission revised its AML-related law to align with the guidance of the international financial watchdog, the Financial Action Task Force (FATF). With the amendments to the Act on Reporting and Using Specified Financial Transaction Information setting requirements for VASPs, the Crypto Travel Rule went into effect in South Korea in March 2022.

The following year, in June 2023, the FSC passed a new law aimed at enhancing transaction transparency, market discipline, and protection for cryptocurrency users. Under this law, the regulator has been granted the authority to supervise and inspect VASPs, as well as to impose penalties. This legislative move targets the regulation of unfair trade practices and the protection of assets.

Key Features of the Travel Rule

Under the country’s mandated AML law, both domestic and foreign VASPs are required to register with the Korea Financial Intelligence Unit (KoFIU) before commencing business operations.

To register, VASPs must obtain an Information Security Management Systems (ISMS) certification from the Korea Internet and Security Agency (KISA).

Further amendments to the AML-related law mandate the implementation of the Crypto Travel Rule for international virtual asset transfers over 1 million won (approximately $740 or €687). Any transfers above this threshold are limited to wallets verified by users and must be flagged by exchanges. Additionally, VASPs are required to verify customers’ identities and report any suspicious actions to the authorities.

Compliance Requirements

To register with the Korea Financial Intelligence Unit (KoFIU) and report their business activity, VASPs have to submit their registered company name, their representative’s details, the location of the business, contact information, and bank account details. Moreover, VASPs must adhere to all measures prescribed by the Presidential Decree.

VASPs must also comply with AML regulations, which include the collection and sharing of information regarding customers’ virtual asset transfers exceeding KRW 1 million:

- Name of the originator and beneficiary

- Wallet address of originator and beneficiary

Should the beneficiary VASP or authorities request further information, the following must be provided within three working days of the request:

- Originator’s customer identification number, personal document identity number, or foreigner registration number
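As a rough illustration of these requirements, the sketch below models the data a Korean VASP might collect and the threshold check it might run. All field names are hypothetical; this is not an official KoFIU, FATF, or Shyft schema.

```typescript
// Illustrative model of Travel Rule data for KRW-denominated transfers.
// Field names are invented for this sketch.

interface TravelRulePayload {
  originatorName: string;
  beneficiaryName: string;
  originatorWallet: string;
  beneficiaryWallet: string;
  amountKrw: number;
  // Supplied within three working days only if the beneficiary VASP or the
  // authorities request it: customer ID number, personal document identity
  // number, or foreigner registration number.
  originatorIdNumber?: string;
}

// Transfers over KRW 1,000,000 trigger the information-sharing obligation.
const TRAVEL_RULE_THRESHOLD_KRW = 1_000_000;

function requiresTravelRuleExchange(amountKrw: number): boolean {
  return amountKrw > TRAVEL_RULE_THRESHOLD_KRW;
}
```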

In addition to Crypto Travel Rule compliance, AML regulations require VASPs to appoint a money laundering reporting officer (MLRO) and develop and implement comprehensive internal AML policies and procedures. These procedures necessitate conducting a company-wide risk assessment and performing Customer Due Diligence (CDD), along with Simplified Due Diligence and Enhanced Due Diligence, depending on the specific situation.

Moreover, AML obligations involve rigorous transaction monitoring, sanctions screening, record keeping, and reporting suspicious activity and transactions.

Impact on Cryptocurrency Exchanges and Wallets

Crypto exchanges are defined as business entities that engage in the purchase, sale, transfer, exchange, storage, or management of crypto, as well as the intermediation or brokerage of virtual asset transactions. Thus, South Korean VASPs cover exchanges, custodians, brokerages, and digital wallet service providers, and they all must comply with the Crypto Travel Rule.

According to South Korea’s Crypto Travel Rule, transactions among individuals are regulated, and there are no rules regarding moving funds to and from self-hosted or non-custodial wallets. Local exchanges, however, have introduced varying rules for users when transacting with foreign exchanges, leading to confusion among users.

As per the new 2023 regulations that have yet to go into effect, VASPs are required to create real-name accounts with financial institutions and separate their customers’ deposits from their own to provide better user and asset protection. They are further required to have an insurance plan or reserves, maintain crypto transaction records for fifteen years and other records for five years, and be assessed for AML compliance by a financial institution.

Global Context and Comparisons

According to FATF’s latest report, less than 30% of surveyed jurisdictions worldwide have started regulating the cryptocurrency industry.

Of 58 jurisdictions, 33% (19), which includes the likes of Australia, China, Russian Federation, Saudi Arabia, South Africa, Ukraine, and Vietnam, have not yet passed or enacted the Travel Rule for VASPs. In contrast, jurisdictions such as Argentina, Brazil, Colombia, Malta, Mexico, Norway, New Zealand, Türkiye, Thailand, and Seychelles are currently making progress in this area.

The report emphasizes the need for jurisdictions to license VASPs, scrutinize their products, technology, and business practices, and improve oversight to mitigate the risks of money laundering and terrorist financing.

Although the FATF recommendations are not mandatory, jurisdictions that do not abide by them may face consequences, including being placed on the FATF’s watchlist, which can result in a significant drop in their credibility ratings.

Notably, South Korea, along with a select group of countries like the US, Canada, Singapore, and the UK, has successfully implemented the FATF Travel Rule. This includes mandating compliance for all transactions exceeding the threshold of one million Korean won, which is broadly in line with the watchdog’s US$1,000 threshold.

Concluding Thoughts

South Korea has actively embraced blockchain and cryptocurrencies, responding to the growing popularity of virtual assets. To ensure the market operates safely and securely, the country’s regulators have implemented a series of laws and regulations, including the stringent AML requirements.

FAQs on the Crypto Travel Rule in South Korea

Q1: What is the minimum threshold for the Crypto Travel Rule in South Korea?

South Korea has set a 1 million won minimum threshold for the Crypto Travel Rule.

Q2: Who needs to register with KoFIU in South Korea?

Virtual Asset Service Providers (VASPs) must register with the Korea Financial Intelligence Unit (KoFIU) to legally operate in the country.

Q3: How does South Korea enforce crypto transaction transparency and asset protection?

South Korea enforces crypto transaction transparency and asset protection through a combination of the Crypto Travel Rule, AML compliance requirements for VASPs, and laws targeting the regulation of unfair trade practices and the protection of assets.

About Veriscope

Veriscope, the compliance infrastructure on Shyft Network, empowers Virtual Asset Service Providers (VASPs) with the only frictionless solution for complying with the FATF Travel Rule. Enhanced by User Signing, it enables VASPs to directly request cryptographic proof from users’ non-custodial wallets, streamlining the compliance process.

For more information, visit our website and contact our team for a discussion.

To keep up-to-date on all things crypto regulations, sign up for our newsletter, and follow us on X (Formerly Twitter), LinkedIn, Telegram, and Medium.

Guide to FATF Travel Rule Compliance in South Korea was originally published in Shyft Network on Medium, where people are continuing the conversation by highlighting and responding to this story.


Entrust

CA/Browser Forum Updates Code Signing Service Requirements

The CA/Browser Forum Code Signing Working Group has recently updated the Signing Service Requirements in the Code Signing Baseline Requirements (CSBRs) through ballot CSC-21.

The former Signing Service Requirements allowed a model that carried risks for secure deployment: upon receiving a code signing request, the Signing Service could perform CA functions, such as providing the certificate Subscriber Agreement and performing applicant verification.

Definition of Signing Service

The new model requires the CA to provide the Subscriber Agreement and perform verification. The updated requirements now define the Signing Service as: “An organization that generates the key pair and securely manages the private key associated with the code signing certificate on behalf of the subscriber.” The primary focus is to support the subscriber and ensure that their private keys are generated and protected within a cryptographic module to control private key activation.

Advantages of Signing Services

Signing Services play a critical role in mitigating the primary risk for subscribers: private key compromise. They also simplify deployment by offering an alternative to subscriber-hosted cryptographic modules, eliminating the need for subscribers to install and configure a server crypto module or use tokens and drivers.

Compliance and Audit Requirements

In addition to providing Signing Service requirements, the CSBRs also provide audit requirements to ensure compliance and private key protection. Signing Services must undergo annual audits to meet the applicable requirements outlined in WebTrust for CSBRs and WebTrust for Network Security.

As your code signing solution partner, Entrust supports these updated requirements and offers Code Signing as a Service for both OV and EV Code Signing Certificates.

The post CA/Browser Forum Updates Code Signing Service Requirements appeared first on Entrust Blog.


Shyft Network

Shyft DAO March Update: Addressing Concerns On Point System & Leaderboard

Hello, Chameleons! With March behind us and spring in full swing, let’s look back at a month that was full of crucial changes.

Addressing Leaderboard Concerns

Based on your feedback about the point system and leaderboard, we have initiated a few steps to address them:

Bigger Prize Pool: The stakes are higher, with a prize pool of $1,500 for the next two months.

Weekly Gatherings: Starting this month, these sessions aim to clarify tasks, foster deeper connections, and offer a platform for real-time feedback and appeals.

Qualitative Over Quantitative: We’re shifting towards a more qualitative assessment of contributions, ensuring that quality and thoughtfulness weigh more than speed and quantity.

In line with our new qualitative focus, we’ve refined our tasks to ensure clarity on what makes a standout submission. Here’s what we’re looking for:

- Proper Formatting: Aim for posts that are well-structured, complete, and to the point. Good formatting boosts engagement.
- Personal Touch: Let your unique style shine through. We value relatable content that grabs attention.
- Clear Explanations: Break down complex ideas with analogies, making your content accessible to all.
- Visuals: Use images or GIFs to underscore your points vividly.
- Curiosity: Spark discussions with questions that make people think and engage.
- Problem-Solution Framework: Clearly highlight a problem and how it can be solved. Show the value of your insight.
- Accuracy of Information: Sharing correct information is crucial for maintaining trust within our community.

Seeking NFT Solutions 🎁

Thanks to all the solid feedback we received on the gas fees for our “WotNot Made of Water” collection, we’re exploring a solution to award Ambassadors with an NFT.

With this move, we are ensuring that every Ambassador gets to participate without the barrier of high costs. So, stay tuned for more details!

Concluding Thoughts

As we have stepped into April, we’re thrilled about the new directions we’re taking. Here’s to growing together, embracing change, and celebrating every win, big or small.✨

The Shyft DAO community is committed to building a decentralized, trustless ecosystem that empowers its members to collaborate and make decisions in a transparent and democratic manner. Our mission is to create a self-governed community that supports innovation, growth, and diversity while preserving the privacy and sovereignty of its users.

Follow us on Twitter and Medium for up-to-date news from the Shyft DAO.

Shyft DAO March Update: Addressing Concerns On Point System & Leaderboard was originally published in Shyft DAO on Medium, where people are continuing the conversation by highlighting and responding to this story.


Dock

KYC Fraud: 7 strategies to prevent KYC fraud

Product professionals face an escalating challenge: ensuring secure and efficient identity verification processes while combating the increasing threat of KYC (Know Your Customer) fraud.

This article aims to dive into the complexities of KYC fraud, offering insights into its detection, prevention, and the role of Reusable Digital Identities in mitigating these risks.

Full article: https://www.dock.io/post/kyc-fraud