Microsoft Fabric: what's new and what's next | BRK204

00:48:52
https://www.youtube.com/watch?v=8LJJqD0DOfs

Summary

TL;DR: In a dynamic presentation at Microsoft Ignite, Arun Ulag, who leads the Azure Data teams at Microsoft, introduced the latest advancements in Microsoft Fabric, emphasizing its role in simplifying data and AI for enterprises. Microsoft Fabric, a unified SaaS platform, integrates data management and AI capabilities, featuring the new Fabric Databases, real-time intelligence, and deep integration with Power BI. Fabric leverages OneLake for scalable data storage, supporting data virtualization and AI-driven insights. The session highlighted Microsoft's commitment to reducing the complexity of data handling by converging its data technologies into Azure Databases and Microsoft Fabric to speed the deployment of AI applications. Key announcements included the general availability of real-time intelligence and the introduction of Fabric Databases, which bring transactional SQL databases into the Fabric environment. With over 16,000 organizations using Microsoft Fabric, including industry leaders like Chanel and Epic, adoption underscores its value in the enterprise space. Microsoft also showcased its efforts to make AI capabilities more accessible to businesses through Power BI integration, AI Skills, and workspace monitoring, all aimed at improving user experience and data governance. The introduction of translytical applications marks a significant shift in how businesses can use data, promising substantial efficiency and innovation gains.

Highlights

  • πŸš€ Microsoft Fabric simplifies data and AI for enterprises, making it easier to manage data from raw state to insights.
  • πŸ€– The platform integrates seamlessly with Power BI, enhancing productivity and enabling AI-driven insights.
  • 🌐 OneLake offers a scalable, unified data lake, supporting data virtualization without duplication.
  • πŸ“Š New Fabric Databases integrate SQL databases into Fabric, facilitating both transactional and analytical operations.
  • πŸ”’ Strong emphasis on data security and governance, including FedRAMP certification and workspace monitoring.
  • πŸ” AI Skills feature enhances data interaction, allowing seamless AI model integration with diverse data sources.
  • πŸŽ‰ High adoption rate among enterprises, signifying strong market acceptance and customer satisfaction.
  • ✨ Continuous innovation with weekly updates ensures Fabric evolves to meet customer needs.
  • πŸ’‘ The simplicity of connecting existing data sources and technologies makes Fabric a robust choice for data-driven AI applications.
  • πŸ“ˆ Introduction of translytical applications combining analytics and transactions opens new possibilities for businesses.

Timeline

  • 00:00:00 - 00:05:00

    Arun Ulag from Microsoft discusses the importance of data in AI and introduces Microsoft Fabric, which unifies Azure Data tools for seamless integration to support AI initiatives. He emphasizes the complexity of the data and AI landscape and how Microsoft aims to simplify it with the Copilot and AI stack, specifically through Azure Databases and Microsoft Fabric.

  • 00:05:00 - 00:10:00

    Microsoft Fabric combines multiple workloads into a single SaaS platform, allowing transition from raw data to AI/BI with unified architecture. It has gained significant adoption, including by companies like Chanel and Epic, due to its capabilities and OneLake's multi-cloud data lake feature. Fabric is used by over 16,000 organizations, including many Fortune 500 companies.

  • 00:10:00 - 00:15:00

    Microsoft Fabric is highlighted as a transformative tool for data and AI, supporting widespread innovation, as exemplified by Power BI's growth. Power BI users can try Fabric with one click, and Microsoft maintains a weekly release cadence. New announcements include the general availability of real-time intelligence and the consolidation of data engineering, data science, and data warehousing into a single analytics workload, simplifying data workflows.

  • 00:15:00 - 00:20:00

    The announcement of Fabric Databases brings the SQL Server engine into Fabric as a SaaS transactional database, with its data automatically available in OneLake for analytics. It reflects the convergence of database functionality needed for AI projects and eases the transition to AI-driven applications. A promotional video emphasizes the quick deployment and integration of autonomous databases on Microsoft Fabric.

  • 00:20:00 - 00:25:00

    Microsoft Fabric expands with industry-specific solutions and extensibility through Workload Development kits, now generally available, allowing ISVs to integrate their workloads deeply into Fabric. New solutions for sectors like sustainability and healthcare highlight Fabric's accelerating time to value and improved data accessibility.

  • 00:25:00 - 00:30:00

    Demonstrations highlight Fabric's data-platform capabilities, with examples of real-time telemetry for database monitoring and data access through GraphQL APIs. There is a focus on sustainability and other industry solutions, showing how prebuilt Fabric solutions address varied business needs and speed up deployment.

  • 00:30:00 - 00:35:00

    OneLake, as a scalable, global data lake, supports numerous data interactions and enables virtualization via shortcuts and mirroring, simplifying data integration across clouds and databases. Open Mirroring enhances this by allowing data replication from any source, promoting effortless data management and analytics preparation.

  • 00:35:00 - 00:40:00

    Fabric's security and governance are emphasized with certifications and new features like surge protection and workspace monitoring. The OneLake Catalog, now GA, enhances data discovery, management, and governance capabilities, integrating with tools such as Excel, Teams, and more, aiding enterprises in comprehensive data oversight.

  • 00:40:00 - 00:48:52

    Power BI now integrates AI more deeply, offering enhanced user experiences through Copilot, facilitated by new Fabric AI Capacities. The concept of "translytical" applications surfaces, enabling real-time updates and analytics via Power BI, showing a shift in BI use cases towards more interactive and dynamic applications.

Video Q&A

  • What is Microsoft Fabric?

    Microsoft Fabric is a unified SaaS platform that integrates various data and AI capabilities, allowing users to take data from its raw state to AI or BI value. It sits alongside Azure Databases in Microsoft's converged data portfolio and offers workloads purpose-built for different personas such as data scientists, data engineers, and data warehousing professionals.

  • How is Microsoft Fabric making AI more accessible to businesses?

    Microsoft Fabric integrates data management and AI capabilities into a cohesive platform, reducing the complexity and cost involved. It allows businesses to easily prepare their data for AI, facilitating AI-powered applications and insights.

  • What new features does Microsoft Fabric offer?

    Microsoft Fabric has introduced features like real-time intelligence (now generally available), Fabric Databases, which bring transactional SQL databases into Fabric, and built-in industry solutions for faster deployment. It also supports seamless connections to existing data sources through OneLake shortcuts and mirroring.

  • How does Microsoft Fabric integrate with Power BI?

    Microsoft Fabric integrates with Power BI by allowing users to easily transition from Power BI to Fabric, offering built-in Copilot to enhance productivity and simplify accessing data insights. It also enables developing translytical applications within Power BI, combining analytical and transactional capabilities.

  • What is OneLake in Microsoft Fabric?

    OneLake is a part of Microsoft Fabric, acting as a globally deployed, infinitely scalable data lake for the organization. It stores data in open formats, supports data virtualization via shortcuts, and enables mirroring for continuous replication of data from operational databases (see the read sketch after this Q&A list).

  • How does Microsoft Fabric ensure data security and governance?

    Microsoft Fabric provides data security through certifications like FedRAMP, and built-in governance features allow visibility and control over the data estate. It includes workspace monitoring, surge protection, and permissions management to help ensure compliance and security.

  • What are AI Skills in Microsoft Fabric?

    The AI Skills feature in Microsoft Fabric streamlines the integration of AI with diverse data sources, offering a way to build AI-driven experiences that can query that data and surface insights. It enhances how users extract and work with data using AI.

  • How has Microsoft Fabric been received by organizations?

    Microsoft Fabric has been adopted by over 16,000 organizations, including 70% of the Fortune 500, for its comprehensive data and AI solutions. This reflects strong customer adoption and satisfaction with its capabilities.
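
To make the OneLake answer above concrete: because OneLake stores tables in the open Delta format and exposes an ADLS Gen2-compatible endpoint, any Delta-capable engine can read them in place. Below is a minimal PySpark sketch, assuming a Spark session that can already authenticate to OneLake (for example, a Fabric notebook); the workspace, lakehouse, and table names are placeholders.

```python
# Minimal sketch: read a Delta table straight out of OneLake with Spark.
# Assumes the Spark session can authenticate to OneLake (e.g. a Fabric
# notebook). "Contoso", "Sales.Lakehouse", and "orders" are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# OneLake paths follow the ADLS Gen2 convention:
#   abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>/Tables/<table>
path = (
    "abfss://Contoso@onelake.dfs.fabric.microsoft.com"
    "/Sales.Lakehouse/Tables/orders"
)

orders = spark.read.format("delta").load(path)
print(orders.count())
orders.show(5)
```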

Subtitles (en)
  • 00:00:00
    [MUSIC]
  • 00:00:11
    Arun Ulag: All right, good morning.
  • 00:00:12
    [APPLAUSE]
  • 00:00:15
    Thank you so much for joining us.
  • 00:00:18
    I'm Arun Ulag, I run all of the Azure Data teams at Microsoft.
  • 00:00:21
    I'm so excited to be here.
  • 00:00:22
    Such an exciting time to be in data.
  • 00:00:24
    Such an exciting time to be in AI.
  • 00:00:26
    Thank you so much for joining us at Ignite.
  • 00:00:28
    So, one of the quick requests for the folks in the room,
  • 00:00:31
    we had four times as many people register for the session
  • 00:00:34
    as we could accommodate in this room.
  • 00:00:36
    So, if there happens to be a chair somewhere in between
  • 00:00:38
    that is unfilled, please scoot up a little bit
  • 00:00:40
    so that we can accommodate a few other people.
  • 00:00:43
    So, thank you so much for joining us.
  • 00:00:44
    We have about 10,000 people joining us online as well,
  • 00:00:47
    so really, really exciting, exciting session.
  • 00:00:50
    So, we're going to talk about where we're going with Fabric,
  • 00:00:53
    and we're going to start with what's on everybody's mind.
  • 00:00:56
    It's really AI.
  • 00:00:57
    We all recognize that AI is rapidly transforming the world.
  • 00:01:00
    No surprise for anybody here.
  • 00:01:02
    But we also recognize that as exciting as AI is,
  • 00:01:05
    it is only as good as the data that it gets to work on, right?
  • 00:01:08
    Because it is data that is the fuel that powers AI.
  • 00:01:11
    The best AI models, if you put garbage in,
  • 00:01:13
    most likely you're going to get garbage out.
  • 00:01:15
    So, it's become incredibly important for customers
  • 00:01:17
    to get their data estate ready for AI.
  • 00:01:19
    Unfortunately, it's a lot harder, more complex,
  • 00:01:23
    and more expensive than it needs to be.
  • 00:01:25
    And nothing represents it better than this slide.
  • 00:01:28
    This is the data and AI landscape slide put together
  • 00:01:31
    by a venture capital firm in the Valley.
  • 00:01:33
    You know, they produce a version of this slide every year.
  • 00:01:35
    This is, I think, the 10th, 11th version of the slide.
  • 00:01:37
    But every tiny icon on this slide is a product or technology
  • 00:01:41
    in the data and AI space.
  • 00:01:42
    And this is the complexity that's confronting you today
  • 00:01:44
    because the burden is on you to figure out which products
  • 00:01:47
    to use, which ones work together, how are they priced
  • 00:01:50
    and licensed, and bring them together
  • 00:01:52
    to create business value.
  • 00:01:53
    That's why, from a Microsoft perspective, we are putting all
  • 00:01:56
    of our products and technologies together into what Satya refers
  • 00:01:59
    to as the Copilot and AI stack.
  • 00:02:01
    So on the Microsoft side,
  • 00:02:03
    everything just works together seamlessly so that you can focus
  • 00:02:06
    on moving your business forward.
  • 00:02:08
    Now, what my team and I are doing is we're taking the
  • 00:02:11
    "Your Data" tier of the Copilot stack and we're converging all
  • 00:02:14
    of the capabilities we have in the Azure data team
  • 00:02:17
    into just two things, Azure Databases and Microsoft Fabric.
  • 00:02:21
    So, in this session, we're going to talk about Microsoft Fabric.
  • 00:02:24
    So, we introduced Fabric,
  • 00:02:26
    it became generally available just a year ago at Ignite.
  • 00:02:29
    And with Fabric, we really brought together a set
  • 00:02:32
    of core workloads so that you can do everything you need
  • 00:02:35
    in a single unified SaaS platform to go from raw data
  • 00:02:38
    to AI or BI value in the hands of your customers.
  • 00:02:42
    Fabric has a set of core workloads.
  • 00:02:44
    Each of these workloads are purpose-built
  • 00:02:46
    for a particular persona, like a data scientist, a data engineer,
  • 00:02:48
    a data warehousing professional, and a specific task.
  • 00:02:51
    However, it's not just a bundle of products.
  • 00:02:53
    We took time, we took years to re-engineer these products
  • 00:02:57
    so that they actually work together
  • 00:02:58
    into a seamless platform.
  • 00:03:00
    It has unified experiences, it has a unified architecture,
  • 00:03:03
    and we even unified the business model
  • 00:03:05
    so we can drive down costs.
  • 00:03:07
    Now, this vision has really, really resonated with customers.
  • 00:03:10
    And we see customer adoption for Fabric is off-the-chart.
  • 00:03:13
    Let me just give you three examples.
  • 00:03:15
    Chanel, one of the world's leading companies
  • 00:03:18
    in the fashion industry, adopted Fabric
  • 00:03:20
    as the next-generation analytics platform.
  • 00:03:22
    Epic, the largest healthcare company in the US,
  • 00:03:26
    and one of the world's leading healthcare companies,
  • 00:03:27
    when they were looking for the next-generation analytics
  • 00:03:29
    platform, they chose Fabric as well.
  • 00:03:31
    They chose Fabric because of the strong enterprise capabilities,
  • 00:03:34
    but they also chose Fabric because of OneLake,
  • 00:03:36
    because it gives them a multi-cloud SaaS data lake
  • 00:03:39
    which allows them to make their information available
  • 00:03:41
    to their customers as well.
  • 00:03:42
    Another example is Denner Motorsports.
  • 00:03:45
    Denner Motorsports runs the Porsche Cup,
  • 00:03:47
    and they use Fabric's real-time intelligence capabilities
  • 00:03:50
    to be able to get telemetry from the cars
  • 00:03:53
    as they're literally racing around the tracks
  • 00:03:55
    and make good decisions.
  • 00:03:56
    Now, these customers are not alone.
  • 00:03:58
    Today, Fabric has over 16,000 organizations,
  • 00:04:01
    pretty much in every geography, in every industry,
  • 00:04:04
    that are using Fabric today,
  • 00:04:05
    including 70% of the Fortune 500.
  • 00:04:07
    Let's hear from some of these customers.
  • 00:04:09
    Satya Nadella: We are really thrilled
  • 00:04:11
    to be announcing Microsoft Fabric,
  • 00:04:14
    perhaps the biggest launch from Microsoft
  • 00:04:17
    since the launch of SQL Server.
  • 00:04:19
    [MUSIC]
  • 00:04:21
    Mike Holzman: Fabric allows us to build things
  • 00:04:25
    faster. We'll be able to focus on driving real value
  • 00:04:27
    out of the capabilities that are there.
  • 00:04:30
    Speaker 2: We switched to Microsoft Fabric
  • 00:04:32
    at breakneck speed, completing the transition
  • 00:04:34
    in just two weeks.
  • 00:04:35
    Speaker 3: Microsoft packaged the enterprise-level
  • 00:04:38
    functionality so that it can be a low-code or no-code solution.
  • 00:04:42
    Speaker 4: Our Altec Fabric pilot project
  • 00:04:45
    was to analyze travel spend.
  • 00:04:46
    Our vision is to bring together, in one platform,
  • 00:04:50
    data from multiple sources.
  • 00:04:52
    Jimmy Grewal: Fabric's end-to-end cloud
  • 00:04:54
    solution has empowered us to act on high-volume,
  • 00:04:56
    high-granularity events in real-time.
  • 00:04:59
    Enzo Morrone: The presentation
  • 00:05:00
    of this data, it's intuitive.
  • 00:05:02
    It's friendly.
  • 00:05:03
    The real-time intelligence gives the ability
  • 00:05:05
    to take an action before even the driver notices.
  • 00:05:09
    Speaker 5: Because everything is
  • 00:05:12
    integrated, we get information rights protection,
  • 00:05:14
    as well as security and access policy.
  • 00:05:16
    Speaker 4: AI will play a significant role
  • 00:05:19
    in many areas for Altec.
  • 00:05:20
    Speaker 2: The features
  • 00:05:21
    and functionality are out-of-this-world.
  • 00:05:24
    Speaker 3: It's a great time to be in
  • 00:05:26
    data and AI, and we're just at the start.
  • 00:05:27
    [MUSIC]
  • 00:05:27
    Arun Ulag: So as you can see, Fabric is
  • 00:05:32
    really, really exciting for customers.
  • 00:05:33
    [APPLAUSE]
  • 00:05:35
    Now, as excited as I am about 16,000 customers,
  • 00:05:39
    it's just the beginning.
  • 00:05:40
    We're really excited about really bringing Fabric
  • 00:05:42
    to every organization and every developer on the planet.
  • 00:05:45
    And one example where we have democratized access to data
  • 00:05:49
    and analytics at scale is Power BI.
  • 00:05:51
    Power BI today has over 375,000 organizations that use this,
  • 00:05:56
    including 95% of the Fortune 500,
  • 00:05:58
    and we have over 6.5 million monthly active developers.
  • 00:06:02
    Now, this curve that you see here is actually the usage
  • 00:06:04
    growth of Power BI since the day we started, and you can see
  • 00:06:07
    that it's continuing to grow exponentially,
  • 00:06:09
    and the growth has only accelerated
  • 00:06:11
    since we launched Fabric.
  • 00:06:12
    And the reason I'm talking about Power BI here is because many
  • 00:06:15
    of you use Power BI today, many of the folks
  • 00:06:17
    in the audience use Power BI today,
  • 00:06:19
    and for every Power BI developer,
  • 00:06:21
    Fabric is just one click away.
  • 00:06:23
    We make a free trial with no Azure subscription,
  • 00:06:26
    no credit card required, and we give every developer $17,000
  • 00:06:30
    of Fabric capacity over two months
  • 00:06:32
    so that you can build something real.
  • 00:06:33
    You can experience what Fabric can do for you.
  • 00:06:35
    Now, we're not done from a pace of innovation perspective.
  • 00:06:39
    One of the things that you've seen us do before
  • 00:06:41
    with Power BI is really that cadence
  • 00:06:43
    of continuous innovation.
  • 00:06:44
    Just like Power BI, we ship a new release
  • 00:06:47
    of Fabric every single week, right?
  • 00:06:49
    And every week, you'll see us publish new blogs
  • 00:06:51
    about the new capabilities that light up.
  • 00:06:53
    And these capabilities are not just coming from us,
  • 00:06:55
    but they're coming from you.
  • 00:06:56
    If you go to ideas.fabric.microsoft.com,
  • 00:06:59
    you can create ideas or vote on ideas, and every semester,
  • 00:07:02
    we take the top-voted ideas and we try
  • 00:07:04
    to make sure we ship it very, very quickly so that you know
  • 00:07:06
    that Microsoft is listening, Microsoft is learning,
  • 00:07:08
    and the product is evolving to meet your needs.
  • 00:07:11
    Every month, we take all the features that ship
  • 00:07:13
    that particular month, and we ship a monthly Fabric blog.
  • 00:07:16
    And each of these blogs are 60 to 80 pages long,
  • 00:07:19
    just giving you a sense of the level of innovation
  • 00:07:21
    that Microsoft is bringing to bear.
  • 00:07:23
    So, we talked about Fabric, and this is what we launched.
  • 00:07:26
    Now, we have some exciting announcements for you today.
  • 00:07:29
    We are announcing the general availability
  • 00:07:31
    of real-time intelligence.
  • 00:07:33
    Right, thank you.
  • 00:07:34
    [APPLAUSE]
  • 00:07:36
    There is so much real-time data out in the world today,
  • 00:07:40
    data from IoT devices, data from application telemetry, logs,
  • 00:07:43
    security logs, so much real-time data,
  • 00:07:45
    but it's notoriously hard to work with.
  • 00:07:47
    And with Fabric's real-time intelligence,
  • 00:07:49
    we make it drop-dead simple, so it's something
  • 00:07:51
    that you absolutely need to try.
  • 00:07:52
    Thousands of customers have tried it
  • 00:07:54
    out during the public preview.
  • 00:07:55
    We're seeing massive adoption
  • 00:07:57
    of the real-time intelligence capabilities in Fabric.
  • 00:07:59
    The other thing that we're doing is we are simplifying this
  • 00:08:02
    picture a little bit.
  • 00:08:03
    We're combining our data engineering, data science,
  • 00:08:05
    and data warehousing workloads into just analytics.
  • 00:08:08
    And the reason we're doing that, really, is just to make sure
  • 00:08:10
    that we make room on the slide for the biggest change to Fabric
  • 00:08:13
    since we announced it, which is the introduction
  • 00:08:16
    of Fabric Databases.
  • 00:08:17
    [APPLAUSE]
  • 00:08:21
    With Fabric Databases,
  • 00:08:23
    we're bringing our entire database portfolio to Fabric,
  • 00:08:25
    and starting with a flagship SQL Server product.
  • 00:08:28
    You get full world-class transactional SQL performance,
  • 00:08:31
    all integrated into Microsoft Fabric.
  • 00:08:33
    And just like Fabric, it's all software-as-a-service,
  • 00:08:36
    and all of the data is integrated into OneLake.
  • 00:08:38
    And the reason we're doing this is we believe
  • 00:08:39
    that the distinctions between transactional databases,
  • 00:08:42
    NoSQL databases, document databases, vector databases,
  • 00:08:45
    in-memory databases, all these distinctions are blurring very,
  • 00:08:48
    very quickly.
  • 00:08:49
    And in most AI projects, you're combining
  • 00:08:51
    and using these things together in conjunction.
  • 00:08:53
    Which means that, you know, in Fabric,
  • 00:08:54
    by driving this convergence, we make it much easier for you
  • 00:08:58
    to build applications to make the transition to the era
  • 00:09:00
    of AI much, much simpler.
  • 00:09:02
    So, let's watch a quick video.
  • 00:09:04
    Voiceover: Microsoft Fabric's unified data
  • 00:09:08
    platform now brings together all your data with Fabric Databases.
  • 00:09:11
    A new generation of autonomous databases
  • 00:09:14
    that streamline application development.
  • 00:09:16
    In seconds, provision and deploy a SQL database built upon the
  • 00:09:21
    same proven industry-leading SQL Server engine, all on a simple
  • 00:09:25
    and intuitive software-as-a-service platform.
  • 00:09:28
    Spend less time on resource planning
  • 00:09:31
    with auto-scaling compute, and get fast,
  • 00:09:33
    consistent app performance with automatic resource optimization
  • 00:09:36
    and intelligent auto-indexing, all while working
  • 00:09:39
    in your favorite tools like VS Code and GitHub.
  • 00:09:42
    Accelerate innovation
  • 00:09:44
    with AI-assisted T-SQL code generation
  • 00:09:46
    and chat-based Copilot assistance.
  • 00:09:49
    Create unique experiences with the help
  • 00:09:51
    of built-in vector support and Azure AI integration.
  • 00:09:55
    Finally, you can experience peace of mind with databases
  • 00:09:58
    that are secured by default, with automated disaster recovery,
  • 00:10:02
    high availability, and with all your data replicated to OneLake,
  • 00:10:05
    accessible by Fabric's analytical engines.
  • 00:10:08
    Building intelligent AI applications is faster
  • 00:10:12
    and easier with autonomous Fabric Databases,
  • 00:10:15
    part of the unified Microsoft Fabric data platform.
  • 00:10:18
    [MUSIC]
  • 00:10:21
    Arun Ulag: Hopefully that's really exciting for you guys.
  • 00:10:23
    We're super excited about it.
  • 00:10:25
    [APPLAUSE]
  • 00:10:25
    We're also adding industry solutions to Fabric,
  • 00:10:29
    and we're making Fabric extensible as well.
  • 00:10:31
    So what you'll find as generally available today is a range
  • 00:10:34
    of industry solutions -- thank you --
  • 00:10:35
    everything from sustainability, healthcare, and retail,
  • 00:10:39
    which is just built into Fabric.
  • 00:10:40
    So, if you care about these solutions,
  • 00:10:42
    it dramatically accelerates time to value.
  • 00:10:44
    In May, we also announced the Fabric Workload Development kit,
  • 00:10:48
    which makes Fabric extensible, so you can extend Fabric.
  • 00:10:50
    And if you're an ISV, you can bring your own workloads
  • 00:10:53
    to Fabric.
  • 00:10:53
    Now, today I'm announcing that it's generally available.
  • 00:10:56
    We're also excited to show a whole range of ISVs
  • 00:10:59
    that are actively extending Fabric
  • 00:11:01
    and bringing their own workloads to it.
  • 00:11:03
    And these are not just trivial integrations.
  • 00:11:05
    They're deeply integrating into Fabric,
  • 00:11:07
    making sure the data lives in OneLake, the artifacts live
  • 00:11:09
    in the same workspace, they use the same permissions model,
  • 00:11:12
    et cetera.
  • 00:11:12
    A whole bunch of these ISV solutions are available
  • 00:11:15
    in public preview today, so those are the ones
  • 00:11:17
    that are highlighted on top,
  • 00:11:18
    and everything else is being worked on,
  • 00:11:20
    and it should reach public preview in the coming months.
  • 00:11:22
    So, when I switch forward and I think about the Fabric roadmap,
  • 00:11:25
    there's four areas we're working on.
  • 00:11:27
    The first is really an AI-powered platform
  • 00:11:29
    that allows you to dramatically accelerate your time to value.
  • 00:11:32
    The second is OneLake, an open and AI-ready data lake.
  • 00:11:35
    The third is making sure
  • 00:11:36
    that these AI capabilities reach every business user,
  • 00:11:39
    and all of these capabilities will be built
  • 00:11:41
    on a mission-critical platform.
  • 00:11:42
    So, to go much deeper and show you some exciting demos,
  • 00:11:45
    I'd like to invite Amir Netz, Technical Fellow and CTO.
  • 00:11:47
    [APPLAUSE]
  • 00:11:49
    There you go, Amir.
  • 00:11:50
    [APPLAUSE]
  • 00:11:55
    Amir Netz: I'm so excited.
  • 00:11:56
    We're actually going to spend the rest
  • 00:11:57
    of the session just looking at the product, experiencing,
  • 00:12:00
    seeing demos, and we're going to use the same framework
  • 00:12:03
    that Arun presented with the three pillars
  • 00:12:05
    as the guideline here.
  • 00:12:07
    As a structure of the presentation,
  • 00:12:08
    we'll start with the AI-powered data platform.
  • 00:12:12
    This is where, really, what we are presenting here is a
  • 00:12:15
    complete platform for everything that you need for data,
  • 00:12:18
    for every workload, whether it's transactional,
  • 00:12:20
    whether it's analytical, whether it's real-time,
  • 00:12:22
    whether it's batch.
  • 00:12:23
    Everything that you need is in one platform, all integrated
  • 00:12:27
    in both the experiences and the architecture, all powered by AI.
  • 00:12:32
    And to show us what it means to really build a data tier
  • 00:12:36
    for your application, I'm going to invite Patrick to the stage,
  • 00:12:39
    and we're going to take a look.
  • 00:12:41
    Hey, Patrick.
  • 00:12:42
    PATRICK LeBLANC: What's up, Amir?
  • 00:12:43
    [APPLAUSE]
  • 00:12:45
    Amir Netz: All right.
  • 00:12:46
    So, we're going to see end-to-end.
  • 00:12:47
    It's going to be a bit different this time, right?
  • 00:12:49
    PATRICK LeBLANC: A bit different, a bit
  • 00:12:49
    different. Up until now, the only things that Amir and I have
  • 00:12:52
    talked about on stage together is complete analytical
  • 00:12:56
    solutions. That's all we do.
  • 00:12:57
    But this time, it's going to be a little different.
  • 00:12:59
    And so, we've built this app.
  • 00:13:01
    We've built this app called Contoso Outdoors,
  • 00:13:04
    and the entire solution is built in Fabric.
  • 00:13:07
    Amir Netz: And this is not an analytical solution, right?
  • 00:13:09
    PATRICK LeBLANC: This is not.
  • 00:13:10
    This is a complete data solution, and we're going
  • 00:13:12
    to make a change to that.
  • 00:13:13
    We're going to make a change to that.
  • 00:13:14
    Let's take a look.
  • 00:13:15
    So, you can see this is Contoso Outdoors, and this is where all
  • 00:13:18
    of our vendors and our suppliers go to talk with each other
  • 00:13:21
    to make sure that we have all the products that we need.
  • 00:13:23
    And if we switch over to Fabric,
  • 00:13:25
    you can see this is a complete solution.
  • 00:13:27
    We can do everything in Fabric, visualizing data
  • 00:13:29
    to storing data, to ingesting data.
  • 00:13:32
    We even have real-time telemetry built in,
  • 00:13:35
    so we can track everything that's going on in the database.
  • 00:13:37
    But the star of the show today, Amir,
  • 00:13:40
    is one of my favorite things where I started my career at.
  • 00:13:42
    It's a SQL Server database.
  • 00:13:44
    We're introducing the SQL Server database.
  • 00:13:46
    I even wore a shirt, right, to commemorate that moment.
  • 00:13:49
    And so -- but we need to make some changes,
  • 00:13:52
    and before we make those changes,
  • 00:13:53
    we know data is a team sport.
  • 00:13:55
    And so we have built-in source control in Fabric,
  • 00:13:58
    and so what I'm going to do is I'm going
  • 00:14:00
    to use our new branching capability
  • 00:14:02
    to not only create these objects and move them
  • 00:14:05
    over into another workspace, but I'm going
  • 00:14:07
    to create a new branch in DevOps or GitHub.
  • 00:14:11
    Amir Netz: This is directly to GitHub?
  • 00:14:13
    PATRICK LeBLANC: Absolutely.
  • 00:14:13
    I don't have to do anything.
  • 00:14:14
    And once it's all synced over to the workspace,
  • 00:14:16
    instead of me introducing the break and change,
  • 00:14:18
    I create my own feature branch, and you can go
  • 00:14:20
    into your SQL database.
  • 00:14:21
    And this is not some scaled-back version of SQL.
  • 00:14:23
    You can create tables.
  • 00:14:24
    You can create views.
  • 00:14:25
    You can create stored procedures.
  • 00:14:26
    Amir Netz: It's really compatible with the
  • 00:14:29
    T-SQL that you know and love, from SQL on-prem, or SQL
  • 00:14:31
    in Azure. Everything is there.
  • 00:14:32
    PATRICK LeBLANC: Absolutely.
  • 00:14:33
    You can create indexes
  • 00:14:35
    if you want your queries to run fast, right?
  • 00:14:36
    Just kidding.
  • 00:14:37
    And so, but I need to add a view to this database.
  • 00:14:40
    And so, Amir and I have been writing T-SQL since the 1900s.
  • 00:14:43
    [LAUGHTER]
  • 00:14:43
    And so, I'm not going to write any T-SQL.
  • 00:14:47
    What I'm going to use, I'm going to use Copilot.
  • 00:14:49
    I'm going to do Copilot-first development, and I'm going
  • 00:14:51
    to ask Copilot, can you create this view for me
  • 00:14:54
    that I need for my application?
  • 00:14:55
    And just like that, it creates the T-SQL,
  • 00:14:58
    and I get that T-SQL committed back to my database.
  • 00:15:00
    No hands. I just ask it to do it for me.
  • 00:15:02
    But I need to expose this data to my app.
  • 00:15:05
    And I can use the traditional approach
  • 00:15:07
    of creating a data layer in my application,
  • 00:15:09
    but instead I'm going to use GraphQL.
  • 00:15:11
    Amir Netz: And GraphQL is great
  • 00:15:12
    when you're building web apps
  • 00:15:13
    because everything is JSON-based.
  • 00:15:15
    PATRICK LeBLANC: Absolutely.
  • 00:15:16
    And it's an open format.
  • 00:15:17
    And so, but, instead of me writing it and embedding it
  • 00:15:21
    in the application, I'm just going to use an API, Amir.
  • 00:15:24
    Amir Netz: Okay.
  • 00:15:24
    And just take the endpoint of the API.
  • 00:15:26
    PATRICK LeBLANC: And I'm going to copy that
  • 00:15:27
    endpoint, and I'm going to paste it over in Visual Studio Code,
  • 00:15:30
    and I'm going to paste my query there, and then I'm going
  • 00:15:32
    to compile my application, and all of my developers,
  • 00:15:35
    all of my vendors, all of my suppliers can go in one place
  • 00:15:38
    to ensure that they have all the stock levels they need.
  • 00:15:41
    And I just did that in just a couple of clicks.
  • 00:15:43
    Amir Netz: That's awesome.
  • 00:15:43
    PATRICK LeBLANC: Yeah.
  • 00:15:44
    Yeah. And so finally, I want to get this committed back.
  • 00:15:47
    Amir Netz: The data is going to be in OneLake, right?
  • 00:15:49
    PATRICK LeBLANC: Yeah.
  • 00:15:49
    So, in my SQL database, because my SQL database is automatically
  • 00:15:54
    integrated in Fabric, and it moves all the data, it syncs all
  • 00:15:56
    of my data to OneLake, not only can I do operational,
  • 00:15:59
    but I can create beautiful reports that are blazing fast
  • 00:16:02
    that won't contend with the performance of my application.
  • 00:16:05
    Amir Netz: Yeah.
  • 00:16:06
    PATRICK LeBLANC: It's truly remarkable.
  • 00:16:07
    And so, now that I'm all done, I want to get this committed back
  • 00:16:10
    to my source control, use the integrated source control
  • 00:16:15
    in Fabric, and just click "Commit."
  • 00:16:17
    How cool is that?
  • 00:16:18
    Amir Netz: That's super cool.
  • 00:16:19
    What do you think?
  • 00:16:19
    [APPLAUSE]
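
Everything in the demo above happens inside the Fabric portal, but the SQL database in Fabric also exposes a standard T-SQL endpoint, so ordinary client code can query it too. Here is a small, hypothetical sketch using pyodbc; the server and database values are placeholders for what the database's connection strings in Fabric provide, and Microsoft Entra interactive sign-in is assumed.

```python
# Minimal sketch: query a SQL database in Fabric over its T-SQL endpoint.
# SERVER and DATABASE are placeholders; copy the real values from the
# database's connection strings in Fabric. Requires the Microsoft ODBC
# Driver 18 for SQL Server and the pyodbc package.
import pyodbc

SERVER = "<server-from-connection-strings>"
DATABASE = "ContosoOutdoors"  # placeholder database name

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    f"Server={SERVER};Database={DATABASE};"
    "Encrypt=yes;Authentication=ActiveDirectoryInteractive;"
)

cur = conn.cursor()
# Placeholder query; any T-SQL you would run against SQL Server works here.
cur.execute("SELECT TOP 5 name, create_date FROM sys.tables ORDER BY create_date DESC;")
for name, created in cur.fetchall():
    print(name, created)

cur.close()
conn.close()
```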
  • 00:16:21
    So a few things here.
  • 00:16:22
    Number one is, you see the source control.
  • 00:16:24
    We have eight new items in Fabric
  • 00:16:26
    that are now supporting the CI/CD of Git.
  • 00:16:29
    PATRICK LeBLANC: Yep.
  • 00:16:30
    Amir Netz: And by the end of the year,
  • 00:16:31
    everything that we have in preview will be there.
  • 00:16:33
    PATRICK LeBLANC: Yep.
  • 00:16:33
    Amir Netz: That's really advanced.
  • 00:16:35
    The other thing you mentioned is the GraphQL.
  • 00:16:37
    PATRICK LeBLANC: Yeah, it's exciting.
  • 00:16:37
    Amir Netz: And we have an announcement.
  • 00:16:39
    The GraphQL API for Fabric is now generally available,
  • 00:16:42
    which is awesome.
  • 00:16:43
    PATRICK LeBLANC: Which is awesome.
  • 00:16:44
    It's amazing.
  • 00:16:44
    So, less code for me to write, right?
  • 00:16:46
    Just an API.
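
For the GraphQL step above, the API item in Fabric provides an HTTPS endpoint that any GraphQL client can call. A hypothetical sketch with the requests library follows; the endpoint URL, query shape, and field names are placeholders inspired by the stock-level scenario, and the bearer token is assumed to be a Microsoft Entra access token acquired separately.

```python
# Minimal sketch: call a Fabric GraphQL API endpoint from Python.
# ENDPOINT, TOKEN, the query, and the field names are placeholders;
# the endpoint URL is copied from the API item in Fabric, and TOKEN is
# a Microsoft Entra access token acquired beforehand.
import requests

ENDPOINT = "https://<your-graphql-endpoint>"
TOKEN = "<entra-access-token>"

query = """
query {
  products(first: 10) {
    items { productName stockLevel }
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["data"]["products"]["items"]:
    print(item["productName"], item["stockLevel"])
```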
  • 00:16:47
    Amir Netz: Now,
  • 00:16:47
    Arun mentioned these industry solutions, right?
  • 00:16:50
    PATRICK LeBLANC: Yeah.
  • 00:16:50
    Amir Netz: And so, we'd
  • 00:16:52
    like to show you a little bit of that.
  • 00:16:54
    It's not really a full demo, but what's going on here?
  • 00:16:56
    PATRICK LeBLANC: So, sustainability is important
  • 00:16:57
    to most organizations, and they have KPIs that they need to hit.
  • 00:17:01
    But imagine trying to collect all the data you need
  • 00:17:04
    into one central place.
  • 00:17:05
    The data is not only disparate, but it's in different formats.
  • 00:17:08
    With this new industry solution, I basically click a button,
  • 00:17:11
    give it a name, and all the items,
  • 00:17:13
    all the artifacts I need are quickly deployed
  • 00:17:15
    out to my Fabric environment.
  • 00:17:17
    And then, I can actually take a look at that data
  • 00:17:19
    to make sure my business is truly sustainable.
  • 00:17:21
    Amir Netz: And we're bringing more
  • 00:17:22
    and more industry solutions in.
  • 00:17:24
    We expect to have around almost a dozen there
  • 00:17:26
    from every industry, healthcare, retail, telecom,
  • 00:17:29
    everything that you need.
  • 00:17:30
    It's coming to Fabric.
  • 00:17:31
    So, whatever industry you're in, you're going to find
  • 00:17:34
    that Fabric is just designed for your solutions.
  • 00:17:36
    PATRICK LeBLANC: Absolutely.
  • 00:17:37
    Okay. Thank you, Amir.
  • 00:17:37
    Amir Netz: Thank you so much.
  • 00:17:38
    PATRICK LeBLANC: Thank you.
  • 00:17:38
    Amir Netz: Okay.
  • 00:17:39
    Moving to the second pillar.
  • 00:17:41
    [APPLAUSE]
  • 00:17:41
    This is the Open and AI-Ready Data Lake.
  • 00:17:44
    This is really the world of OneLake, the OneDrive for data.
  • 00:17:48
    If you haven't heard about OneLake, well,
  • 00:17:51
    you've been sleeping under a rock for the last year.
  • 00:17:53
    This has been an amazing, amazing journey with OneLake.
  • 00:17:57
    This is the OneLake for the entire organization.
  • 00:17:59
    It's infinitely scalable.
  • 00:18:01
    It's globally deployed.
  • 00:18:03
    It's one, only one, OneLake for the whole organization.
  • 00:18:06
    All the workloads of Fabric store their data in OneLake.
  • 00:18:09
    All the data is always stored in an open format.
  • 00:18:12
    There is no proprietary format anywhere in Fabric.
  • 00:18:15
    And once the data is there, well, it's managed.
  • 00:18:18
    It's governed.
  • 00:18:19
    We handle the lineage.
  • 00:18:21
    We're going to talk more about it
  • 00:18:22
    when we talk about the catalog.
  • 00:18:24
    Wait for that.
  • 00:18:24
    But it's all managed by the catalog.
  • 00:18:26
    And boy, you guys have been responding to OneLake
  • 00:18:29
    like there is no tomorrow.
  • 00:18:30
    Just take a look at that.
  • 00:18:31
    We get 21 billion interactions with OneLake every day.
  • 00:18:36
    Four million shortcuts.
  • 00:18:37
    The way to connect your OneLake
  • 00:18:39
    to all the existing storage systems that you have out there.
  • 00:18:42
    Four million of those shortcuts have already been created.
  • 00:18:46
    Every 16 weeks, we double the volume of data
  • 00:18:49
    that is stored in OneLake.
  • 00:18:50
    And to show us how we get the data into OneLake,
  • 00:18:53
    I'm going to bring in Shireen.
  • 00:18:55
    Hey, Shireen.
  • 00:18:56
    Shireen Bahadur: Hey, everyone.
  • 00:18:59
    [APPLAUSE]
  • 00:19:00
    Hi, everyone.
  • 00:19:01
    Yes.
  • 00:19:01
    Amir Netz: So Shireen, we can bring the
  • 00:19:03
    data from everywhere into OneLake.
  • 00:19:04
    Shireen Bahadur: Yes.
  • 00:19:05
    Amir Netz: There are several mechanisms, right?
  • 00:19:06
    Shireen Bahadur: Exactly.
  • 00:19:06
    So there are many different ways to bring data into OneLake,
  • 00:19:09
    but I want to hone into a couple that are really important.
  • 00:19:12
    So let's start with shortcuts.
  • 00:19:13
    Shortcuts provide virtualization connections across domains
  • 00:19:16
    and clouds, and it basically allows you
  • 00:19:18
    to virtualize your data all in one place,
  • 00:19:20
    in this case, OneLake.
  • 00:19:21
    And you can connect to, you know,
  • 00:19:23
    different storage locations, file systems with Microsoft
  • 00:19:26
    and non-Microsoft sources, such as your AWS, GCP, Snowflake.
  • 00:19:30
    And there's absolutely no data movement or data duplication.
  • 00:19:33
    Amir Netz: So, just a way to virtualize all
  • 00:19:36
    the data on-prem in every cloud, everything in OneLake.
  • 00:19:38
    Shireen Bahadur: Yes.
  • 00:19:38
    Amir Netz: Great.
  • 00:19:38
    Shireen Bahadur: Yeah, absolutely.
  • 00:19:39
    Amir Netz: And then there is mirroring.
  • 00:19:40
    Shireen Bahadur: Exactly.
  • 00:19:41
    So, mirroring is a continuous data replication solution
  • 00:19:43
    for your operational databases.
  • 00:19:45
    So, that includes all databases or specific tables.
  • 00:19:48
    It really depends on what you want to do.
  • 00:19:49
    So you can bring all that change data directly into OneLake,
  • 00:19:53
    and our engine continuously replicates that data
  • 00:19:55
    for you using our change data capture
  • 00:19:57
    or CDC technology underneath the hood.
  • 00:19:59
    Amir Netz: And it's super simple, because
  • 00:20:00
    all you have to do is just point to the database and say,
  • 00:20:02
    I want to mirror that database, and whoop,
  • 00:20:04
    it just shows up in OneLake.
  • 00:20:04
    Shireen Bahadur: It just shows up.
  • 00:20:05
    So, should we dive a little bit deeper into mirroring?
  • 00:20:07
    Amir Netz: Yeah, let's do that.
  • 00:20:08
    Shireen Bahadur: Okay.
  • 00:20:08
    So, mirroring has been an absolute hit in the past year.
  • 00:20:11
    So, we have these variety of different sources
  • 00:20:13
    that we have currently, Snowflake GA,
  • 00:20:15
    which we just announced recently.
  • 00:20:16
    And as of today, we have announced mirroring
  • 00:20:19
    for Azure SQL DB as generally available.
  • 00:20:21
    Isn't that great?
  • 00:20:22
    Amir Netz: That's good.
  • 00:20:22
    Shireen Bahadur: Exciting, yeah.
  • 00:20:23
    But it doesn't stop there, right?
  • 00:20:25
    We're continuing to listen to your feedback and,
  • 00:20:28
    of course, improving the product capabilities.
  • 00:20:30
    So today, I'm excited to announce
  • 00:20:32
    that we're introducing four new sources that are coming soon.
  • 00:20:36
    We have mirroring for SQL Server, SQL Server 2025,
  • 00:20:39
    PostgreSQL, and Oracle.
  • 00:20:41
    So now, over the course from today
  • 00:20:44
    and the next several weeks,
  • 00:20:45
    you'll see these lighting up soon.
  • 00:20:46
    So please stay tuned.
  • 00:20:48
    It's really, really exciting.
  • 00:20:49
    Amir Netz: Yeah.
  • 00:20:49
    Yeah. Now, you can see that we're graduating more
  • 00:20:51
    and more databases we support with mirroring.
  • 00:20:53
    But there's so, so many sources out there that we have
  • 00:20:56
    to connect to, and we don't want you to have to wait for us.
  • 00:20:59
    So, there's a new thing that we're announcing today,
  • 00:21:01
    which is called Open Mirroring.
  • 00:21:03
    Shireen Bahadur: Exactly.
  • 00:21:04
    Amir Netz: So, what is Open Mirroring?
  • 00:21:05
    Shireen Bahadur: Yeah, Open Mirroring.
  • 00:21:05
    So the goal of mirroring, right, in general,
  • 00:21:08
    is to have the flexibility for customers to bring data
  • 00:21:10
    in from anywhere, right?
  • 00:21:12
    So now, with Open Mirroring, which is in public preview
  • 00:21:14
    as of today, it helps you enhance or accelerate
  • 00:21:18
    to bring any data from any application
  • 00:21:20
    or any source directly into Fabric.
  • 00:21:23
    So, all you really have to do is bring that data
  • 00:21:25
    into a landing zone, and we take care of the rest.
  • 00:21:27
    Amir Netz: So, what do you have to bring
  • 00:21:29
    in? You have to bring, for mirroring to work,
  • 00:21:31
    you have to bring the initial snapshot of the database.
  • 00:21:33
    Shireen Bahadur: Yes.
  • 00:21:34
    Amir Netz: And then start bringing to us
  • 00:21:36
    the CDC, the change data capture feed, of the database.
  • 00:21:40
    Shireen Bahadur: Yes.
  • 00:21:40
    Amir Netz: You drop it into the landing zone,
  • 00:21:42
    and then we make it into Delta Table automatically for you.
  • 00:21:45
    Shireen Bahadur: Right, yeah.
  • 00:21:45
    And it runs automatically,
  • 00:21:46
    like how mirroring actually works, right?
  • 00:21:48
    Same thing.
  • 00:21:48
    Amir Netz: It's super, super simple, right?
  • 00:21:49
    Shireen Bahadur: It's really simple.
  • 00:21:50
    So, let's take a look to see how easy this actually is.
  • 00:21:53
    So, directly from my Fabric home page, I'll create a new item.
  • 00:21:57
    And over here, you'll see all of my sources, right?
  • 00:21:59
    You have the Cosmos DB, which is in preview.
  • 00:22:02
    We have Azure SQL Database, Databricks Catalog,
  • 00:22:05
    as well, and Snowflake.
  • 00:22:06
    And you'll see a few other ones coming soon, too.
  • 00:22:09
    But now we have this really cool capability called Mirror
  • 00:22:11
    Database, which is our Open Mirroring functionality.
  • 00:22:14
    So, I'll go ahead and click on it, I'll give it a name,
  • 00:22:17
    and then I'll hit "Create".
  • 00:22:18
    So, what I want to do now is I want
  • 00:22:20
    to show you guys the inside mechanisms
  • 00:22:22
    of how Open Mirroring actually works.
  • 00:22:24
    So, we have that landing zone, right,
  • 00:22:26
    which you'll actually see over here.
  • 00:22:27
    But I have some orders data on my, you know,
  • 00:22:29
    desktop as a CSV file.
  • 00:22:31
    And if I open it, I can see all my rows
  • 00:22:34
    and my headers directly here.
  • 00:22:36
    And like how Amir was mentioning, it's so simple.
  • 00:22:38
    All I have to do is take that order CSV file and drag
  • 00:22:41
    and drop it into that landing zone.
  • 00:22:42
    And immediately, there's a file there.
  • 00:22:44
    So, what's happening in the back end, right, we're looking
  • 00:22:46
    at the initial snapshot.
  • 00:22:47
    We're looking at change data.
  • 00:22:48
    We're making that file ready in an analytics-ready format.
  • 00:22:51
    Amir Netz: So that was the initial
  • 00:22:52
    snapshot, and you automatically converted it into a Delta table.
  • 00:22:54
    Shireen Bahadur: There you go.
  • 00:22:55
    That table automatically shows up here, right?
  • 00:22:57
    So now, if I want to go monitor,
  • 00:22:59
    I can use the replication status,
  • 00:23:00
    or I can go to my SQL Analytics endpoint,
  • 00:23:02
    which you guys are all familiar with, right?
  • 00:23:04
    So, I'll go to my SQL Analytics endpoint, and I'll verify
  • 00:23:07
    that my rows are actually there.
  • 00:23:09
    And we're working with about, you know, 62 rows of data.
  • 00:23:12
    And as you know, orders data is always being created
  • 00:23:15
    or modified.
  • 00:23:16
    PATRICK LeBLANC: So, we need to introduce these CDC changes.
  • 00:23:18
    Shireen Bahadur: CDC changes, exactly.
  • 00:23:20
    So now, if I zoom into the first row over here,
  • 00:23:23
    I'll notice that my price
  • 00:23:24
    for that particular row is incorrect.
  • 00:23:25
    And I want to modify that to, let's say, about $100,000.
  • 00:23:29
    Amir Netz: Okay.
  • 00:23:29
    Shireen Bahadur: So, what I have to do is only
  • 00:23:31
    provide and create CSV files with only the changes, right?
  • 00:23:34
    And the thing over here, Amir,
  • 00:23:36
    look at this particular CSV file.
  • 00:23:38
    The difference is that we have a column here called row marker
  • 00:23:41
    that looks at the operations for each row.
  • 00:23:44
    So, if I look at the first row, I'll see that row number one,
  • 00:23:47
    I'm changing that particular row to $100,000.
  • 00:23:50
    Amir Netz: And the marker of four,
  • 00:23:51
    number four, says that's a change.
  • 00:23:53
    Shireen Bahadur: It's a change.
  • 00:23:54
    In this case, it's an upsert, right?
  • 00:23:55
    So now, what I can do over here is
  • 00:23:57
    that I can add even more operations
  • 00:23:59
    with the same CSV file.
  • 00:24:00
    If I look at the next three rows, I'm deleting them,
  • 00:24:03
    and that row operation is set to two, which means delete.
  • 00:24:05
    Amir Netz: Delete.
  • 00:24:06
    Shireen Bahadur: Exactly.
  • 00:24:06
    And I can correspond those three rows to my orders table.
  • 00:24:09
    Amir Netz: And one will mean that you insert?
  • 00:24:11
    Shireen Bahadur: Exactly.
  • 00:24:12
    You're completely right.
  • 00:24:13
    So, the next five rows, I'm inserting that row in,
  • 00:24:16
    and that one means insert.
  • 00:24:17
    So now, I know that these will be inserted
  • 00:24:19
    into my orders table.
  • 00:24:20
    So, I could have actually separated these
  • 00:24:21
    into different CSV files, but I packed them into one
  • 00:24:24
    for this particular example.
  • 00:24:26
    So now, once again, I'm going to drag and drop that CSV file
  • 00:24:29
    with the changes directly into the landing zone.
  • 00:24:31
    And once again, it's already in that analytics-ready format.
  • 00:24:34
    So Amir, should we check to see
  • 00:24:35
    if those changes have been reflected?
  • 00:24:37
    Amir Netz: Yeah, let's see.
  • 00:24:37
    So, we updated the first one, deleted two more.
  • 00:24:40
    Shireen Bahadur: Yes.
  • 00:24:40
    Deleted three, and then we added five.
  • 00:24:42
    Amir Netz: Yes.
  • 00:24:43
    Shireen Bahadur: Okay.
  • 00:24:43
    So now, look at the first row.
  • 00:24:45
    We have updated that price to $100,000.
  • 00:24:47
    Amir Netz: Yes.
  • 00:24:47
    Shireen Bahadur: So check on that aspect.
  • 00:24:49
    Amir Netz: Yes.
  • 00:24:49
    Shireen Bahadur: I don't see rows two, three,
  • 00:24:51
    four, so they actually have been deleted.
  • 00:24:53
    So, check.
  • 00:24:53
    Amir Netz: Yes.
  • 00:24:54
    Shireen Bahadur: And then the five rows, 63 to
  • 00:24:56
    67, have actually been inserted in.
  • 00:24:57
    Amir Netz: Yes.
  • 00:24:57
    Shireen Bahadur: So, how simple is that?
  • 00:24:59
    Right? Isn't that totally simple?
  • 00:25:00
    Amir Netz: Yes.
  • 00:25:00
    The point is, it's very geeky, but we really want
  • 00:25:02
    to show how simple it is.
  • 00:25:04
    You can do it with Notepad.
  • 00:25:05
    Now, of course, you will not do it with Notepad.
  • 00:25:07
    We know that.
  • 00:25:08
    But you can write Python code to do that.
  • 00:25:10
    You can write C-sharp code to do that.
  • 00:25:12
    You know, any -- you can build it yourself,
  • 00:25:15
    or you can use one of our partners.
  • 00:25:17
    Shireen Bahadur: Exactly.
  • 00:25:17
    So, the second way to actually use Open Mirroring is
  • 00:25:20
    integrating it with our vast partner ecosystem.
  • 00:25:22
    So we have partners like Striim, Oracle, MongoDB, DataStax,
  • 00:25:26
    that are actually integrating their data solutions
  • 00:25:28
    with Open Mirroring APIs.
  • 00:25:30
    And we're really excited to work with these partners
  • 00:25:32
    in the next few months
  • 00:25:33
    to increase the number of mirroring sources.
  • 00:25:35
    Amir Netz: And best of all, it is still all free.
  • 00:25:38
    Shireen Bahadur: Yeah.
  • 00:25:38
    So, Open Mirroring is new, right?
  • 00:25:40
    But it still sits under the umbrella
  • 00:25:41
    of mirroring as a whole.
  • 00:25:43
    So, that means all your replication from your sources
  • 00:25:46
    into OneLake is free, allowing you to just focus
  • 00:25:48
    on bringing your data gravity into Fabric.
  • 00:25:50
    Yeah.
  • 00:25:50
    Amir Netz: Awesome.
  • 00:25:50
    Thank you so much, Shireen.
  • 00:25:51
    Shireen Bahadur: Yes.
  • 00:25:51
    Thank you, Amir.
  • 00:25:52
    Have a great conference, everyone.
  • 00:25:53
    [APPLAUSE]
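
As the exchange above notes, an Open Mirroring change file is just a plain file you can produce with any tool, including a few lines of Python. Below is a hypothetical sketch that writes one change batch like the demo's, using the row-marker values described on stage (1 = insert, 2 = delete, 4 = upsert); the marker column name, the data columns, and the landing-zone location are placeholders, so check the Open Mirroring landing-zone documentation for the exact conventions.

```python
# Minimal sketch: build an Open Mirroring change file like the one in the demo.
# Row-marker values follow the demo (1 = insert, 2 = delete, 4 = upsert).
# The "__rowMarker__" column name, the data columns, and where the file must
# land are placeholders -- see the Open Mirroring landing-zone spec.
import csv

changes = [
    # (row_marker, order_id, price)
    (4, 1, 100000.00),   # upsert: correct the price on order 1
    (2, 2, None),        # delete orders 2, 3, and 4
    (2, 3, None),
    (2, 4, None),
    (1, 63, 129.99),     # insert new orders
    (1, 64, 54.50),
]

with open("orders_changes_0001.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["__rowMarker__", "OrderId", "Price"])
    for marker, order_id, price in changes:
        writer.writerow([marker, order_id, "" if price is None else price])

# The file is then dropped into the mirrored database's landing zone
# (for example via OneLake file explorer), and Fabric converts it into
# the Delta table automatically.
```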
  • 00:25:55
    Amir Netz: Okay.
  • 00:25:55
    A lot of you use Fabric.
  • 00:25:59
    Lots of you have a lot of data in Fabric.
  • 00:26:00
    You want to make sure it's secured, it's governed,
  • 00:26:03
    so we are constantly working on it.
  • 00:26:04
    So, first thing I want to announce is certification.
  • 00:26:07
    You're using it everywhere.
  • 00:26:09
    You want to make sure that the solution you're building is
  • 00:26:11
    certified, built on a certified platform.
  • 00:26:12
    So, we are now announcing the last major certification
  • 00:26:16
    of Fabric, which is the FedRAMP certification that you need
  • 00:26:19
    when you work with the federal government of the U.S.,
  • 00:26:22
    something that is necessary.
  • 00:26:23
    It's here.
  • 00:26:24
    This month we announced it.
  • 00:26:25
    So, that is the last in the line that we actually need
  • 00:26:28
    of the major certifications.
  • 00:26:29
    All the other certifications are basically derived
  • 00:26:32
    from those six certifications that we have here.
  • 00:26:35
    Of course, features.
  • 00:26:36
    Lots and lots and lots of governance features
  • 00:26:39
    and security features.
  • 00:26:40
    Lots have shipped.
  • 00:26:42
    Lots are constantly being worked on.
  • 00:26:45
    It is the top priority for us to make sure
  • 00:26:46
    that you have everything you need to govern
  • 00:26:48
    and secure your platform.
  • 00:26:50
    And to show some of the innovation coming in this space,
  • 00:26:52
    I'm going to invite Adi to the stage.
  • 00:26:54
    Adi, hi. How are you doing?
  • 00:26:57
    Adi Regev: Hello.
  • 00:26:57
    Hello.
  • 00:26:57
    Amir Netz: Faster.
  • 00:26:57
    Adi Regev: Hi, everyone.
  • 00:27:00
    [APPLAUSE]
  • 00:27:01
    Amir Netz: Okay.
  • 00:27:02
    So, we're working on a lot of features, right?
  • 00:27:05
    Adi Regev: Right.
  • 00:27:06
    So, Fabric has so many built-in governance
  • 00:27:09
    and security features today.
  • 00:27:10
    And let's talk about some
  • 00:27:11
    of the new announcements that are coming up.
  • 00:27:13
    Amir Netz: Surge protection.
  • 00:27:14
    Adi Regev: Surge protection.
  • 00:27:15
    Right. So Fabric's on fire, right?
  • 00:27:17
    It's being widely adopted by so many enterprises.
  • 00:27:20
    And some of that means that they're starting
  • 00:27:21
    to actually leverage it for their mission-critical tasks.
  • 00:27:25
    We need to make sure these aren't compromised
  • 00:27:27
    and remain a top priority.
  • 00:27:28
    Now, for that, we now introduce controls for capacity admins
  • 00:27:32
    so that they can actually set thresholds.
  • 00:27:34
    And if they reach those thresholds,
  • 00:27:36
    any background jobs running will just not run, right,
  • 00:27:39
    and prioritize those mission-critical needs.
  • 00:27:41
    Amir Netz: So, you can actually
  • 00:27:43
    deprioritize the development workspaces, or the test
  • 00:27:47
    workspaces, to make sure that the most important part, the
  • 00:27:49
    mission-critical part of the application, continues to run
  • 00:27:51
    even under major loads on your capacity.
  • 00:27:54
    Adi Regev: Right.
  • 00:27:54
    And we also provide flexibility there
  • 00:27:56
    so that you can set different thresholds
  • 00:27:58
    and limits per different capacities so you have
  • 00:28:00
    that granular control.
  • 00:28:01
    Amir Netz: Awesome.
  • 00:28:02
    Now, workspace monitoring.
  • 00:28:03
    Another big thing we're announcing today.
  • 00:28:05
    Adi Regev: Right.
  • 00:28:05
    So, visibility is key, right, especially in all
  • 00:28:08
    of these mission-critical pieces.
  • 00:28:09
    Now, we provide already a lot of monitoring capabilities
  • 00:28:12
    in Fabric, admin monitoring for admins
  • 00:28:15
    or the monitoring you have for data owners.
  • 00:28:17
    But now, we provide
  • 00:28:18
    for application developers, workspace monitoring
  • 00:28:21
    so that they can actually track in granular detail what's happening
  • 00:28:25
    with their relevant projects, right,
  • 00:28:27
    and perform root cause analysis, track downtime
  • 00:28:30
    or performance issues and see all of those in relevant logs.
  • 00:28:35
    Amir Netz: So, this is kind of the
  • 00:28:36
    monitoring that you need for DevOps.
  • 00:28:37
    So, we really want to understand how your application is
  • 00:28:39
    performing, how it's working, what's going inside.
  • 00:28:41
    So, that's workspace monitoring.
  • 00:28:43
    It's actually built on top
  • 00:28:44
    of the real-time intelligence technology we have.
  • 00:28:46
    Adi Regev: Right.
  • 00:28:46
    It's all saved into an eventhouse
  • 00:28:48
    so that they can later query those and, you know,
  • 00:28:51
    based on that, perform ad hoc queries
  • 00:28:53
    or even save the query sets for later.
  • 00:28:55
    Amir Netz: Yeah.
  • 00:28:55
    You know, you can run any query you want using the KQL language.
  • 00:28:57
    That's awesome.
  • 00:28:58
    Adi Regev: Exactly.
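Since workspace monitoring lands in an eventhouse you can query with KQL, a sketch like the following shows how an application developer might pull recent failures programmatically. It uses the azure-kusto-data Python client; the query URI, database name, table name, and column names are placeholders, assumptions for illustration rather than the actual monitoring schema.

```python
# Hedged sketch: querying a workspace-monitoring eventhouse with KQL.
# The URI, database, table, and column names below are assumptions.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://<your-eventhouse-query-uri>"  # placeholder
database = "WorkspaceMonitoring"                      # placeholder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# Ad hoc root-cause query: failed operations in the last 24 hours.
query = """
OperationLogs
| where Timestamp > ago(24h)
| where Status == "Failed"
| summarize Failures = count() by ItemName, OperationName
| order by Failures desc
"""

response = client.execute(database, query)
for row in response.primary_results[0]:
    print(row["ItemName"], row["OperationName"], row["Failures"])
```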
  • 00:28:58
    Amir Netz: Okay.
  • 00:28:59
    Now, we have the big one.
  • 00:28:59
    This is your baby, Adi, right?
  • 00:29:01
    Adi Regev: It's definitely one
  • 00:29:02
    of my favorite children.
  • 00:29:06
    The OneLake Catalog, which is now generally available.
  • 00:29:09
    So, this is actually the evolution from the known
  • 00:29:12
    and loved OneLake Data Hub into a full-blown catalog
  • 00:29:15
    for your OneLake data.
  • 00:29:16
    And with that, we allow all Fabric users, so data engineers,
  • 00:29:20
    data scientists, business analysts, all the Fabric users
  • 00:29:23
    to easily discover all of their data, right?
  • 00:29:26
    They can then manage them easily in place
  • 00:29:29
    from within the catalog.
  • 00:29:30
    And they can also govern their entire individual data estate
  • 00:29:35
    with relevant insights
  • 00:29:36
    and recommended actions on the relevant data.
  • 00:29:39
    Amir Netz: So data discovery, data
  • 00:29:41
    management, data governance, all in one.
  • 00:29:43
    Adi Regev: All in one.
  • 00:29:44
    Amir Netz: For everything we have in one.
  • 00:29:44
    Adi Regev: For all item types, right?
  • 00:29:46
    Amir Netz: Okay.
  • 00:29:46
    Let's take a look.
  • 00:29:46
    Okay?
  • 00:29:47
    So, we'll start with discovery, right?
  • 00:29:49
    Adi Regev: Right.
  • 00:29:49
    And discovery has been a key challenge for enterprises.
  • 00:29:53
    So, I'm in the Explore tab in the new OneLake Catalog,
  • 00:29:57
    and I can start by browsing my domains, right?
  • 00:29:59
    So, I'll browse my domains and subdomains to search
  • 00:30:01
    for the relevant data per my business unit.
  • 00:30:04
    I'll select sales in this case, because I'm coming from there
  • 00:30:06
    and I want to build a relevant report.
  • 00:30:08
    I can explore by endorsed items, or favorites, or filter
  • 00:30:11
    to a relevant workspace.
  • 00:30:12
    And then I'll select the relevant content that I need.
  • 00:30:15
    Right?
  • 00:30:16
    Now, this has been a key ask to support all item types
  • 00:30:19
    within the catalog, and now with that evolution,
  • 00:30:21
    we actually do that, and you have all
  • 00:30:23
    of the OneLake data estate at your fingertips.
  • 00:30:24
    So, I can search for all of the data items, right,
  • 00:30:28
    like a lakehouse, a semantic model, and the new SQL database,
  • 00:30:33
    which is now introduced in Fabric,
  • 00:30:34
    the popular insight items, like Power BI reports or dashboards,
  • 00:30:38
    process items, like pipelines or notebooks, all of the data
  • 00:30:42
    and items at my fingertips.
  • 00:30:45
    Right?
  • 00:30:45
    Next, another key feature has been for tags, right,
  • 00:30:49
    so that you can curate your data
  • 00:30:51
    and optimize discovery based on tags.
  • 00:30:53
    We now support that, and I can select relevant tags
  • 00:30:57
    to filter down my search.
  • 00:31:00
    I'll look into the warehouse sales booster next,
  • 00:31:02
    and I can see relevant metadata, like description, owner,
  • 00:31:06
    endorsement, sensitivity label,
  • 00:31:08
    but I can also browse its schema, so its actual tables
  • 00:31:11
    and views to see if that's the data I'm looking for.
  • 00:31:13
    Amir Netz: That was a major ask,
  • 00:31:14
    going all the way to the column.
  • 00:31:15
    Adi Regev: Major ask.
  • 00:31:16
    Major ask.
  • 00:31:17
    So, I'll move on to the semantic model, which seems more fitting
  • 00:31:19
    to what I'm looking for.
  • 00:31:20
    It's also endorsed as master data.
  • 00:31:23
    Again, I'll explore the tables and columns, and based on that,
  • 00:31:26
    I see it's the item I've been looking for.
  • 00:31:28
    Right? So, once I've found what I need,
  • 00:31:30
    I can perform relevant actions.
  • 00:31:31
    For instance, I can click on "Explore This Data"
  • 00:31:34
    to actually derive key insights on the fly,
  • 00:31:36
    visualize those insights, and once I have what I need,
  • 00:31:40
    I can either save it for later or share with others.
  • 00:31:43
    Amir Netz: Okay.
  • 00:31:43
    So, one place to find and discover
  • 00:31:46
    every item we have in Fabric, whether it's a data item,
  • 00:31:49
    process item, insight item, and so forth.
  • 00:31:52
    Now, we have the need now to manage it.
  • 00:31:55
    Adi Regev: Right.
  • 00:31:55
    Amir Netz: And I don't want to go every
  • 00:31:56
    time to the workspace to do that.
  • 00:31:57
    I can do it all from within the catalog.
  • 00:31:59
    Right?
  • 00:31:59
    Adi Regev: Right.
  • 00:31:59
    So, the next piece is allowing you
  • 00:32:01
    to manage your items in place easily.
  • 00:32:04
    Amir Netz: Let's take a look.
  • 00:32:05
    Adi Regev: Let's have a look.
  • 00:32:07
    So, I've moved on.
  • 00:32:10
    I'm in that same item.
  • 00:32:11
    I moved on to lineage view, where I can see now,
  • 00:32:13
    for instance, end-to-end relations for a selected item
  • 00:32:16
    down from the store analysis report, all the way
  • 00:32:18
    up to the SQL database.
  • 00:32:20
    I can move on to the list view to see additional information
  • 00:32:23
    like endorsement or sensitivity, and here, for example,
  • 00:32:26
    I actually see that some are labeled as confidential,
  • 00:32:28
    while others are labeled as general, but I remember
  • 00:32:31
    that the Global Store SQL database actually contains
  • 00:32:33
    sensitive information.
  • 00:32:34
    Amir Netz: Yep.
  • 00:32:35
    Adi Regev: So, I want to go ahead and fix
  • 00:32:36
    that and adjust the sensitivity, and I can do that all
  • 00:32:39
    from within the catalog.
  • 00:32:40
    I'll easily access the settings,
  • 00:32:42
    and adjust the relevant sensitivity label,
  • 00:32:44
    and once I do that, not only does it fix that SQL database,
  • 00:32:48
    but actually, all of the downstream items inherited
  • 00:32:51
    that sensitivity label automatically
  • 00:32:53
    to ensure they all remain compliant and consistent.
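The automatic downstream inheritance described here can be pictured as a label propagating along the lineage graph. The sketch below is a toy illustration of that idea in Python, not how Fabric implements label inheritance; the item names loosely mirror the demo and are assumptions.

```python
# Toy illustration of sensitivity-label inheritance along a lineage graph.
# Not Fabric's implementation; item names are illustrative.
from collections import deque

# item -> items that consume it (downstream lineage)
lineage = {
    "GlobalStore SQL database": ["Sales lakehouse"],
    "Sales lakehouse": ["Store semantic model"],
    "Store semantic model": ["Store analysis report"],
    "Store analysis report": [],
}
labels = {item: "General" for item in lineage}

def set_label(item: str, label: str) -> None:
    """Apply a label to an item and propagate it to everything downstream."""
    queue = deque([item])
    while queue:
        current = queue.popleft()
        labels[current] = label
        queue.extend(lineage.get(current, []))

set_label("GlobalStore SQL database", "Confidential")
print(labels)  # every downstream item now carries "Confidential"
```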
  • 00:32:56
    I'll move on to the monitor tab, where I can see all
  • 00:32:58
    of the last runs, and I can see the last one failed, so again,
  • 00:33:01
    I can trigger a refresh to update
  • 00:33:03
    that outdated item directly from within the catalog.
  • 00:33:05
    And I can track permissions and manage my permissions
  • 00:33:08
    for that item, both internal and external shares,
  • 00:33:10
    all available within the catalog.
  • 00:33:12
    Amir Netz: So, notice that we never have to
  • 00:33:13
    go through the workspace ever.
  • 00:33:14
    Adi Regev: Right.
  • 00:33:14
    And now that, for instance, my item is up-to-date
  • 00:33:17
    and it's labeled correctly, I can go on and collaborate,
  • 00:33:20
    share it with others, and perform other activities.
  • 00:33:22
    Amir Netz: Okay.
  • 00:33:23
    Governance.
  • 00:33:23
    Okay, Governance is not about managing individual items.
  • 00:33:26
    It's about the entire estate that you have, right?
  • 00:33:28
    Adi Regev: Right.
  • 00:33:28
    Right.
  • 00:33:28
    Amir Netz: So show us what we have in Governance.
  • 00:33:30
    Adi Regev: So Governance, which is coming soon in preview,
  • 00:33:33
    allows you to govern your individual data estate,
  • 00:33:36
    get key insights, drive actions, and again, I can filter
  • 00:33:39
    by a selected domain, or I can, you know,
  • 00:33:42
    choose to view all the insights on my domains at once.
  • 00:33:44
    I get key insights at a glance which are relevant to me,
  • 00:33:48
    or I can click to view more, where I'll see a detailed report
  • 00:33:51
    of all of my individual data estate, so data hierarchy,
  • 00:33:55
    data inventory, data refreshes, my entire status, right?
  • 00:33:59
    But I can also track how secure and compliant my data is,
  • 00:34:02
    with sensitivity label coverage and distribution by item types,
  • 00:34:06
    and I can also see how curated my items are with my use of tags
  • 00:34:11
    or descriptions or endorsement, so it makes it really easy
  • 00:34:14
    to understand from a bird's-eye view what's going on.
  • 00:34:16
    And back in the main view,
  • 00:34:18
    I can actually see recommended actions, especially for me,
  • 00:34:21
    so actions like increasing sensitivity label coverage
  • 00:34:24
    or refreshing outdated items.
  • 00:34:26
    And if I click on a card, I'll see the details,
  • 00:34:29
    I'll see an explanation
  • 00:34:30
    of why I'm getting this recommended action,
  • 00:34:32
    and steps I can take to address it.
  • 00:34:35
    And last, I can -- we mentioned that we have
  • 00:34:37
    so many built-in governance capabilities and also integrated
  • 00:34:40
    with Microsoft Purview, et cetera, from within Fabric,
  • 00:34:43
    so I get central access to all of those
  • 00:34:45
    from within the govern tab.
  • 00:34:47
    Amir Netz: That's awesome, and so
  • 00:34:49
    beautiful, right? What do you think, guys?
  • 00:34:50
    [APPLAUSE]
  • 00:34:51
    Now, not only is the catalog itself extremely useful
  • 00:34:55
    for anybody who's using Fabric,
  • 00:34:56
    the catalog is really the gateway to the rest
  • 00:34:59
    of the Microsoft stack.
  • 00:35:00
    Adi Regev: Right.
  • 00:35:00
    Amir Netz: You see all the products that
  • 00:35:02
    we have at Microsoft that integrate with the catalog,
  • 00:35:04
    whether it's Microsoft Excel or Copilot Studio,
  • 00:35:08
    or we've seen in the AI keynote, we've seen how it integrates
  • 00:35:12
    with OneLake, all that, it's integrated everywhere.
  • 00:35:15
    And I'll give you an example: in Microsoft Teams,
  • 00:35:18
    this is what the catalog looks like.
  • 00:35:19
    Adi Regev: Right.
  • 00:35:20
    Amir Netz: It's exactly the same way it looks like.
  • 00:35:21
    Adi Regev: So, we already have today the
  • 00:35:23
    OneLake Data Hub there, and soon you'll have the full-blown
  • 00:35:25
    OneLake Catalog. It's that very same one I showed you.
  • 00:35:27
    You'll be able to filter by domain, see all item types,
  • 00:35:30
    the rich metadata, and from there, access everything.
  • 00:35:33
    Amir Netz: Thank you so much, Adi.
  • 00:35:34
    Adi Regev: Thank you.
  • 00:35:34
    [APPLAUSE]
  • 00:35:35
    Amir Netz: Okay.
  • 00:35:37
    Taking us now to the last part, the last pillar,
  • 00:35:40
    it's the AI-enabled insight.
  • 00:35:42
    This is the world of the business users.
  • 00:35:44
    This is the world of Power BI.
  • 00:35:45
    Power BI has been around for 10 years.
  • 00:35:48
    It is the primary tool for every business user
  • 00:35:50
    to get insight into their data.
  • 00:35:53
    We have so many, so many.
  • 00:35:56
    Tens of millions of users of Power BI.
  • 00:35:58
    I want to invite Patrick to just join me on stage
  • 00:36:00
    and show us what we are doing here.
  • 00:36:01
    How do we bring AI to the world of the business user?
  • 00:36:04
    Patrick Baumgartner: Hello, everyone.
  • 00:36:06
    Yeah. So, in Power BI, you know, the key thing
  • 00:36:08
    for us has been thinking about how do we use AI
  • 00:36:10
    to really simplify how everyone experiences their data
  • 00:36:13
    and how everyone interacts with their data.
  • 00:36:14
    I'm going to go to the next slide.
  • 00:36:16
    And when you think about personas across the board,
  • 00:36:18
    and you've seen a couple of demos today about Copilot coming
  • 00:36:20
    in and helping me generate reports quickly,
  • 00:36:22
    it can help me get answers to questions.
  • 00:36:25
    And as we've looked at how people are actually using it,
  • 00:36:27
    we've seen incredible productivity boosts.
  • 00:36:29
    Amir Netz: And we actually measured it.
  • 00:36:30
    Okay. I want to share with you a study.
  • 00:36:32
    A real one, you know, with about 200 people that we actually measured
  • 00:36:35
    in the lab to see the productivity.
  • 00:36:36
    And look at the productivity gain here.
  • 00:36:38
    It's a 52% gain in the ability
  • 00:36:43
    to complete the task faster.
  • 00:36:44
    Patrick Baumgartner: Yeah, exactly.
  • 00:36:44
    So, if we take people, we give them a task
  • 00:36:46
    and we give them a task with Copilot and give them a task
  • 00:36:48
    without Copilot, we actually see a dramatic increase
  • 00:36:50
    in productivity.
  • 00:36:51
    And a lot of times we think about AI, we think, hey,
  • 00:36:53
    AI is going to do everything end-to-end.
  • 00:36:55
    And it's not always that.
  • 00:36:56
    It's always about, you know, helping me get
  • 00:36:57
    to that next task a little bit faster, adding ambient insights.
  • 00:37:00
    So, really exciting results for us.
  • 00:37:02
    Amir Netz: Yeah.
  • 00:37:03
    So you get faster results, you get more accurate results,
  • 00:37:06
    and the most important thing, 90% of those
  • 00:37:08
    who used the Copilot wanted to continue to use it.
  • 00:37:10
    Patrick Baumgartner: Yeah.
  • 00:37:10
    And one of the things we've heard, we're hearing from all of you, is:
  • 00:37:12
    how do we streamline how people get access to Copilot?
  • 00:37:15
    How do we understand cost?
  • 00:37:16
    How do we make it more available?
  • 00:37:18
    Amir Netz: And this is really
  • 00:37:19
    where we have a great announcement because it means
  • 00:37:21
    that now you can have what we call the Fabric AI Capacities.
  • 00:37:24
    You can designate a capacity in your tenant
  • 00:37:27
    to cover all the reports you have in the organization,
  • 00:37:30
    whether the reports are coming from workspaces
  • 00:37:33
    that have capacities assigned to them or those
  • 00:37:35
    that don't have capacities assigned to them.
  • 00:37:36
    All your reports can be powered
  • 00:37:38
    by the Copilot using that capacity.
  • 00:37:40
    Patrick Baumgartner: Yeah.
  • 00:37:40
    So, the Fabric AI capacities is a new mechanism you can use
  • 00:37:43
    to more easily deploy AI and Copilot to your users.
  • 00:37:46
    A very exciting announcement.
  • 00:37:47
    Amir Netz: Okay.
  • 00:37:48
    Now, we have AI Skills.
  • 00:37:50
    Patrick Baumgartner: Yeah.
  • 00:37:51
    So, one of the things we wanted to start by talking about is this:
  • 00:37:53
    now, you saw a lot of stuff with AI Foundry and other ways
  • 00:37:56
    to build chat experiences on top of your data
  • 00:37:58
    and integrate that into your apps.
  • 00:38:00
    And we have a way to help you simplify that in Fabric as well,
  • 00:38:02
    because you have lots of different types of data.
  • 00:38:04
    And to understand that data, you need to kind
  • 00:38:06
    of help bring it together and add a little bit of expertise.
  • 00:38:08
    And then that helps you streamline how users get access.
  • 00:38:11
    And that feature is called AI Skills.
  • 00:38:12
    So, let's go ahead and take a look at a quick demo
  • 00:38:14
    so you understand this capability.
  • 00:38:15
    So, here I am in the same Contoso Analytics workspace.
  • 00:38:18
    We've been using a few of these for these demos.
  • 00:38:20
    And I've got lots of different types
  • 00:38:21
    of data that's all coming together.
  • 00:38:23
    And what I want to do is create a customer data expert
  • 00:38:26
    that pulls data from a couple different sources
  • 00:38:29
    but brings it together in a way I can kind of control.
  • 00:38:31
    And so to do that, I'm going to create an AI skill.
  • 00:38:33
    So, this is something we've had in preview for a while.
  • 00:38:35
    And previously, you could only use a lakehouse
  • 00:38:37
    as the data source.
  • 00:38:38
    And now we're excited to announce
  • 00:38:40
    that you can add additional data sources as well
  • 00:38:42
    into the same AI skill.
  • 00:38:43
    So, I'm going to start
  • 00:38:44
    by grabbing a KQL database that's got some real-time
  • 00:38:49
    delivery information about packages for my customers.
  • 00:38:52
    And just by selecting the data, I can start asking questions
  • 00:38:56
    and using a large language model
  • 00:38:57
    to give me answers from that data.
  • 00:38:58
    So, I can say, break down the number
  • 00:39:00
    of package delivery trips per month
  • 00:39:01
    and which month has the most deliveries.
  • 00:39:02
    And automatically, it recognized the type of data set,
  • 00:39:05
    generated the correct Kusto query, and gave me the answer.
  • 00:39:08
    So, the setup is really, really simple.
  • 00:39:10
    And I could ask statistical questions as well.
  • 00:39:12
    So, what's the 99th percentile for trip distance?
  • 00:39:15
    It's going to generate the correct query
  • 00:39:16
    to go ahead and give me that.
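For a sense of what "generating the correct Kusto query" means here, the snippet below shows the kind of KQL the two demo questions could translate into. The table and column names (Deliveries, DeliveryTime, TripDistanceKm) are assumptions for illustration; the AI skill generates its own query from the schema it was given and runs it against the KQL database for you.

```python
# Hedged illustration of KQL an AI skill could generate for the demo questions.
# Table and column names are assumptions; the skill would execute these
# against the selected KQL database (for example, via the azure-kusto-data client).

monthly_breakdown_kql = """
Deliveries
| summarize Trips = count() by Month = startofmonth(DeliveryTime)
| order by Trips desc
"""

p99_trip_distance_kql = """
Deliveries
| summarize percentile(TripDistanceKm, 99)
"""
```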
  • 00:39:17
    Amir Netz: The data could be everywhere.
  • 00:39:18
    It's not just in one database.
  • 00:39:19
    Patrick Baumgartner: Exactly.
  • 00:39:20
    And I don't necessarily always want to unify
  • 00:39:21
    that into one data structure.
  • 00:39:23
    I want to be able to just kind of link to where the data is.
  • 00:39:25
    So let's add a couple other data sources.
  • 00:39:27
    I'm going to add a semantic model from Power BI.
  • 00:39:29
    And I'm going to add a lakehouse
  • 00:39:31
    where we have some additional data.
  • 00:39:33
    So, for customer loyalty, for orders and sales.
  • 00:39:36
    And so I can just select the data I want.
  • 00:39:38
    And that's all the setup I need to do.
  • 00:39:40
    And so now what I'm going to do is go through
  • 00:39:42
    and select the specific tables I want the AI to have access to.
  • 00:39:45
    And you can see from the lakehouse,
  • 00:39:49
    there's a couple of different tables.
  • 00:39:50
    And here's the lakehouse.
  • 00:39:51
    And then previously, the semantic model as well.
  • 00:39:54
    So again, incredibly easy setup.
  • 00:39:56
    And now the AI has access to the schema, so we can ask
  • 00:40:00
    questions about that data automatically.
  • 00:40:02
    Amir Netz: And the AI will figure
  • 00:40:04
    out where to get the data from?
  • 00:40:05
    Patrick Baumgartner: Exactly.
  • 00:40:06
    It's going to just look at my question.
  • 00:40:07
    It's going to route to the correct database
  • 00:40:08
    and generate the correct query.
  • 00:40:10
    So, I can say, what's the name of the top loyalty customer?
  • 00:40:12
    It found Teodoro.
  • 00:40:14
    And in this case, it generated a DAX query to go ahead
  • 00:40:16
    and pull that information out.
  • 00:40:17
    But what's really cool now is I can use
  • 00:40:19
    that information in the chat.
  • 00:40:21
    So I can say, okay.
  • 00:40:22
    We have the chat context.
  • 00:40:23
    So I can say, what additional information do we have
  • 00:40:26
    about him?
  • 00:40:28
    And it's going to know Teodoro from the first answer
  • 00:40:31
    and then use that to look up information
  • 00:40:33
    in the next database, in this case, the lakehouse.
  • 00:40:35
    So again, the end user doesn't have to know
  • 00:40:37
    where any of this data is.
  • 00:40:38
    They can just ask questions.
  • 00:40:39
    And the AI is kind of traversing across these data sets.
  • 00:40:42
    And I can keep asking questions along this route.
  • 00:40:45
    And it's super easy to use.
  • 00:40:47
    So, there we see the data now coming from the lakehouse
  • 00:40:50
    for the additional information.
  • 00:40:52
    But anyway, it's a super cool way to bring data together,
  • 00:40:56
    and then you can think about how to integrate these
  • 00:40:58
    into your other chat experiences all the way up the stack.
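The routing behavior described in this demo, where the skill looks at the question, picks the source whose schema fits, and generates a query in that source's language, can be pictured with a deliberately simple sketch. This is a toy keyword-based router meant only to illustrate the idea; the actual AI skill uses a large language model over the schemas you selected, and the source, table, and keyword names here are assumptions.

```python
# Toy illustration of routing a question to one of several data sources.
# The real AI skill does this with an LLM over the selected schemas;
# this keyword-matching version only illustrates the concept.

sources = {
    "kql_database":   {"tables": {"Deliveries"}, "query_language": "KQL"},
    "lakehouse":      {"tables": {"CustomerLoyalty", "Orders", "Sales"}, "query_language": "SQL"},
    "semantic_model": {"tables": {"LoyaltyMembers"}, "query_language": "DAX"},
}

keywords_to_tables = {
    "delivery": "Deliveries",
    "loyalty": "LoyaltyMembers",
    "order": "Orders",
    "sales": "Sales",
}

def route(question: str) -> tuple[str, str]:
    """Pick the source whose tables best match the question's keywords."""
    q = question.lower()
    for keyword, table in keywords_to_tables.items():
        if keyword in q:
            for name, source in sources.items():
                if table in source["tables"]:
                    return name, source["query_language"]
    raise ValueError("No matching source found")

print(route("Break down the number of package delivery trips per month"))
# ('kql_database', 'KQL')
print(route("What's the name of the top loyalty customer?"))
# ('semantic_model', 'DAX')
```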
  • 00:41:00
    Amir Netz: Okay, we have a couple of more
  • 00:41:02
    demos that are really not about what we ship today, but what's
  • 00:41:05
    coming in the next few months.
  • 00:41:06
    But I think it's really, really worthwhile to see kind
  • 00:41:09
    of what's in the pipeline.
  • 00:41:10
    Okay. The first one is how we present
  • 00:41:13
    and how we provide this Copilot experience
  • 00:41:15
    for the business user.
  • 00:41:16
    Now, typically, we say, hey, business users,
  • 00:41:19
    you go to a report, and then on the sidebar,
  • 00:41:20
    you can ask questions about what you see in the report,
  • 00:41:22
    but you can do better than that, right?
  • 00:41:24
    Patrick Baumgartner: Exactly, so we don't want end
  • 00:41:25
    users to always have to know what report to go
  • 00:41:27
    to, because they don't necessarily know
  • 00:41:28
    where the data is.
  • 00:41:29
    So, let's take a look here at this demo,
  • 00:41:31
    and what you notice here is I'm in the Power BI homepage,
  • 00:41:35
    and there's a new icon up in the corner that's a Copilot icon.
  • 00:41:38
    And if I click here, I'm getting an immersive Copilot experience
  • 00:41:41
    that knows how to traverse all the data I have access to.
  • 00:41:45
    So, I can come to this one location, so you think
  • 00:41:47
    about more of a business-style user,
  • 00:41:49
    and they can just start asking questions, like, you know,
  • 00:41:50
    how many loyalty program members did we add this month?
  • 00:41:53
    Now, this is smart enough to look,
  • 00:41:55
    do I have access to AI Skills?
  • 00:41:56
    Do I have access to semantic models?
  • 00:41:57
    Do I have access to reports?
  • 00:41:58
    It's going to figure out what the right information source is
  • 00:42:01
    and answer my question.
  • 00:42:02
    So in this case, we added about 918 members, and best of all,
  • 00:42:06
    it gives me reasoning of why it found this answer,
  • 00:42:09
    and a link to go back to that original source if I want
  • 00:42:11
    to kind of do further analysis,
  • 00:42:13
    but it's still a conversational chat,
  • 00:42:15
    so I can ask more questions, and so, I can say break this
  • 00:42:18
    down by source, and it's going to go ahead
  • 00:42:20
    and generate, now, a visual for me,
  • 00:42:22
    and I can copy and paste that, go ahead and use it.
  • 00:42:25
    Maybe I want to get the table of information, so I can ask, like,
  • 00:42:27
    what are the top members with anniversaries this month,
  • 00:42:30
    because maybe I want to go send them an email or something,
  • 00:42:32
    so now I have the table.
  • 00:42:33
    Amir Netz: But it's not limited
  • 00:42:33
    to just one source, right?
  • 00:42:35
    Patrick Baumgartner: Exactly.
  • 00:42:35
    The best part here is, if I want to switch gears now
  • 00:42:37
    and ask about, say, HR, say what are the open positions
  • 00:42:40
    that we have, we're smart enough to look at, again,
  • 00:42:42
    what you have available, and switch over now, and it's going
  • 00:42:45
    to bring back an answer from my HR reporting database,
  • 00:42:48
    and so, I can traverse these very easily.
  • 00:42:51
    And so finally, I can also ask
  • 00:42:53
    about just reports I have access to, so hey,
  • 00:42:55
    list the most interesting reports about a specific topic,
  • 00:42:57
    and now it's gotten me that list,
  • 00:42:59
    so I can go ahead and open that report.
  • 00:43:01
    And then, of course, we have Copilot baked in here as well,
  • 00:43:03
    so I can continue just using voice
  • 00:43:05
    as my interaction mechanism here.
  • 00:43:07
    Amir Netz: That's awesome.
  • 00:43:08
    Yeah, so a brand new way for business users to work with AI
  • 00:43:11
    on the top of the data, and now we're going to get
  • 00:43:14
    to the last part, okay?
  • 00:43:15
    Last demo, but now you have to really,
  • 00:43:17
    really concentrate, okay?
  • 00:43:18
    We've had BI, Power BI, connecting the world
  • 00:43:21
    of the business users and the business application
  • 00:43:23
    to the world of data, and until now, it was all about analytics,
  • 00:43:27
    but now Fabric is more than just analytics.
  • 00:43:29
    It has both the transactional databases
  • 00:43:31
    and the analytical capabilities all in one.
  • 00:43:33
    So, can we really bring together the world of Power BI and analytics
  • 00:43:38
    with the world of transactional databases?
  • 00:43:40
    So, I want to teach you a new word today that you're going
  • 00:43:42
    to remember for the next few years.
  • 00:43:44
    It's called translytical.
  • 00:43:45
    It's a combination of transaction and analytical,
  • 00:43:48
    and it's really about creating an application
  • 00:43:52
    that combines the two elements.
  • 00:43:53
    And we're going to show you a demo here how we take the
  • 00:43:55
    database plus the data functions in Fabric plus Power BI
  • 00:44:00
    and create translytical applications in Fabric.
  • 00:44:03
    And what's really cool about it is the Power BI canvas
  • 00:44:05
    transformed from being a read-only canvas
  • 00:44:09
    to being a canvas from which you can actually update the operational
  • 00:44:12
    database directly within your report.
  • 00:44:15
    So, let's take a look at that.
  • 00:44:17
    Patrick Baumgartner: Yeah, so this is a sneak peek
  • 00:44:17
    of what's coming here,
  • 00:44:18
    and I think this is the most exciting demo,
  • 00:44:20
    that you're going to see this week,
  • 00:44:21
    because it's taking the databases in Fabric,
  • 00:44:23
    which I think is the most exciting thing this week,
  • 00:44:24
    and taking it one step further.
  • 00:44:26
    So, let's go ahead and take a look.
  • 00:44:27
    So again, I'm in one of my solutions here,
  • 00:44:30
    and I have my data that's being stored in my SQL database.
  • 00:44:33
    I've got a bunch of other information coming together,
  • 00:44:35
    and what I want to do is make it so my end users can update data
  • 00:44:38
    in the database directly from where they work.
  • 00:44:40
    And so, if I look at my opportunity database,
  • 00:44:43
    I see a bunch of sales information
  • 00:44:45
    that we have going on, and there are always discounts we want
  • 00:44:47
    to add or things we want to change,
  • 00:44:49
    and I don't necessarily want people to move
  • 00:44:50
    from their analytical area to a different app
  • 00:44:53
    to be able to do that.
  • 00:44:54
    So we have, as Amir mentioned, these user data functions inside
  • 00:44:57
    of Fabric, so I can write code here
  • 00:44:59
    that is updating the data in that SQL database.
  • 00:45:02
    So, here you can see some
  • 00:45:03
    of these different update statements,
  • 00:45:04
    and we added a couple functions to update either one opportunity
  • 00:45:07
    or a group of opportunities.
  • 00:45:09
    And I can go ahead and test that out.
  • 00:45:10
    And so if I -- you know, I'm going to update the status
  • 00:45:14
    to open, I'm going to say
  • 00:45:16
    that this is the specific opportunity.
  • 00:45:18
    I'm going to give it a 50% discount,
  • 00:45:20
    and call it a test update.
  • 00:45:21
    And if I hit "Run", it's going to go ahead
  • 00:45:24
    and update those records in the database,
  • 00:45:25
    or that specific record in the database,
  • 00:45:27
    and it's working, so okay, we're ready.
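A minimal sketch of what such update functions might look like is below, written with plain pyodbc against a SQL connection string. This is an assumption-heavy stand-in rather than the actual Fabric user data functions shown in the demo; the connection string, table, and column names are placeholders, and the open-deals-only guard mirrors the behavior described later in the demo.

```python
# Hedged sketch of "update opportunity" functions, using plain pyodbc.
# Not the Fabric user data functions API; connection string, table, and
# column names are placeholders for illustration.
import pyodbc

CONNECTION_STRING = "<your-sql-database-connection-string>"  # placeholder

def update_opportunity(opportunity_id: str, discount_pct: float,
                       status: str, comment: str) -> int:
    """Update a single opportunity row; returns the number of rows changed."""
    sql = """
        UPDATE dbo.Opportunities
        SET Discount = ?, Status = ?, Comments = ?
        WHERE OpportunityId = ?
    """
    with pyodbc.connect(CONNECTION_STRING) as conn:
        cursor = conn.cursor()
        cursor.execute(sql, discount_pct, status, comment, opportunity_id)
        conn.commit()
        return cursor.rowcount

def update_open_opportunities(opportunity_ids: list[str], discount_pct: float,
                              comment: str) -> int:
    """Apply a discount to a group of opportunities, touching only open deals."""
    placeholders = ", ".join("?" for _ in opportunity_ids)
    sql = f"""
        UPDATE dbo.Opportunities
        SET Discount = ?, Comments = ?
        WHERE Status = 'Open' AND OpportunityId IN ({placeholders})
    """
    with pyodbc.connect(CONNECTION_STRING) as conn:
        cursor = conn.cursor()
        cursor.execute(sql, discount_pct, comment, *opportunity_ids)
        conn.commit()
        return cursor.rowcount
```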
  • 00:45:29
    Amir Netz: Now let's bring Power BI in.
  • 00:45:30
    Patrick Baumgartner: Yeah, exactly, because now I want
  • 00:45:32
    to put an app experience around that, and I want to use Power BI
  • 00:45:35
    as that home experience.
  • 00:45:37
    And so you're going to see us introduce a couple new buttons
  • 00:45:40
    and text entry fields, and what I can do is take one
  • 00:45:43
    of the buttons in Power BI, and now assign it to that function.
  • 00:45:47
    So I'm going to take the "Submit" button,
  • 00:45:48
    I'm going to turn on the action for the data function.
  • 00:45:51
    I'm going to select my workspace,
  • 00:45:52
    I'm going to select the data function that I created,
  • 00:45:54
    and then I'm going to tell it what data comes
  • 00:45:57
    from either the report, or the entry field
  • 00:45:59
    that I have in the report.
  • 00:46:00
    And I'm going to feed it back into that user data function,
  • 00:46:04
    so it can go into the database.
  • 00:46:06
    So, now I'm going to switch over to runtime,
  • 00:46:08
    I published that report, and now let's see how an end user could
  • 00:46:11
    use this.
  • 00:46:11
    So, I can still slice and dice,
  • 00:46:13
    and use all the analytical features of Power BI.
  • 00:46:15
    So, now I've filtered it down,
  • 00:46:16
    I've clicked a specific opportunity.
  • 00:46:18
    Now I'm going to give it a 25% discount, and I'm going to say,
  • 00:46:22
    hey, I need to close this deal.
  • 00:46:23
    And when I hit "Submit", that record is going back
  • 00:46:26
    into the database, and you can see it updated almost
  • 00:46:28
    immediately into the table.
  • 00:46:30
    And just to kind of show a little more flexibility here,
  • 00:46:33
    we can also select groups here, so I'm going to select a set
  • 00:46:36
    of opportunities, everything that's high-risk,
  • 00:46:37
    expiring in 60 days, and let's bring the quantity
  • 00:46:40
    up to eight or nine.
  • 00:46:42
    And now this is kind of interesting,
  • 00:46:43
    because I've got open and lost opportunities in one selection.
  • 00:46:46
    I want to give a 30% discount for upcoming deals,
  • 00:46:49
    and so in the function, it's smart enough
  • 00:46:51
    to only write to the open deals.
  • 00:46:53
    And so I hit "Submit", and again that data is going back
  • 00:46:55
    into the database, and you can see it updating in the comments.
  • 00:46:58
    If you think about this, this really blows the doors open
  • 00:47:01
    on anything you can imagine;
  • 00:47:03
    now you can start building in Power BI.
  • 00:47:05
    Amir Netz: The number of scenarios we
  • 00:47:06
    have here, if you've been using Power BI,
  • 00:47:08
    is just incredible.
  • 00:47:10
    In my opinion, this is the biggest upgrade to Power BI
  • 00:47:13
    since the inception of Power BI, bigger than everything before it.
  • 00:47:16
    Patrick Baumgartner: Yeah, very exciting.
  • 00:47:18
    Amir Netz: So, very, very exciting.
  • 00:47:19
    [APPLAUSE]
  • 00:47:21
    Patrick Baumgartner: All right, well, thank you, Amir.
  • 00:47:23
    Amir Netz: Thank you.
  • 00:47:24
    Thank you so much, Pat.
  • 00:47:25
    So, we've worked through,
  • 00:47:28
    we've seen the entire unified data platform
  • 00:47:31
    for AI transformation.
  • 00:47:32
    It was only 35 minutes, but if you want
  • 00:47:33
    to get three days' worth of content, well,
  • 00:47:36
    join us at FabCon in Vegas.
  • 00:47:38
    We have three full days with everything you need
  • 00:47:41
    about Fabric, about Fabric Databases, about Power BI,
  • 00:47:44
    everything you want, join us.
  • 00:47:46
    And that's it, thank you so much.
  • 00:47:48
    [APPLAUSE]
Tags
  • Microsoft Fabric
  • AI
  • Data Management
  • Azure
  • Power BI
  • OneLake
  • Enterprise Solutions
  • Data Security
  • Real-Time Intelligence
  • Fabric Databases