Archive for the ‘Chess’ Category

Caruana, Gukesh Score In Opening Round Superbet Chess Classic Romania – Chess.com

The Superbet Chess Classic Romania, the second leg of the 2024 Grand Chess Tour, started on Wednesday in Bucharest with wins for GMs Gukesh Dommaraju and Fabiano Caruana. In his first classical game since winning the Candidates, Gukesh defeated wildcard GM Bogdan-Daniel Deac. Caruana was in trouble vs. GM Alireza Firouzja but managed to turn things around.

The first day of the tournament, held in the Grand Hotel Bucharest for a $350,000 prize fund, saw draws in the games Nodirbek Abdusattorov vs. Praggnanandhaa Rameshbabu, Maxime Vachier-Lagrave vs. Ian Nepomniachtchi, and Anish Giri vs. Wesley So.

Round two starts Thursday, June 27, at 8 a.m. ET / 14:00 CEST / 5:30 p.m. IST.

Superbet Chess Classic Romania Round 1 Results

Superbet Chess Classic Romania Standings After Round 1

We have arrived at the second leg of the 2024 Grand Chess Tour, a month and a half after GM Magnus Carlsen's grandiose victory at the Superbet Rapid and Blitz in Warsaw, Poland, where the Norwegian star overtook GM Wei Yi on the final day thanks to a 10-game winning streak.

Alongside six wildcards, that rapid and blitz event in Warsaw had only four of the full Grand Chess Tour participants (Abdusattorov, Giri, Gukesh, and Praggnanandhaa), who were joined by five more in Bucharest: Caruana, Firouzja, Nepomniachtchi, So, and Vachier-Lagrave. Deac, back to being the Romanian number one now that GM Richard Rapport is poised to play for Hungary again, is the wildcard in this tournament.

The first round was on Wednesday, but the day before, the players were already involved in some activities. Besides giving interviews for the tournament broadcast they also played simuls against local chess fans, which is always a nice idea. GM Viswanathan Anand was involved as well:

The organizers of the Superbet tournaments continue to value on-site spectators in a world that's increasingly shifting to online. It is clear that cities like Warsaw and Bucharest, and also Zagreb as another location for the Grand Chess Tour, still have many chess fans, just like some decades ago when several major events were taking place in Eastern Europe, then still linked to the Soviet Union. GCT ambassador GM Garry Kasparov noted in a recent interview:

Considering the overall development of chess and the other GCT host cities, Warsaw and Zagreb, Eastern Europe has now recovered its place in the world of chess, which was lost after the collapse of the Soviet Union. Now it is clearly the most vibrant part of the chess world in Europe. In contrast, Western Europe has very little left of the activities that were thriving when I was playing some 20-25 years ago. Now we have the Bucharest-Warsaw-Zagreb orb, and maybe a few more cities could join. If I lived in Bucharest, knowing that every year I could watch and meet the world's top players live, I would be delighted.

... Eastern Europe has now recovered its place in the world of chess, which was lost after the collapse of the Soviet Union.

Garry Kasparov

Gukesh-Deac 1-0

Gukesh turned 18 a month ago and played his first classical game since winning the Candidates. He won a somewhat topsy-turvy game against Deac, who played strongly and kept up with his opponent's level of play for a long time. In fact, the 22-year-old Romanian was briefly close to winning, something that both players and analysts missed.

As the game went beyond move 30, with a rather complicated middlegame position on the board, the clock started to play a role and continued to do so after move 40. The Tour is using a new time control this year for its classical events: 120 minutes for the whole game, with a 30-second increment per move. It was also used at the recent Cairns Cup.
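The arithmetic of this single-period control is simple; a quick sketch (the helper name is ours, for illustration only):

```python
def total_budget_seconds(moves_played: int) -> int:
    """Total time a player has had available after a given number of own
    moves, under the tour's classical control described above: a 120-minute
    base plus a 30-second increment added on every move."""
    BASE = 120 * 60      # 120 minutes, in seconds
    INCREMENT = 30       # seconds added per move
    return BASE + INCREMENT * moves_played

# After 40 moves, a player has had 140 minutes in total to work with;
# there is no extra period added at move 40 as in older controls.
```

Because no fresh block of time ever arrives, the increment is the only relief once a player falls into time trouble.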

It means that as soon as a player gets into serious time trouble, there's no way out of it anymore (and a visit to the restroom will have to wait until after the game). And it showed: with 35 seconds left on his clock, Deac blundered, leaving his position in shambles, but as soon as Gukesh went under a minute, he allowed a tactic that would have led to a draw. As Deac missed it, Gukesh ended up winning convincingly after all.

Firouzja-Caruana 0-1

Caruana is defending his title from last year in Romania, and started his campaign well, unlike what he did in his game. Firouzja, who came to Bucharest with back-to-back online victories in the Champions Chess Tour (beating Carlsen twice) and the Bullet Chess Championship (beating GM Hikaru Nakamura twice), was simply much better out of the opening.

The game started with a London System with 2.Bf4, apparently once dubbed the "Lazy Tromp" by the English GM Mark Hebden. Caruana didn't feel like repeating the sharp continuation from their game at Norway Chess last month and instead chose to "freestyle" with 2...b6!?, a Queen's Indian type of setup.

Firouzja's 3.c4 meant he was ready to play against a Queen's Indian with his bishop on f4, but Caruana then switched to a more King's Indian type of structure with 4...d6 and 5...g6. That allowed his opponent to grab space in the center, and soon Caruana was looking at a "disgusting" position, as he called it afterward.

Whereas commentator GM Yasser Seirawan had called it "dodgy," Caruana was more critical of himself: "It was much worse than dodgy. I thought I was, like, close to lost. I wasn't sure. I don't know what I was doing."

It was much worse than dodgy. I thought I was, like, close to lost.

Fabiano Caruana

Caruana couldn't understand why Firouzja didn't block the queenside with 17.a4 followed by castling queenside and Rdg1, when White can attack and Black cannot. "Maybe it's not so easy to break through, but it's probably winning in the long run," said Caruana.

Firouzja had another chance to go for the same setup if he had taken on g5 with his knight, with check. Taking with the bishop instead allowed Caruana to break with 18...b5 and get counterplay. Firouzja soon lost his advantage and then got outplayed in the remainder.

In our Game of the Day, GM Dejan Bojkov provides a detailed analysis in his annotations:

Three draws

The first game to finish was MVL-Nepomniachtchi. The Russian GM was fine with a draw as he played the Petroff, while the French GM tried but failed to shoot holes in that opening. He went down a different alley than the ones Nakamura and Praggnanandhaa had tried against the same opponent in the Candidates, but Nepomniachtchi remembered everything. From start to finish, all the moves were part of both players' preparation, and the first 20 or so were blitzed out on the board.

Nepomniachtchi took about four minutes on move 22 to double-check everything, as he had to play an important queen move there, and then took some more time on move 24. He spent about 50 minutes in total vs. 41 for MVL. Afterward, the Frenchman thought his approach was quite logical, since it's Black who can go wrong at several moments. But Nepo was up to the task once again.

Not long after, Giri and So also called it a day. These players had an old line of the Catalan on the board where the Dutchman might have been confused a little by So's unusual 11th move. The American GM continued in solid style, as he is known for, and Giri didn't find a way to get much play.

Abdusattorov, too, failed to get anything in the opening against Praggnanandhaa, who went for a Moller Defense in the Ruy Lopez (5...Bc5, played before ...b5). By move 20 the game looked completely equal, but somehow Pragg ended up with an extra pawn, which was of little value.

Note that the three draws all ended with a move repetition, because in this tournament draw offers are not allowed during the entire game.

The 2024 Superbet Chess Classic Romania is the second leg of the 2024 Grand Chess Tour. The event is a 10-player round-robin with classical time control (120 minutes for the entire game, plus a 30-second increment per move). The tournament runs June 26-July 5 and features a $350,000 prize fund.

Read more here:
Caruana, Gukesh Score In Opening Round Superbet Chess Classic Romania - Chess.com

Are There Too Many Chess Grandmasters? – The New York Times

When the International Chess Federation created the inaugural list of grandmasters, the game's highest title, in 1950, there were 27. Today, there are more than 1,850.

"There are too many grandmasters," said Nigel Short, the director for chess development at the federation, the game's governing body, who is himself a grandmaster. Mr. Short, who is English, said that when he is in Germany, which has almost 100 grandmasters, "To call me grandmaster adds nothing. They are two a penny."

Mr. Short, 59, pointed out that the high number of grandmasters is a relatively recent phenomenon. When he was a rising junior player in the late 1970s, there were only about 100 of them in the world.

To become one is technically not easy. A player must at least once achieve an Elo rating, the system used to rank players, of more than 2,500; less than one percent of players ever do that. A player must also achieve a norm, a performance equivalent to playing at the level of a player rated 2,600, in at least three tournaments.
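The two requirements described above can be sketched as a simple check (a simplified illustration only; FIDE's actual title regulations contain many more conditions, and the function name is hypothetical):

```python
def meets_gm_requirements(peak_rating: int, norm_count: int) -> bool:
    """Simplified check for the two conditions the article describes:
    an Elo rating above 2,500 at some point, and norms (2,600-level
    tournament performances) earned in at least three tournaments."""
    return peak_rating > 2500 and norm_count >= 3
```

Note that only the peak rating matters: a grandmaster whose rating later drops, as in the Aagaard example below, keeps the title for life.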

But not all grandmasters are created equal. Magnus Carlsen, the former world champion, who has been ranked No. 1 in the world almost continuously since 2009, is one. So is Jacob Aagaard, a coach and trainer. The difference between them is their ratings: Mr. Carlsen's is 2,830, while Mr. Aagaard's is 2,426.

Mr. Aagaard, 50, explained that he stopped playing professionally 15 years ago, shortly after he became a grandmaster. Though he still competes occasionally, he plays more for enjoyment and does not worry as much as he once did about whether he wins or loses, he said.


See the original post here:
Are There Too Many Chess Grandmasters? - The New York Times

Titled Tuesday June 25, 2024 – Chess.com

GMs Awonder Liang and Denis Lazavik won Titled Tuesday on June 25, and several more players locked up a spot in the 2024 Speed Chess Championship after their highly successful Titled Tuesday performances this year to date: GMs Jan-Krzysztof Duda, Alexey Sarana, Tuan Minh Le, Jose Martinez, Alexander Grischuk, and Hans Niemann. They will join nine invited players, and one additional qualifier to be determined, for the SCC.

Liang won outright over second-place Grischuk, while Lazavik needed tiebreaks to outlast Niemann and GM Magnus Carlsen.

With 724 players in the early field, three jumped out to a 6/6 start, but Liang wasn't part of that group. Only one of them, GM Anton Korobov, won in the seventh round. Liang didn't join the lead until the next round, when he beat GM Mustafa Yilmaz while Korobov only made a draw.

Their showdown came in the ninth round, where Liang took control. In an opposite-sides castling affair, Liang's king appeared to be less safe, but Korobov never broke through, and eventually the tide turned.

Liang's resulting sole lead didn't last after his draw in the 10th round with GM Arjun Erigaisi, allowing Le to join Liang atop the leaderboard.

The winner of Liang-Le in the final round would thus claim the whole tournament. After a tense early middlegame, Liang broke through tactically in the center and kingside, and afterward converted easily.

Five players entered the round on 8.5 points, giving them a chance to jump Le. Only one of them did: Grischuk, after he checkmated Carlsen in rare fashion. Revenge for Carlsen's win in the fourth round of the 2013 Candidates Tournament? Okay, probably not. Just a good win to secure second place in a Titled Tuesday.

Meanwhile, eight players tied for third on nine points. Of the three with the best tiebreaks (Arjun, GM Fabiano Caruana, and Sarana), none actually won in the final round; all made draws there instead.

June 25 Titled Tuesday | Early | Final Standings (Top 20)

(Full final standings here.)

Liang won $1,000 while Grischuk earned $750. Arjun took home $350, with $200 going to Caruana and $100 to Sarana. WGM Gulrukhbegim Tokhirjonova won the $100 women's prize, scoring 7.5 points.

Unlike in the early tournament, the winner of the late tournament was also the last perfect player. After winning his first seven games, Lazavik coasted to victory with three draws in his last four contests, completing an undefeated performance to win the late tournament for a second straight week.

Lazavik's only win in his last four games came against GM Pranav Venkatesh in round 10...

The win didn't quite allow Lazavik to break free, as Niemann also won in the round. Niemann stormed back from a fourth-round loss against the great GM Alexei Shirov by winning his sixth straight game, this one against GM David Anton.

Lazavik and Niemann played the "Berlin Draw" in the 11th round, securing a share of first. One player joined them: Carlsen, who got a taste of his own medicine (in terms of silly openings) when FM Artin Ashraf began their game with the moves 1.a4 and 2.a5, but also got the last laugh with a victory.

While tiebreaks were setting the podium, they also helped Anton finish fourth as he recovered from his setback by winning in the final round, while GM Andrew Hong took fifth.

June 25 Titled Tuesday | Late | Final Standings (Top 20)

(Full final standings here.)

Lazavik earned $1,000 for first place while Niemann managed $750 and Carlsen $350. Anton won $200 and Hong $100 to round out the top five. IM Bibisara Assaubayeva scored eight points to win the $100 women's prize in the 21st overall place.

With the SCC qualification stage complete, attention now returns to the yearlong standings. The top women's scores are even closer now than last week, with a mere six points between first and fifth as IM Polina Shuvalova is now in fourth. Le took over fifth in the open standings from GM Dmitry Andreikin.

Juniors: GM Denis Lazavik (179.0 points)

Seniors: GM Gata Kamsky (167.0 points)

Girls: WCM Veronika Shubenkova (113.5 points)

The Titled Cup fantasy game Chess Prophet continues as well. Current standings can be found here. (Login required.)

Titled Tuesday is Chess.com's weekly tournament for titled players, with two tournaments held each Tuesday. The first tournament begins at 11:00 a.m. Eastern Time/17:00 Central European/20:30 Indian Standard Time, and the second at 5:00 p.m. Eastern Time/23:00 Central European/2:30 Indian Standard Time (next day).

Excerpt from:
Titled Tuesday June 25, 2024 - Chess.com

Decoding the chess moves of Snowflake and Databricks – SiliconANGLE News

As customers try to get artificial intelligence right, the need to rationalize siloed data becomes increasingly important.

Data practitioners could put all their data eggs into a single platform basket, but that's proving impractical. As such, firms are moving to an open model where they maintain control of their data and can bring any compute engine to any of their data.

While the model is compelling, the capabilities to govern open data across an entire estate remain immature. Regardless, the move toward open table formats is gaining traction, and the point of control in the data platform wars is shifting from the database to the governance catalog.

Moreover, as data platforms evolve, we see them increasingly as tools for analytic systems to take action and drive business outcomes. Because catalogs are becoming freely available, the value in data platforms is also shifting toward tool chains to enable a new breed of intelligent applications that leverage the governance catalog to combine all types of data and analytics, while preserving open access. Two firms, Snowflake Inc. and Databricks Inc., are at the forefront of these trends and are locked in a battle for mindshare that is both technical and philosophical.

In this Breaking Analysis, we look back at what we learned from this year's back-to-back customer events from the two leading innovators in the data platform space. We'll share some Enterprise Technology Research data that shows how each firm is faring on the other's turf, and how a new application paradigm is emerging that ties together data management, open governance and, of course, AI leadership.

Let's start by looking at how the landscape of data platforms is shifting under our feet. We see five areas that underpin a platform shift where systems of models will, we believe, ultimately come together to define the next architecture for intelligent apps.

Shifting point of control. As we previously mentioned, the point of control began to shift last year when Databricks announced its Unity Catalog. In response, Snowflake this month open-sourced its Polaris technical metadata catalog. Databricks then acquired Tabular (founded by the creators of Iceberg) and subsequently open-sourced Unity.

At the 2023 Databricks Data + AI Summit, Matei Zaharia, the creator of Spark and co-founder of Databricks, introduced an enhanced version of its Unity Catalog. This moment was pivotal, signaling a shift in control over the system of truth about your data estate from the DBMS to the catalog.

Traditionally, the DBMS has had read-write control over data. Now, the catalog is set to mediate this control. This does not eliminate the role of the DBMS; rather, a DBMS-like execution engine attached to the catalog will manage read-write operations. However, this engine can be a new, embedded, low-overhead, and low-cost SKU. This distinction is crucial in determining who maintains the point of control.

Data becomes a feeder for applications that act. Let's come back to the list above and talk about Nos. 2, 3 and 4 here. We're envisioning an application shift where data platforms increasingly inform business actions. Value is shifting, and will continue to shift, toward tools and workflows that build on and leverage the governance catalog. Here we've cited both Mosaic AI, an offering that comes out of Databricks' acquisition of MosaicML last year, and Cortex, Snowflake's machine learning/AI managed service.

Further elaborating: today, data platforms have mostly been about building standalone analytic artifacts, what some people call data products, whether it's dashboards, ML models, or maybe even something as simple as refined tables. Now we're getting to something slightly more sophisticated in the form of retrieval-augmented-generation-grounded gen AI models. By RAG-grounded, we mean there's a retriever and vector embeddings that output simple request-response artifacts.
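The retriever-plus-embeddings pattern just described can be reduced to a minimal sketch (illustrative only: real systems use learned embeddings and a language model, and the store, names and toy vectors here are all our own assumptions):

```python
# Minimal RAG-style request-response sketch: rank documents by cosine
# similarity of embeddings, then ground the "response" in the top hit.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": documents paired with pre-computed embeddings.
STORE = [
    ("Q2 revenue grew 12% year over year.", [0.9, 0.1, 0.0]),
    ("The support queue SLA is 4 hours.",   [0.1, 0.9, 0.0]),
    ("Deploys happen every Tuesday.",       [0.0, 0.1, 0.9]),
]

def retrieve(query_embedding, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(STORE, key=lambda d: cosine(query_embedding, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def answer(query_embedding):
    """Ground a (hypothetical) model response in the retrieved context."""
    context = retrieve(query_embedding)
    return f"Based on: {context[0]}"
```

The point of the sketch is the shape of the artifact: a single retrieve-then-respond round trip, which is exactly the "simple request-response" pattern the more sophisticated outcome-driving systems below go beyond.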

We believe we're moving toward systems that drive a business outcome. Examples include nurturing a lead down a funnel, providing expertise to a customer service agent on how to guide a more effective response to a customer online, or forecasting sales and then driving operational planning. These are more sophisticated workflows that include both a supervising human and an agent that's figuring out how to perform a set of tasks under that human supervision to drive some sort of outcome.

Wild card: the so-called semantic layer. Now, coming back to the final point on the graphic above, there's a potential blind spot brewing further up the stack. We've discussed, for example, the Salesforce Data Cloud and its customer 360 approach, where applications such as Salesforce embed the business logic, and Salesforce and others on the list have harmonized the business process data. That is a piece that both Snowflake and Databricks appear to be missing, or perhaps we should say are relying on the ecosystem to deliver.

When you're building these analytic systems to drive business outcomes, they need the context of the state of the business, what happened in the business, in order to determine what should happen next. If you just have a data lake with 10,000 or 100,000 tables, or however many it is, you don't know which 500 tables hold parts of the attributes that define the context of, for example, a particular customer, or beyond that, how a specific customer relates to a sales process or a service process. None of today's data platform firms has that capability today.

When you probe the leaders on things such as a graph database, which could harmonize all this context, our interactions suggest they see this either as a niche or as something the ecosystem should deliver. Or, in some instances, we've uncovered clues that they're working on graph databases to add capabilities to the catalog.

But even if they build this knowledge graph of the people, places and things in the business, it doesn't yet give them as rich a map of the state of the business as that possessed by firms such as Celonis Inc., which mines all the application logs, or what Microsoft Corp. is building with its AI enterprise resource planning effort as part of the Power Platform in Dynamics. Palantir Technologies Inc. also has something like this, as do EnterpriseWeb LLC and RelationalAI Inc. These firms are building technologies that should make it easier to build that sort of capability.

The point is that the tool chains are there, but when we say they build on a catalog, that catalog itself has to grow in sophistication to make sense of the business so that the tools know how to consume it. If the application vendors play this role, then we risk either more data silos or possibly disruptions to today's data platforms.

In the past we've talked about how Snowflake had the lead in database and Databricks was having to play catch-up in that regard. But Databricks indicated at its Data + AI Summit that its Lakehouse offering was the fastest-growing product in the company's history, and stated that it had surpassed a $400 million run rate. So let's take a look at some of the ETR data to see what it tells us.

The graphic above is from the April survey of more than 1,800 information technology decision-makers, or ITDMs. Net Score, or spending momentum, is shown on the vertical axis. That's a measure of the net percentage of customers in the survey that are spending more on a platform. It is derived by subtracting the percentage spending less from the percentage spending more.

The horizontal axis is Overlap, or pervasiveness in those roughly 1,800 accounts. The red line at 40% on the vertical axis represents highly elevated spending momentum. This data cut is for the database and data warehouse sector.
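The Net Score definition above amounts to a one-line calculation (a sketch of the described metric; the function name is ours, not ETR's):

```python
def net_score(pct_spending_more: float, pct_spending_less: float) -> float:
    """Net Score as described above: the percentage of surveyed customers
    spending more on a platform minus the percentage spending less."""
    return pct_spending_more - pct_spending_less

# E.g., 55% of accounts spending more and 12% spending less yields a
# Net Score of 43, above the 40% line marking highly elevated momentum.
```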

The key point we want to make is that Databricks first showed up in the survey back in January 2023 and is showing both elevated spending velocity and impressive penetration in the dataset. Note that its Ns have increased from 146 in January 2023 to 292 in April 2024. So it has made major moves to the right while holding its momentum on the vertical axis.

You can see that Snowflake was up in the stratosphere (80%+) on the Y axis in January 2022. Snowflake has made strong moves to the right as well, but its momentum has decelerated, consistent with the deceleration in its revenue growth rate, which is tracking in the 25% to 30% range. The point is that Databricks is actually showing up more prominently than one might expect. Its Lakehouse revenue is likely growing significantly faster than its overall revenue, which we believe has been tracking over 50%.

Now let's flip the script and focus on the sector that has historically been the stronghold of Databricks, an area where Snowflake was really not considered a player. Above we're showing the same dimensions but isolating on the ML and AI segment of the survey: Net Score, or spending velocity, on the Y axis and pervasiveness on the X axis. Snowflake was just added to the sector in the survey that's in the field today; it's late June and the survey will close in July. So this is preliminary data, but you can see the early returns above for an N of more than 1,500 accounts.

The data shows that Snowflake, while much lower than Databricks on the vertical axis, is: 1) above the 40% magic line; and 2) further to the right than Databricks, indicating strong adoption of its new AI tooling.

Our view is that it's significant that Snowflake did a really good job with Cortex, embedding generative AI capabilities. Our take is that this is a natural outgrowth of two of its strongest personas, data engineers and data analysts, making it dead simple for these folks to use gen AI capabilities within, for example, stored procedures in the database.

So it up-leveled the capabilities of those personas through what it does best, which is making it really easy to use new functionality.

Databricks is high on Net Score, or spending momentum, likely because it did such a good job up-leveling data scientists, data engineers and ML engineers as extensions of their existing tool chains. That is what impressed us: instead of making gen AI an entirely new tool chain, ML engineers became large language model ops engineers.

This was enabled because MLflow, Databricks' standard for operational tracking of models, expanded to encompass tracking LLMs. Then Unity absorbed that tracking operational data so that, again, the existing personas were up-leveled to take advantage of the new technology. Our belief is that since the data scientists, data engineers and ML engineers were the natural center of gravity for gen AI spending, Databricks harnessed that dynamic effectively to maintain its accelerated momentum.

Below is a chart that Databricks put up at its event last week showing all the different data platforms that Unity connects to, the different types of data and capabilities it offers, and the open access to a variety of engines. The basic idea is to bring any of these processing engines to the data and have them all governed by Unity.

Coming back to our premise that the point of control is shifting to the catalog: not the point of value necessarily, although Snowflake is trying to hang onto that value with Horizon. The point is that because much of the core data governance function in the catalog is being open-sourced, the value, we believe, is shifting elsewhere, which we'll discuss later.

Databricks Chief Executive Officer Ali Ghodsi shared the graphic below to summarize the sentiment of customers. He points out that every chief information officer, board member, CEO, leader, you name it, wants to get going on AI, but they're also leery of making mistakes and putting their companies in jeopardy from a legal and privacy standpoint. So they need governed AI, and as we said upfront, they also recognize that a fragmented data estate is a recipe for bad AI, high costs, low value and basically failed projects.

The vision that Databricks put forth at its Data + AI Summit was compelling. One of the comments Ghodsi made was: "Stop giving your data to vendors, even Databricks; don't give it to us either. Don't trust vendors."

Stop giving your data to vendors, even Databricks; don't give it to us either. Don't trust vendors.

Ali Ghodsi, CEO, Databricks, 2024

So a little fear-mongering there; clearly you have to trust vendors on many things, but he's saying don't put all your data into a vendor-controlled platform (for example, Snowflake); rather, keep it open and control it yourselves. And use proprietary tools from us (Databricks), or other engine providers (including Snowflake), to create value from the open data.

The vision is to control your own data and be able to bring any engine to the data, applying the best engine for the job you're trying to accomplish. Basically: use the right engine for the right job, and may the best engine win.

This message resonates with customers. However, when we talk to customers, they tell us they love the vision, but when we ask about governance, the answer we typically get is, "We're still trying to figure that out." Do they go with Unity? Do they go with Polaris, Horizon, something else? Most customers are still trying to determine this. Part of the reason is that this world of governance is evolving very rapidly.

The next section of this post really underscores this fact.

One of the big questions going into Databricks' Data + AI Summit was: How would Databricks respond to Snowflake's move of open-sourcing Polaris, its technical metadata catalog? As we said in our Snowflake Summit analysis, we wanted to see what it would do with Unity and Tabular. Zaharia answered the Unity question by making the product open under the Apache 2.0 license. This was a dramatic moment at the event, and if you didn't witness it firsthand, watch the clip below.

This moment was like a tennis match where, when the camera is focused on the audience, you watch the heads snapping back and forth. At Snowflake Summit, we thought open-sourcing Polaris would sever Unity's connection with Iceberg tables. Then, as you saw the next week, everyone thought Databricks open-sourcing Unity as a rich operational and business catalog would sever Polaris' hold over Iceberg data.

Then, when the dust started to clear, it became apparent that it wasn't so simple. What happens is, and this wasn't as clear as it should have been coming out of Snowflake Summit, the Snowflake-native Horizon operational and business catalog that is part of the Snowflake data platform actually federates all the governance information with Polaris.

So all your security, privacy and governance policies are replicated and synchronized with the Polaris catalog so that you can have unified governance of your Snowflake data estate and your Iceberg data estate. To be clear, when the data is in Snowflake, including managed Iceberg tables, third parties can read that Iceberg data; they can't write it.

Now Polaris governs the external tables that anyone else can read and write and Snowflake can read, but those external tables now have unified governance. So if you're a Snowflake shop and you have Iceberg tables, you have a comprehensive governance solution.

In this case, any engine can read and write Delta tables. That's an advantage for Databricks. But third-party tools have only read access to Iceberg tables. Many people don't realize that Unity is not yet able to mediate write access to Iceberg tables. That's a problem, and that's why we think Tabular, started by the creators of Iceberg, was part of such a bidding war and ultimately was snapped up by Databricks.

So right now its game on and the stakes keep getting higher.

The other big chess move occurred during Snowflake Summit. Just as Snowflake's co-founder Benoit Dageville was stepping onto the stage to present his keynote, Databricks dropped a press release announcing that it was acquiring Tabular. You may recall we had Ryan Blue, the CEO of Tabular, on a previous Breaking Analysis. George has also featured Blue on his program, and we've discussed all these different formats, including Delta tables, which is the default format in Databricks and very widely adopted.

Listen below to Blue talk about the difficulty of translating between multiple formats. It's worth emphasizing that he made these comments before Tabular was in acquisition play. This is unvarnished and is not PR-scrubbed.

Let's translate the implications here. Databricks now has under its control the creators of Iceberg, meaning it has the technical talent to make Delta and Iceberg interoperable and as seamless as possible.

But based on what we heard Blue say, it's not easy, although it completely changes the dynamics. The problems Tabular was working on really extended beyond making the table formats function seamlessly, to adding governance capability. Tabular, we believe, was really working on a catalog with a sophisticated policy engine for advanced governance. It became pretty clear when Blue was onstage during Ali's discussion of the objectives of the Tabular acquisition that the goal now is redirecting the brains of that 40-person Tabular team more toward interoperability with the Delta format.

In other words, adding a policy engine is no longer necessary because that's going to be subsumed by Uniform. Uniform is a capability introduced by Databricks that allows data stored in Delta Lake to be read and written as if it were in Iceberg or Hudi formats, without the need to make copies of the data. This is a very attractive scenario for Databricks (if it works), eliminating the need to duplicate data and further supporting a single version of the truth.
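The core idea behind Uniform is that one physical copy of the data can sit under more than one format's metadata layer. The following is a purely conceptual sketch of that idea, not the real Delta or Iceberg metadata structures; the dictionaries and file names here are hypothetical stand-ins:

```python
# Purely illustrative: one set of data files, two metadata views over them.
# These dicts are hypothetical stand-ins, not real Delta/Iceberg metadata.

data_files = ["part-0001.parquet", "part-0002.parquet"]

# A Delta-style transaction log and an Iceberg-style manifest can both
# point at the same physical Parquet files -- no data is copied.
delta_log = {"version": 12, "add": list(data_files)}
iceberg_manifest = {"snapshot-id": 987, "files": list(data_files)}

# Any engine that understands either metadata layer reads identical bytes.
assert delta_log["add"] == iceberg_manifest["files"]
print("Both formats reference the same data files; nothing was duplicated.")
```

The point of the sketch is simply that interoperability happens at the metadata level, which is why no duplication of the underlying Parquet files is required.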

This presents a potential downside for Snowflake: rather than evolving toward the advanced functionality found in Snowflake's proprietary version of Iceberg tables, Iceberg could move in the direction of Delta interoperability instead.

Of course, given the open-source status of Iceberg, Snowflake can commit resources to provide that capability if it chooses to do so. Our sense coming out of Snowflake Summit was that Snowflake would wait to see if a pure open-source approach could deliver full read/write functions. And if it couldn't, then customers would choose its managed Iceberg service. But waiting too long could give Databricks a head start that may be insurmountable.

Let's turn our attention to AI. Last year, right before the Data + AI Summit, Databricks announced the acquisition of a company called MosaicML for about $1.3 billion, bringing more AI talent into the company. Databricks has leveraged this acquisition to offer Mosaic AI, a diagram of which is shown below.

Fifteen months ago, we thought the gen AI wave was something Microsoft would use to try to steamroll Databricks because it was a new tool chain that was disruptive to Databricks. Databricks proved us wrong by doing an astonishing job of up-leveling the personas they had with new capabilities, many of which came from the Mosaic acquisition. Now, what's key here is that, again, we're not building standalone artifacts, we're increasingly building systems of models, each of which has specialized tasks.

What's crucial about the tool chain is that you're going to optimize how all those pieces work together. That's not all there yet, but the pieces are coming together. What wasn't formally announced yet, but came out in deeper discussions, is that Databricks has hired the researcher who created DSPy, which is really a successor to LangChain. LangChain became sort of uncool once everyone heard about DSPy, which is a way of essentially optimizing a full pipeline of specialized models.

This is not something we've seen yet from other vendors, including Snowflake. Also, crucially, the evaluation function in this tool chain matters because when you're building continually improving models, the evaluation capability is the critical piece that gives the necessary feedback to the models to continue learning.
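To make the "optimize the whole pipeline against an evaluation function" idea concrete, here is a deliberately toy sketch in plain Python. It is not DSPy (which compiles prompts and weights against real language models); the stub stages, the `k`/temperature parameters and the scoring metric are all hypothetical, but the shape is the same: enumerate candidate configurations for each stage, score the full pipeline on an eval set, keep the best.

```python
import itertools

# Toy sketch of DSPy-style pipeline optimization. The stages and scoring
# below are hypothetical stubs, not real retrieval or generation.

def retrieve(query, k):
    # Stub retriever: pretend more context helps, saturating at 3 passages.
    return min(k, 3)

def generate(context_quality, temperature):
    # Stub generator: in this toy metric, lower temperature scores better.
    return context_quality * (1.0 - temperature)

def evaluate(pipeline_cfg, eval_set):
    # Score the WHOLE pipeline end to end -- this is the feedback signal.
    k, temp = pipeline_cfg
    return sum(generate(retrieve(q, k), temp) for q in eval_set)

eval_set = ["q1", "q2", "q3"]
candidates = itertools.product([1, 3, 5], [0.0, 0.5, 0.9])  # k x temperature
best = max(candidates, key=lambda cfg: evaluate(cfg, eval_set))
print(best)  # -> (3, 0.0): the jointly best configuration across stages
```

The design point is that the stages are tuned jointly, not one at a time, which is why the evaluation function is the critical feedback loop the article describes.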

Check out this Reddit thread on some of the plusses and minuses of DSPy.

So when you put all these pieces together, the Databricks Mosaic tool chain is well on its way toward helping customers build very sophisticated compound systems. The notion that you just have GPT-4, an embedding model, a vector database and a retriever becomes less useful. Rather, now we're building much more sophisticated tool chains, which brings us to the next point: When you start aggregating these tool chains together, you get something more interesting.
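A compound system in this sense is less a single model call and more a composition of specialized parts. A minimal, hypothetical sketch (the router rule and the two specialists below are invented for illustration; in a real system each would be a model or service):

```python
# Hypothetical sketch of a compound AI system: a router dispatches each
# request to a specialized component instead of one monolithic model call.

def classify(query):
    # Stub task router (in practice, a small specialized model).
    return "sql" if "revenue" in query else "docs"

def sql_agent(query):
    return f"SELECT ... -- generated for: {query}"

def docs_agent(query):
    return f"retrieved passages for: {query}"

SPECIALISTS = {"sql": sql_agent, "docs": docs_agent}

def compound_system(query):
    # The "system" is the composition, not any single component.
    return SPECIALISTS[classify(query)](query)

print(compound_system("quarterly revenue by region"))   # routed to sql_agent
print(compound_system("how do I reset my password"))    # routed to docs_agent
```

Swapping a component (a better retriever, a cheaper generator) changes the system without changing its interface, which is what makes these tool chains composable.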

Let's project that out and try to visualize it. Thinking out a few years, we'll discuss how we see this application paradigm evolving. What we've done below is taken that previous slide and created multiples of it to build essentially a system of systems, where different systems are leveraging foundation models and domain-specific models. These feed a new model, an uber model if you will, that can take action.

Coming back to this notion of the sixth data platform, or maybe beyond that: creating a digital representation of your business that reflects the state of people, places and things, our Uber-for-the-enterprise metaphor, where the system of analytics can inform the top-level system. And that top-level system is agentic, meaning an intelligent system that's able to work autonomously, making decisions, adapting to changing conditions in real time, and taking action with, or potentially even without, human interaction based on the specific use case.

Let's take an example of what this might make possible sometime in the future with a frontier vendor. We've spoken to Amazon.com Inc.'s (not Amazon Web Services Inc.'s, but Amazon.com's) head of forecasting, Ping Xu. It has taken an agent-of-agents approach that's really leading edge. What Amazon did is take about 15 years of sales history so that it now has a forecast that can extend five years into the future, down to the SKU level for 400 million SKUs, including products that it hasn't seen yet.

When it gets to the point where it can forecast with reliable precision, it then has a bunch of planning agents that can be coordinated based on that forecast. The scope of those planning agents ranges from how it should build out its fulfillment center capacity, how to configure it, what it should order from each supplier, and how to feed supply across the distribution centers and the cross-docking, all the way to what gets picked, packed and shipped.

The point of that is you have a system of agents that are trained together to figure out an optimal set of plans, but that are coordinated in service of some top-level objective, for example growth, profitability, speed to delivery and the like, all within local constraints for each agent. This represents a system of systems that serves to drive a business outcome and that needs very advanced tools that don't fully exist yet from mainstream vendors. However, a firm such as Amazon is showing the way for what's going to become possible, and that's where we see value being created in the future.
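The structure being described, a shared forecast feeding planning agents that each enforce a local constraint while a coordinator serves the top-level objective, can be sketched in a few lines. Everything here is hypothetical: the SKUs, the two agents and their limits are invented to show the shape, not Amazon's actual system.

```python
# Toy sketch of an "agent of agents": a shared forecast drives several
# planning agents, each with a local constraint, coordinated toward a
# top-level objective. All numbers and agents are hypothetical.

forecast = {"sku-1": 120, "sku-2": 80}  # forecast demand per SKU

def capacity_agent(demand, max_capacity=150):
    # Local constraint: fulfillment-center capacity.
    return min(demand, max_capacity)

def supplier_agent(demand, max_order=100):
    # Local constraint: how much can be ordered from suppliers.
    return min(demand, max_order)

def coordinate(forecast):
    # Top-level objective: fulfill as much forecast demand as possible,
    # limited by whichever local agent is the bottleneck per SKU.
    return {
        sku: min(capacity_agent(demand), supplier_agent(demand))
        for sku, demand in forecast.items()
    }

print(coordinate(forecast))  # -> {'sku-1': 100, 'sku-2': 80}
```

Note how the coordinator never overrides a local constraint; it optimizes the global objective only within what each agent reports as feasible, which is the coordination pattern the article attributes to Amazon's planning stack.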

Let's summarize and end on what we see as the outlook and possible next moves for Snowflake and Databricks, and some of those other players.

As we've said, we're moving from a database management system-centric control point, where the database is managing the data, to tools using a catalog to build what we just showed you in the Amazon example. The idea is a system of systems that feeds an uber system, if you will.

The table format question remains unresolved; it's an open issue. We're going to be watching very closely to see what happens with Databricks and Tabular, how successful they are at integrating Iceberg seamlessly with Delta, and how Snowflake responds next.

In other words: to what extent will Snowflake extend Horizon? Remember, Polaris is the technical metadata catalog that's open source. Horizon contains the role-based access controls and all the really high-value governance functions. But you basically have to be inside Snowflake to take full advantage of these capabilities. Managed Iceberg tables can take advantage of the format, but that's again inside of Snowflake.

Where will Snowflake take Horizon? Is it going to go beyond Snowflake-native data? We don't think it has fully decided yet, and perhaps it's going to let the market decide. If open source doesn't deliver on the promise, then customers might be enticed to stay inside or move more data inside of Snowflake. If open source moves fast, then we'll likely see Snowflake make further moves in that direction.

We see Databricks' next steps focused on unifying Delta and Iceberg more tightly to simplify and lower costs for users. It now has the creators of Iceberg to help do that; they understand how Iceberg works better than Databricks does or did.

Many of these acquisitions are designed to pick up talent, and this is a good example. You certainly saw that with MosaicML. We saw that with Neeva at Snowflake. So the idea of making Unity the catalog of catalogs, that uber catalog we talked about, is strategically compelling.

The wild card remains the players that are building out this so-called semantic layer. We talked about folks such as Salesforce with its version of Data Cloud and firms such as Palantir that are doing some of these interesting harmonization tasks. Of course, we haven't talked about the big hyperscalers in this post, but they're obviously in the mix, so we can't count them out either.

Two years ago, most of the discussion was around business intelligence metrics. Then we talked about the enabling technology to make it possible to build a semantic layer with the declarative knowledge graphs from RelationalAI or EnterpriseWeb.

But there are compromised versions today where essentially Salesforce is becoming a hyperscaler with a set of applications and a semantic layer. We're focused on how the vendors might evolve and try to outmaneuver each other. One example with Snowflake is to take Horizon from being a catalog that's embedded within the DBMS to a standalone SKU.

Snowflake might, for example, price this capability differently, even if it has the same technology in it. It might be extended so that it uses something such as Dagster to orchestrate the data engineering workflows that happen beyond the scope of Snowflake. If it does something like this, Snowflake can capture all the lineage data, which is the foundation of all operational catalogs. This is one way Horizon could start extending its reach beyond just what's happening within the Snowflake DBMS.

We'll tease these for a later Breaking Analysis episode, but there's a lot going on, because what these vendors are really jockeying for is not just the next data platform but the next application platform.

It's the platform for intelligent applications.

How do you see this playing out? What did you learn from this year's back-to-back Snowflake and Databricks events? And what dots are you connecting? Let us know.

THANK YOU

More:
Decoding the chess moves of Snowflake and Databricks - SiliconANGLE News

How I crossed 3100 and got into the top ten on chess.com – Chess.com

In January 2024 I hit my highest online rating ever: 3,103 on Chess.com. This put me next to chess super grandmasters and in the top 10 on the website.

However, if you think it was a result of great preparation and a positive life - not at all. Surprisingly, it happened during one of the worst periods of my life.

The worst year in my life

2023 started very badly: I got divorced, had health issues, and faced some other significant troubles I can't even talk about yet. Life was going downhill, and despite my efforts to stay positive and tackle the problems, my challenges had a significant negative impact on me. They also greatly affected my online chess rating. In 2021, I reached my peak at 3,064, but in 2023, my rating dropped below 2,800.

Not surprising, right?

It's the expected outcome of a series of unfortunate events. Reflecting on it, I've come to a big conclusion: Don't play chess when you're emotionally overwhelmed; it just adds to the reasons for feeling down.

But then, one day, everything changed.

The day of the Revolution

Since I had lots of tough days last year, I was looking for a light in the darkness, and my friend suggested a book by Kamal Ravikant called, "Love Yourself Like Your Life Depends on It."

Long story short, it helped me a lot, and in my worst moments I tried to apply the ideas I learned from the book. One day, after some meditation practice, I was in a great mood and a sudden question appeared in my thoughts: "Why not play chess?"

I sat down in front of the laptop, and in a few hours, I crossed 2,900. My mood that day was fantastic and was multiplied after this successful session. I didn't analyze how it happened; I just enjoyed it all. I continued the routine for the next few days: any second I had the crazy feeling, "I want to play chess," I did so, and sometimes this even occurred during my working time in the ChessMood office. (I know my boss is reading this, so sorry.)

I grasped this fantastic idea from The Surrender Experiment article and it worked out perfectly.

Shortly, I crossed the 3,000 barrier again!

Don't focus on numbers

Once I crossed 3,000, I closed the website and went to sleep satisfied, thinking I would now take a break. The next day I again had the mood to play, but there was a question: should I look at the 3,000 number and admire it, or follow the method that helped me get there and keep playing? I am sure you know my decision. I kept playing, but with the strict condition to play only in a good mood. 3,020, 3,040, 3,063 in a short while. I regained my absolute peak rating, and a sudden idea struck me: what about 3,100? Why not? I was so close. Then I calmed down and applied the advice that I always give to my students:

Don't focus on the rating - focus on good chess. If you play well enough, the rating will chase your strength and go up.

I kept playing, winning, or losing games, and forgot about my rating. One day I found myself just a few points away from the dream, without even knowing what would happen next.

From Hero to Zero

After winning one of the games, I looked at the number, and it was 3,098! The next game started: one win and that's it! I had the position below, my entire army up against a lone king - but a move away from checkmate, I flagged!

I couldn't believe that happened to me. I had my dream in my hands, and it just vanished. A crazy feeling of tilt took control of me, and I started ruining my dream. A few days later, after losing another game, I saw this number next to my name.

I lost about 100 points. I was about to give up, quit, and never play chess again. However, as a professional chess player and a coach, I don't have the right to give up easily. Instead, I decided to analyze and understand why this happened. Why did I go from Hero to Zero in just a few days? To figure out the reason for this failure, I looked back at what I did specifically on the days when I played bad chess. Nothing really came to mind until I remembered a conversation.

Most common mistake

My teammate and friend said, "Stop playing chess today, you will lose it all." When I asked why, he said, "Look at your facial expression; you are completely not in the mood." Unfortunately, that day I ignored this advice and kept losing rating. But looking back, that was a Eureka moment!

My mood! The reason that took me high into the sky was also the reason I dropped down. I played without my winning card! Without my superpower! Guess what happened next? I just used it again, and afterward climbed back to 3,072, again close to the goal.

It's time to tell you about one of the most significant days in my chess career.

The decisive Saturday

On Saturday, Jan 13, 2024, at 5 in the evening, I started my private educational stream for ChessMood students, and around 6:45 I could go home to rest. However, while walking downstairs, the magical desire to play chess put me in front of my desk. I was so unbelievably confident. With a rating of 3,072, I had only 29 points left, and I said, "It's time to cross 3,100."

For 5 minutes, no one accepted my challenge, as I was only looking for 3,000+ opponents. For one second Fabiano Caruana was online and, in a state of euphoria, I even challenged him. Luckily I didn't get accepted 🙂

After a few minutes, the match started with GM David Paravyan, a 3,000+ GM who consistently plays online blitz games and is my long-time rival. We had traded results over several previous matches, but this time, with a score of 7.5-2.5, I had the following position as White.

My heart rate dramatically increased as I saw the move that promised me the win. What to play?

Black can't take on h3, as the bishop on f3 would be hanging; otherwise, a powerful passed h-pawn will be enough to win the game. My opponent resigned.

For a second, I forgot what was going on and was about to play the next game. However, looking at the bottom-left part of the screen, I saw this beautiful dream picture.

I screamed like the famous lion shown before Tom and Jerry cartoons.

I scared everyone else in the building but it was worth it.

Conclusion

The journey is over, and I am now going to rest for a while. I have a message for every chess player, as this painful experience of losing 300 rating points and then gaining 350 opened my eyes.

Some people play chess just for fun, others play as a distraction from their daily routines and problems, and some are professionals.

However, my biggest advice, proven by the results of this journey:

Play chess when you are in a good mood and when you want it like nothing else.

It's hard to think about my next goals. Maybe I should aim for 3,200?

Mmm... I need some encouragement. If you think I should do it, put a + in the comments, and feel free to share your thoughts.

Related articles

How I crossed the plateau and reached 3000 on chess.com

And here is my gift for you!

My 10 best games. You can watch all three courses for free by creating a basic account here.

The rest is here:
How I crossed 3100 and got into the top ten on chess.com - Chess.com