Snowflake SQL for SQL Server Users – UNDROP

Overall, the SQL language in Snowflake is somewhat of a subset of what's available in SQL Server with T-SQL. But there are some commands that really are useful, and UNDROP is one of them.

I've talked in earlier posts about how Snowflake stores data in immutable micropartitions, and in other posts I've mentioned the concept of Time Travel. An added advantage of having older data still accessible is that you can quickly recover from "incidents".

Ever have a sinking feeling when you've run a script that drops an object and suddenly realized you were connected to the wrong database or server?

If you've been in this industry for any length of time, I know you have.

Things go wrong, no matter how careful you are. So it's good to have an option to help you out.

UNDROP TABLE can be used to bring back a table that's previously been dropped. By default, that works for up to 24 hours, but it's also possible to configure longer retention periods.
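For example (the table name here is just illustrative):

```sql
-- Oops: dropped the wrong table
DROP TABLE Customers;

-- Bring it straight back (within the retention period)
UNDROP TABLE Customers;
```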

When things really go wrong

But it's not just tables that can be undropped. For the really adventurous, there are also:

  • UNDROP SCHEMA
  • UNDROP DATABASE

And these commands are fast. We're talking milliseconds.

There's a psychological aspect to this as well. Having the ability to recover so easily is also useful for increasing the confidence of the people who are making the changes. (Note: I'm not suggesting that should make people more careless. It's just important to know there is a better fallback).

SQL: (SQL Server) The certificate chain was issued by an authority that is not trusted

Are you trying to connect to a SQL Server instance and ending up with the error:

The certificate chain was issued by an authority that is not trusted

You aren't alone.

SQL Server 2005 introduced encryption of the authentication process (by default) in the SQL Server Native Client (SNAC). SQL Server will self-generate a certificate that's then used, unless you replace it with your own certificate.

If you do use your own SSL (Secure Sockets Layer) certificate for SQL Server, unless it's a publicly trusted certificate, your client system will need to trust that certificate. Generally that means you'll need to list your own certificate authority (CA) as trusted on each of your client systems. Once that's in place, connections work well.

And that's often the problem that causes the above issue.

Trusting the Server

But what if you just want to trust the certificate that was self-signed by the server? Well there's an option for that (Trust server certificate), in the Options section of the connection dialog:

Chances are that if you just check that box, you'll then be fine.
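If you're connecting from application code rather than from the connection dialog, the equivalent ADO.NET-style connection string keywords look like this (the server and database names are placeholders):

```
Server=MyServer;Database=MyDatabase;Integrated Security=true;Encrypt=true;TrustServerCertificate=true;
```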

Note that there's also an option to turn off encryption (by unchecking Encrypt connection). While it would also "fix" the issue, that's not the best option to choose here.

And I'm posting this so that one day in the future when I forget what this was about, I'll find this post.

SDU Tools: Script Database Object Permissions in SQL Server T-SQL

As part of our free SDU Tools for developers and DBAs, we have many scripting functions. Moving or recreating object-level permissions between SQL Server databases can be time consuming.  To allow scripting these out, we've added a tool called ScriptDatabaseObjectPermissions.

It's a stored procedure (as it has to change databases) and takes a single parameter: @DatabaseName (sysname) as the name of the database whose object permissions you want to script.
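Calling it would look something like this. (The SDU_Tools schema location and the WideWorldImporters database name are my assumptions for illustration; check the tool's documentation for where you've installed it.)

```sql
-- Hypothetical usage; assumes the tools are installed in an SDU_Tools schema
EXEC SDU_Tools.ScriptDatabaseObjectPermissions @DatabaseName = N'WideWorldImporters';
```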

You can use our tools as a set or as a great example of how to write functions like these.

Find out more

You can see it in action in the main image above, and in the video here:


Access to SDU Tools is one of the benefits of being an SDU Insider, along with access to our other free tools and eBooks. Please just visit here for more info:


SQL: Am I really going to run out of bigint values?

I was in another discussion recently where someone was worried about running out of bigint values. I come across this regularly. A few months back, another customer was in the middle of changing all their bigint values to GUIDs as they were worried about running out of bigints.

People don't seem to understand just how large a bigint really is. It's certainly hard to imagine.

How big is a bigint?

I remember reading an article back in 1992. They said that if you got an 8 bit computer, put zero in a register, and made it loop on an instruction to increment the register value, you'd have 256 operations before it overflowed. That would happen in the blink of an eye.

For a 16 bit computer, it's 65,536 operations; again, the blink of an eye.

For a 32 bit computer, it's 4,294,967,296 operations; on the machines of the time, that took around 2 to 3 minutes.

But for a 64 bit value, on the fastest machine on the planet at that time, it was 350 years.

And for a 128 bit value (like a GUID), the size of those numbers is outside our ability to comprehend in any meaningful way. (That's why they picked that size).

Current Day

Computers have become quite a bit faster since then, but based on the fastest MIPS value on Wikipedia's instructions-per-second page today, it's still around 2 years. Yep, that's two years of doing nothing but adding one, on a staggeringly fast machine.
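As a rough back-of-the-envelope check (the rate here is my assumption, around 300 billion increments per second, in the same ballpark as the figures above):

```sql
SELECT 18446744073709551616.0       -- 2^64 possible bigint values
       / 300000000000.0             -- assumed increments per second
       / 31536000.0 AS ApproxYears; -- seconds per year; works out to roughly 1.9 years
```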

Many years back, I remember someone from the product group telling me they'd had the first customer who'd exceeded bigint for an identity column. But no surprise, crazy allocation of values was involved.

Business users are never going to run out of bigint values in sane processes, unless they do something unwise like using huge seed or increment values.

T-SQL 101: #52 Using TRANSLATE to make multiple string replacements in SQL Server

I previously mentioned that there were three functions for replacing values in strings. We saw REPLACE and STUFF. The other one is relatively new (SQL Server 2017) and it's called TRANSLATE.

Here's an example of how I can use it to change the format of a phone number:
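The original example isn't reproduced here, but a sketch along the same lines (the phone number format is made up) would be:

```sql
-- Replace each '(', ')' and '-' with a space, in a single call
SELECT TRANSLATE('(02) 9999-8888', '()-', '   ');
```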

The first parameter is the string to work on, the second parameter is a list of individual characters to replace, and the third parameter (which must be a string the same length as the second parameter), is the set of replacement characters.

Note: the documentation for this function says "The behavior of the TRANSLATE function is similar to using multiple REPLACE functions. TRANSLATE does not, however, replace a character more than once. This is dissimilar to multiple REPLACE functions, as each use would replace all relevant characters." However, whenever I test it, it does multiple replacements. (I suspect the docs mean that a character, once replaced, isn't replaced again in a cascading way; all occurrences of each listed character certainly do get replaced.)

If you want a function that replaces all occurrences and/or if you are using a version of SQL Server prior to 2017, take a look at our Translate function in our free SDU Tools for Developers and DBAs. It does just that.
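Assuming a parameter order similar to the built-in function (that signature is my assumption; check the tool's documentation), usage would look something like:

```sql
SELECT SDU_Tools.Translate('(02) 9999-8888', '()-', '   ');
```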

Learning T-SQL

It's worth your while becoming proficient in SQL. If you'd like to learn a lot about T-SQL in a hurry, our Writing T-SQL Queries for SQL Server course is online, on-demand, and low cost.

Learning Mandarin: Relative times for Months

I wrote in previous posts how Chinese refer to relative days and relative weeks in Mandarin. In this post, we'll look at how they refer to months.

月 (Yuè) is the word for month.

这个月 (Zhège yuè) means "This month"

Similar to weeks, the words for up and down are used to make previous and next months:

上个月 (Shàng gè yuè) means "Last month" (where Shàng means "Up")

下个月 (Xià gè yuè) means "Next month" (where Xià means "Down")

Other periods are a bit different though. While you can double the ups and downs as we could with weeks:

上上个月 (Shàng shàng gè yuè) is "The month before last" or literally "up up month".

下下个月 (Xià xià gè yuè) can be used for "The month after next", even though Google translates it as "Next Month" (I think it's just wrong on that).

More commonly, I seem to hear these variations used:

前一个月 (Qián yīgè yuè) can be used for "The month before last" or "The previous month". Qián means "In front of".

下个月之后 (Xià gè yuè zhīhòu) seems pretty common for "The month after next". The zhīhòu part means pretty much "after that".

Learning Mandarin

I'll write more soon on the best methods for learning Mandarin. If you want to get a taste for it in the meantime though, my current favorite site is iTalki, and my favorite teacher by far is Amy He. If you decide to try it, click here and it's cheaper for both you and me.

Snowflake SQL for SQL Server Users – Sequences: The good and the not so good

I've been working to convert our SQL Down Under sample databases like PopkornKraze across to Snowflake. One of the areas that I ran into real challenges with was sequences.

The good news

The syntax for creating and accessing them is basic enough. A simple CREATE SEQUENCE with just the sequence name is enough. (Note: I double-quote the name, as I don't want it automatically uppercased to SEQUENCENAME.)
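The statement referred to would be along these lines (the sequence name is illustrative):

```sql
CREATE SEQUENCE "SequenceName";
```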

And there are some of the same options that SQL Server has:

  • START WITH = 1            (you can even leave out the equals sign)
  • INCREMENT BY = 1      (again the equals is optional)
  • COMMENT = 'some comment'

Snowflake syntax also supports IDENTITY(seed, increment) if you want to define that at a table level. That is like a single table sequence but might help in migrating SQL Server code.

What I do really like are the options for replacing or only creating if not exists:
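Those forms look like this (again, the sequence name is illustrative):

```sql
CREATE OR REPLACE SEQUENCE "SequenceName";

CREATE SEQUENCE IF NOT EXISTS "SequenceName";
```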

We can't do that in SQL Server today.  (I've complained to the SQL Server team that these types of options don't exist for sequences).

Getting the next value for a sequence is easy enough:
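For example (sequence name is illustrative):

```sql
SELECT "SequenceName".NEXTVAL;
```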

Like SQL Server, there's no guarantee that you won't have gaps in your numbers that are allocated.

There is a table-level function called GETNEXTVAL() that takes the sequence name as a parameter. It's a one row table function that generates a unique value. It's used for precise control of sequence generation when many tables are involved.

The not so good news

First up, there's no concept of anything like Oracle's currval (current value), or @@IDENTITY or SCOPE_IDENTITY() in SQL Server. The documentation argues that this is an intentional omission, as such constructs encourage row-by-row coding. I can't say I buy that argument.

The biggest challenge (by far) is that you can't reset the next value to be issued by either an IDENTITY or Sequence. This really needs to get fixed. Lots of people are complaining about it in the Snowflake Lodge (user forums) and it just isn't a good thing. Here's an example:

  • You define a table using an IDENTITY column or a Sequence, let's say for a primary key.
  • You need to load existing data into it. This is easy, as there is no concept of anything like SET IDENTITY_INSERT ON/OFF. You can just insert.
  • The next row that gets added to the table will then probably have a duplicate primary key value.
  • And as Snowflake also doesn't check primary keys (or most other constraints), you'll really just end up with multiple rows with the same primary key.

ALTER SEQUENCE does allow you to change the increment, but won't let you change the next value to be issued. (Or the seed).

You have to drop and recreate the sequence. And here's the even-nastier part: Snowflake doesn't stop you dropping a sequence while a column still references it as a default. So even when you recreate a sequence with the same name, your next insert will blow up with an "object not found" error. (Again, people are complaining about this in the Snowflake Lodge.)

This means that you can't add sequences as defaults to tables until you've finished loading all your data. That's just not sensible, and breaks all the rules about separating DDL and data. The DDL that you write for a table should not be dependent upon what data happens to be loaded into a table.

How could you then write DDL that's used in multiple places? (Unless you just always start with a gigantic seed value).


This aspect of the product needs to be fixed. It's hard to believe it's so broken. Wish it wasn't so.


SQL: Passing different data types to a single stored procedure parameter

There was a recent question on the forums asking whether there was any way to create a SQL parameter with a defined set of allowed types, in TypeScript fashion. What they were after was something like this:

They wanted a single parameter that could have different data types, but only from a specific list of data types.

SQL Server doesn't support that type of construct directly, but what it does support is the sql_variant data type. So you could have defined the procedure like this instead:
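A minimal sketch of such a procedure (the names are mine, not from the original question):

```sql
CREATE PROCEDURE dbo.DoSomething
    @Value sql_variant
AS
BEGIN
    SET NOCOUNT ON;
    -- validation of the underlying data type would go here
END;
```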

That allows you to send in any data type. What remains is to make sure the value is one of the allowed data types. You can do that with the SQL_VARIANT_PROPERTY function.

As an example, you can find the underlying data type like this:
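For example (the parameter name is illustrative):

```sql
-- Returns the underlying type name, e.g. 'int' for an int value
SELECT SQL_VARIANT_PROPERTY(@Value, 'BaseType');
```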

And then just add logic to check the type name, and if correct, proceed onto casting the values.

SDU Tools: Version 17 shipped recently to SDU Insiders

Version 17 of our SDU Tools shipped out to all our SDU Insiders a few days ago.

It contains the following updates:

DatesInPeriod – Produces a table of dates from a starting date, for a number of intervals. For example, 3 weeks of dates from a starting date.

ServerMaximumDBCompatibilityLevel – Returns the maximum database compatibility level supported by the server.

SetDatabaseCompatibilityForAllDatabasesToMaximum – Sets the database compatibility level of all databases to the maximum value supported by the server.

WeekdayOfMonth – Calculates a specific weekday of a month. For example, the 4th weekday of February 2020.

DayNumberOfMonth – Calculates a specific day within a month. For example, the 3rd Tuesday of March 2020.

Countries – A brand new view that provides details of countries, along with their ISO codes, and numbers, and the continent that they're part of.

Currencies – A new view that provides details of the currencies in use around the world, their ISO codes, their currency symbols, and details of any minor units (like Cents are to Dollars in AUD or USD). If there are minor units, it tells you how many per major unit.

CurrenciesByCountry – A new view that provides a list of which countries use which currencies. (Some use more than one).

I hope you find them useful.

If you don't get our notifications and you'd like a copy of the tools (free to our insiders), you'll find more info here:

Book Review: TED Talks – The Official Ted Guide to Public Speaking

I do a lot of speaking at conferences, user groups, online, and other sorts of events. It's important to constantly improve, so I take notice of any books released on the topic. I recently listened (via Audible) to TED Talks – The Official Ted Guide to Public Speaking by Chris Anderson.

I've been a fan of TED and their conference talks for a long time. The overall quality of the talks is very high, and some (like Ken Robinson's talk on schools killing creativity) set a really high bar.

TEDx talks are locally-organized extension events. The quality isn't quite at the same level, but they're still often very, very good.

So I was fascinated to see this book was by Chris Anderson. Chris is basically the guy who runs TED nowadays. He took it over from the original founder many years back, and has seen it go to great heights. I was keen to hear Chris' thoughts. If anyone has listened to a lot of presentations, it's Chris.

A real bonus was that he narrated the audio book that I listened to.

The book was a little slow getting started, almost to the point where I was going to give up, but I'm so glad I persisted. It's a truly wonderful book. If you don't gain insights from Chris' thoughts, you aren't trying.

I was also quite fascinated in some of the background info about TED, and how involved Chris and the team are with each presentation. They really help to improve the final product that you see. Not surprisingly too, there are sessions that just don't make it out the door.

The Financial Times review of the book said "Excellent; easily the best public speaking guide I have read".

Chris describes his five key techniques: Connection, Narration, Explanation, Persuasion and Revelation. He then goes on to describe approaches to avoid. The book is really well constructed, and Chris is the ideal person to have written it.

Bottom line?

I really enjoyed this book. If you don't find the start of it all that compelling, I'd encourage you to keep at it. It's worth it. If you have any interest at all in public speaking, this is a wonderful book.

Greg's rating: 8 out of 10