Let me start by saying that all data we collect for statistical analysis is discrete. And by adding that it does not matter...
Why is there no actual continuous data? Because nothing we measure can be measured with infinite resolution. We may measure elapsed time in years, quarters, months, etc., but not in seconds. Even if we measured elapsed time in seconds, we would not do so in femtoseconds (because, for what we are studying, such a small resolution would be impractical and of no value). And even if we measured down to femtoseconds, we would not measure down to the Planck time.
And it does not matter. If we measure date/time down to a resolution which matters for our problem, we can very safely treat it as continuous for all our tests/inferences/etc. You measured date/time to a resolution of 1 year; if that serves the purposes of your investigation, so be it; treat it as continuous all you want. And you could have measured date/time by decades, centuries, quarters, months, days, etc. It is just the resolution of your measurement; if it gives you the resolution you need... And the resolution of our measurements does not affect the continuous/discrete nature of the underlying variable.
However, some variables are inherently continuous in nature (regardless of the resolution used to measure them), while others are inherently discrete. But even there, the distinction is, let's say, fluid (or more fluid than I think it should be...).
Many (most?) numerical variables are inherently continuous; temperature, length, weight, date/time (your case), duration, voltage, concentration of an analyte, etc. are continuous. We decide, as appropriate for our purposes, to measure them with a given resolution, but this does not change their continuous nature; we could have measured them with 100x the resolution (which would have been impractical, prohibitively expensive, etc. but we could have...).
And we have continuous statistical distributions to deal with/test these variables (Gaussian, t, chi-square, F, beta, gamma, Weibull, etc.).
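To make that concrete, here is a minimal Python sketch (numpy/scipy; the temperature readings are simulated, not real data, and the means are made up): a t-test is perfectly happy to treat readings recorded to a 0.1-degree resolution as continuous.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Temperatures recorded to 0.1 degC resolution -- discrete readings
# of an inherently continuous quantity.
sample_a = np.round(rng.normal(loc=21.0, scale=0.5, size=30), 1)
sample_b = np.round(rng.normal(loc=21.3, scale=0.5, size=30), 1)

# A two-sample t-test treats the rounded readings as continuous.
res = stats.ttest_ind(sample_a, sample_b)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```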
Other numerical variables (much fewer of them) are inherently discrete; pretty much all of these are counts. For example, a coin toss (Bernoulli), or the count of heads in $k$ coin tosses (Binomial), or the number of calls to a service center in an hour (Poisson), etc. Here, using tenths of counts, or tens of counts, does not make sense; the only rational increment is a single count.
And we have dedicated distributions and tests (binomial, Fisher exact, etc.) as well.
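Again a minimal sketch (Python/scipy, assuming scipy >= 1.7 where `binomtest` exists; the counts are invented): an exact test that works directly on the discrete count, with no continuous approximation involved.

```python
from scipy import stats

# 17 heads in 40 tosses: an exact binomial test on the count itself.
result = stats.binomtest(k=17, n=40, p=0.5)
print(f"exact binomial p-value: {result.pvalue:.4f}")
```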
Does the distinction matter? Certainly more so than the distinction between continuous data (which is a figment of the imagination) and discrete data (which all data is). But maybe not as much as it should?
The "best" (??) counter example is the $\chi^2$ test of association; it is used to compare 2 (discrete) proportions, but rellies on a continuous distribution. Now, under some circumstances, it gives reasonable results. But it treats discrete variables as if they were continuous (normal approximation of the binomial). So much so, that some have developped continuity corrections for it (to get results which come closer to the "exact" discrete tests).
Some practitioners also sometimes discretize continuous variables (by binning the data - which is what a histogram does, btw), or even dichotomize them. This practice is generally frowned upon, as it loses a lot of information, but it can sometimes be a pragmatic approach (fwiw, I have dichotomized data from continuous variables to obtain Tolerance Intervals, when the data was very non-normal. The price was of course larger sample sizes, but it was still a practical approach).
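For concreteness, a small Python sketch (numpy only; the variable, its distribution, and the specification limit are invented for illustration) of binning and dichotomizing a continuous variable:

```python
import numpy as np

rng = np.random.default_rng(7)
weights = rng.lognormal(mean=4.0, sigma=0.3, size=200)   # a continuous variable

# Binning: exactly what a histogram does under the hood.
counts, bin_edges = np.histogram(weights, bins=10)

# Dichotomizing against a (hypothetical) specification limit: each continuous
# value becomes a pass/fail count; information is lost, sample size must grow.
spec_limit = 80.0
failures = np.sum(weights > spec_limit)

print(f"bin counts: {counts}")
print(f"failures out of {weights.size}: {failures}")
```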
And it can even go further. If I were to compare, say, the number of events (accidents?) which occurred on a certain stretch of road in 2 different years, now the year would be treated as categorical data.
So, is there a difference between continuous and discrete data? No (continuous data does not exist - and it does not matter). Is there a difference between discrete and continuous variables? Yes, but statisticians can play loose with that difference.
Bottom line; your date/time data is continuous, regardless of the resolution (days or seconds, or years or decades).
Now, can you get a histogram of data from a discrete variable? Of course. Below is a histogram from a sample out of a B(100, .3) discrete binomial distribution:

Could I also have made a bar chart of it? Of course. See below.

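If you want to reproduce plots of that kind yourself, here is a minimal Python/matplotlib sketch (the sample size and seed are arbitrary, not from the figures above):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
sample = rng.binomial(n=100, p=0.3, size=500)   # sample from B(100, 0.3)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: bin the counts (here with bin width 1, centered on the integers).
ax1.hist(sample, bins=np.arange(sample.min(), sample.max() + 2) - 0.5)
ax1.set_title("Histogram of a B(100, 0.3) sample")

# Bar chart: one bar per distinct observed value.
values, freqs = np.unique(sample, return_counts=True)
ax2.bar(values, freqs)
ax2.set_title("Bar chart of the same sample")

plt.tight_layout()
plt.show()
```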
Could I take a histogram of data from a continuous variable? Of course.
And a bar chart? Yes, I could, but given that there will be few "ties" (exactly equal values), it will have a lot of bars with height 1... Not very informative.
Note that bar charts (or pie charts) are mostly used for purely categorical data (blood types, eye colors, ethnic origins, etc.), and not really for numerical data.
So your intuition that the appropriate chart depends on the nature of either your data or the variable is ill-founded. It depends on what you are trying to get the data to say...
Note also that a bar chart (of years) can be thought of as a histogram (with bin width = 1), and a histogram can be thought of as a bar chart (where each category has the width of the histogram's bins). So there is really no fundamental difference.
Lastly, I am not even sure I understand plotting a histogram of years. A histogram of a random variable $X$ shows, for each bin, how many values of $X$ fall in that bin. But date is usually not a random variable; we know when (year, month, day...) an event occurred. What may be random is how many such events occurred in that span of time. But then that is exactly a bar chart (with increments of years, decades, months, or minutes...). A histogram implies that the date (year, month, decade, ...) when "something" occurred is a random variable, and that you are plotting how many of those "dates" (no matter what scale) fall in the various bins. I struggle to think of practical, realistic situations where that would make sense (maybe the years when my favorite sports team achieved something: won a trophy, scored more than $n$ goals, etc. That histogram would have a lot of 0s...).
Now a bar chart assumes that the x-axis is categorical. But years are, at a minimum, on an interval scale. So a bar chart is equally "odd", no matter the scale (year, day, decade...).
What dates usually are is just time-stamps of when a measurement was made. The proper plot for such data is a time series plot, where you plot a count/measurement as a function of time. And there are many methods to display/analyze these.
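For example, a minimal Python/matplotlib sketch (the yearly event counts are simulated, not real data) of such a time series plot:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical yearly event counts, time-stamped by year.
years = np.arange(2000, 2021)
events = rng.poisson(lam=12, size=years.size)

# A time series plot: the count as a function of time.
plt.plot(years, events, marker="o")
plt.xlabel("Year")
plt.ylabel("Number of events")
plt.title("Events per year (time series)")
plt.show()
```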