*(Edit: This post seems to have been put on Reddit and has caused some downtime.)*

*This post is a pseudo-continuation of Part I.*

This series is officially canceled! Let’s discuss why.

I was preparing some rather elaborate code for this post. Basically I was going to start playing with structure and type classes for you all. But I ran into some issues. Ones that turned me off.

Let’s dive right in.

Consider the following definition of `Monoid`, together with a declaration that `Int` is an instance of `Monoid`.

```haskell
class Eq a => Monoid a where
    add   :: a -> a -> a
    addId :: a

instance Monoid Int where
    addId = 0
    add   = (+)
```

To mathematicians, this is essentially stating what operations and elements a monoid must have, and stating that `Int`, or $\mathbb{Z}$, is a monoid given the assignments shown.
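The laws that declaration is morally promising can at least be spot-checked. Here is a standalone sketch (the class is renamed `MyMonoid` so it does not clash with the `Monoid` shipped by modern Preludes; `assoc` and `hasId` are hypothetical helper names) testing associativity and the two-sided identity on a few sample values:

```haskell
-- Renamed copy of the post's class, to avoid the Prelude's Monoid.
class Eq a => MyMonoid a where
  add   :: a -> a -> a
  addId :: a

instance MyMonoid Int where
  addId = 0
  add   = (+)

-- Haskell never verifies these laws itself; we can only spot-check.
assoc :: Int -> Int -> Int -> Bool
assoc x y z = add (add x y) z == add x (add y z)

hasId :: Int -> Bool
hasId x = add addId x == x && add x addId == x

main :: IO ()
main = print (and [assoc x y z | x <- vals, y <- vals, z <- vals]
           && all hasId vals)
  where vals = [-2 .. 2] :: [Int]
```

Running it prints `True`; nothing stops an unlawful instance from compiling just as happily.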

Now consider:

```haskell
data UnivariatePolynomial t = UnivariatePolynomial [(t, Int)]
```

Basically, it says that $P(t)\cong L(t\times\mathbb{Z})$ where the polynomials over $t$ are denoted $P(t)$, and lists of $t$ are denoted as $L(t)$. Now consider an isomorphism $\phi : L(t)\to P(t)$ for monoid $t$.

```haskell
phi :: Monoid a => [a] -> UnivariatePolynomial a
```

Assume this is suitably defined. Given that we have declared no other monoids (we have only declared integers as such), what do you expect

```haskell
phi [1,2,3,4,5]
```

to return? If you said “the polynomial $5x^4+4x^3+3x^2+2x+1$”, you’d be absolutely…WRONG.

This is what was particularly upsetting. $\phi([1,2,3,4,5])$ is ambiguous from the type theoretical point of view. Let’s figure out why.

Well, we have a list of integers $l \in L(\mathbb{Z})$. And $\phi : L(t)\to P(t)$ for monoid $t$. So $\phi(l)\in P(\mathbb{Z})$ since $\mathbb{Z}$ is a monoid, yes? Of course.

Except that’s not how Haskell works.

In an effort to alleviate issues with explicit type casting of numerical types, Haskell has a type class called `Num`. Here is its definition.

```haskell
class (Eq t, Show t) => Num t where
    (+), (-), (*) :: t -> t -> t
    negate        :: t -> t
    abs, signum   :: t -> t
    fromInteger   :: Integer -> t
```

You have all sorts of numerical types, all of which should contain the integers. So by creating an umbrella class to cover all of these numerical types, we can write generalized functions, like addition, that work on all of them.

How does this play into my distaste? Well.

```
*Main> phi [1,2,3,4,5]

<interactive>:1:20:
    Ambiguous type variable `t' in the constraints:
      `Num t' arising from the literal `5' at <interactive>:1:20
      `Monoid t' arising from a use of `phi' at <interactive>:1:0-21
    Probable fix: add a type signature that fixes these type variable(s)
```

So what does this mean? It is saying $5$ has a type belonging to the class `Num`, but `phi` is looking for a monoid. All of that is indeed true! Is it getting confused because it cannot decide whether $5$ is a `Num` or a `Monoid`?

Turns out that’s not the issue. **The issue is that Haskell doesn’t know that $5$ is an integer.** More specifically, it knows it could be interpreted as an integer, but it could be interpreted as any numerical type. So we have to clarify.

```
*Main> phi [1,2,3,4,5] :: UnivariatePolynomial Int
5x^4 + 4x^3 + 3x^2 + 2x + 1
*Main> phi [1,2,3,4,5::Int]
5x^4 + 4x^3 + 3x^2 + 2x + 1
```

We have to clarify that $5$ is an `Int`? Seriously?

```
*Main> :type 5
5 :: (Num t) => t
```

Yep. Well, okay, it makes sense if you consider \[\big\vert\mathtt{Monoid}\ t\cap\mathtt{Num}\ t\big\vert > 1\] to be true. And in general mathematics, it is. But for our definitions, it **is** $1$; I have only defined integers to be an instance of monoids, and Haskell assumes that though I have only declared integers an instance of such, I *may* declare more in the future!

The fine folks at `#haskell` clarify.

```
[18:33] <conal> Quadrescence: i think this behavior is referred to as "the open world assumption".
[18:35] <conal> Quadrescence: i guess the behavior you expected is like nonmonotonic logic. where learning new information invalidates previous deductions.
```

You can say that again. Of course I don’t expect any more than I have already asserted.

```
[18:35] <Quadrescence> conal: So let me get this straight. Haskell reads 5, and just determines that it is of type Num t => t. So we have [Num t => t]. Use (phi), we have that [(Num t, Monoid t) => t]. And it is not able to tell what t can possibly be because I perhaps would define more instances of Monoid in the future, and therefore it can't discern that it is the only current possibility: an Int
[18:36] <mauke> no, (Num t) => [t]
[18:36] <Quadrescence> mauke: I did it for emphasis.
[18:36] <mauke> you are using syntax errors and/or different types
[18:36] <Quadrescence> mauke: I'm not talking to Haskell interpreters, I'm talking to humans over the internet.
[18:36] <conal> Quadrescence: right (modulo mauke's correction).
```

Pardoning `mauke`'s pedantry, this indeed seems to be the case.
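Conal's "open world assumption" can be made concrete. Below is a standalone sketch (class renamed `MyMonoid` to avoid the modern Prelude's `Monoid`; `summation` is the helper from the epilogue) of why the compiler refuses to commit:

```haskell
-- With only the Int instance below, the choice in
--   summation [1,2,3,4,5]
-- looks forced...
class Eq a => MyMonoid a where
  add   :: a -> a -> a
  addId :: a

instance MyMonoid Int where
  addId = 0
  add   = (+)

-- ...but any module importing this one could add an instance like
-- this, after which a bare literal list really would be ambiguous:
instance MyMonoid Double where
  addId = 0
  add   = (+)

summation :: MyMonoid a => [a] -> a
summation = foldr add addId

main :: IO ()
main = do
  print (summation [1, 2, 3, 4, 5 :: Int])     -- 15
  print (summation [1, 2, 3, 4, 5 :: Double])  -- 15.0
  -- print (summation [1,2,3,4,5])  -- rejected: ambiguous, just like phi
```

Since the compiler cannot know whether that second instance will ever be written, it treats the literal list as ambiguous either way.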

I can accept this mathematically, but not practically. If I am to accept this mathematically, then Haskell's `Num` is a major issue I don't want to deal with (and I don't have to; `#haskell` said I should rewrite the entire default standard library, the `Prelude`). If I am to accept it practically, I need to not choose Haskell. The air on which Haskell programmers seem to thrive reeks of the foul stench of cargo cult mathematics, something of which I don't want to be a part. *[Edit: Please read the follow up.]*

## Epilogue

Here’s the code I was going to incrementally present in Part II, if you are interested in exploring further.

```haskell
-- MONOID
class Eq a => Monoid a where
    add   :: a -> a -> a
    addId :: a

-- ABELIAN GROUP
class Monoid a => Group a where  -- Abelian
    addInv :: a -> a

-- PSEUDORING
class Group a => PseudoRing a where
    mul :: a -> a -> a

-- RING
class PseudoRing a => Ring a where
    mulId :: a

-- EUCLIDEAN DOMAIN
class Ring a => EuclideanDomain a where
    degree :: a -> Int
    divide :: a -> a -> a
    modulo :: a -> a -> a
    x `modulo` y | y /= addId = add x $ addInv $ mul (x `divide` y) y

-- FIELD
class Ring a => Field a where
    mulInv :: a -> a

--instance Field a => EuclideanDomain a
--    where
--        degree x = 0
--        x `divide` y = x `mul` (mulInv y)

-- CONVENIENCE FUNCTIONS
summation :: Monoid a => [a] -> a
summation = foldr add addId

sub :: Group a => a -> a -> a
sub x y = x `add` (addInv y)

square :: PseudoRing a => a -> a
square x = mul x x

gcf :: EuclideanDomain a => a -> a -> a
gcf x y | degree y <= 0 = x
        | otherwise     = gcf y (x `modulo` y)

divi :: Field a => a -> a -> a
divi x y = x `mul` (mulInv y)

infix 7 `mul`, `divi`
infix 7 `divide`, `modulo`, `gcf`
infix 6 `add`, `sub`

-- INTEGERS
instance Monoid Int where
    addId = 0
    add   = (+)

instance Group Int where
    addInv = negate

instance PseudoRing Int where
    mul = (*)

instance Ring Int where
    mulId = 1

instance EuclideanDomain Int where
    degree = abs
    divide = div
    modulo = mod

-- QUOTIENTS
data Quotient t = Quotient (t,t)

instance PseudoRing t => Eq (Quotient t) where
    -- a/b == c/d iff a*d == c*b (cross-multiplication)
    Quotient (a,b) == Quotient (c,d) = a`mul`d == c`mul`b

instance (Ord a, PseudoRing a) => Ord (Quotient a) where
    Quotient (a,b) <= Quotient (c,d) = a`mul`d <= c`mul`b

instance Ring a => Monoid (Quotient a) where
    addId = Quotient (addId, mulId)
    Quotient (a,b) `add` Quotient (c,d) =
        Quotient (a`mul`d `add` b`mul`c, b`mul`d)

instance Ring a => Group (Quotient a) where
    addInv (Quotient (a,b)) = Quotient (addInv a, b)

instance Ring a => PseudoRing (Quotient a) where
    Quotient (a,b) `mul` Quotient (c,d) = Quotient (a`mul`c, b`mul`d)

instance Ring a => Ring (Quotient a) where
    mulId = Quotient (mulId, mulId)

instance Ring a => Field (Quotient a) where
    mulInv (Quotient (a,b)) = Quotient (b,a)

instance (Show a, Ring a) => Show (Quotient a) where
    show (Quotient (x,y)) | y == mulId = show x   -- show x/1 as x
                          | otherwise  = show x ++ "/" ++ show y

infix 7 //
(//) :: t -> t -> Quotient t
x // y = Quotient (x,y)

--- POLYNOMIALS
data Monomial a = Monomial a Int
data UnivariatePolynomial a = UnivariatePolynomial [Monomial a]

coef :: Monomial a -> a
coef (Monomial x _) = x

expo :: Monomial a -> Int
expo (Monomial _ e) = e

psingle :: a -> Int -> UnivariatePolynomial a
psingle x n = UnivariatePolynomial [Monomial x n]

pconst :: a -> UnivariatePolynomial a
pconst x = psingle x 0

lcoef :: UnivariatePolynomial a -> a
lcoef (UnivariatePolynomial xs) = coef $ head xs

pnorm :: Monoid a => [Monomial a] -> [Monomial a]
pnorm = filter ((/= addId) . coef) . puniq . sort'
    where sort' = foldr ins []
          ins x [] = [x]
          ins x (y:ys) | expo x > expo y = x : y : ys
                       | otherwise       = y : ins x ys
          puniq []  = []
          puniq [t] = [t]
          puniq (x@(Monomial a n) : xs@((Monomial b m):ys))
              | n == m    = puniq ((Monomial (a`add`b) n) : ys)
              | otherwise = x : puniq xs

instance Eq a => Eq (Monomial a) where
    (==) (Monomial a n) (Monomial b m) = n == m && a == b

tneg :: Group a => Monomial a -> Monomial a
tneg (Monomial a n) = Monomial (addInv a) n

tmul :: PseudoRing a => Monomial a -> Monomial a -> Monomial a
tmul (Monomial a n) (Monomial b m) = Monomial (a`mul`b) (m`add`n)

instance Eq a => Eq (UnivariatePolynomial a) where
    UnivariatePolynomial a == UnivariatePolynomial b =
        length a == length b && and (zipWith (==) a b)

instance Monoid a => Monoid (UnivariatePolynomial a) where
    addId = UnivariatePolynomial []
    UnivariatePolynomial a `add` UnivariatePolynomial b =
        UnivariatePolynomial (pnorm (a ++ b))

instance Group a => Group (UnivariatePolynomial a) where
    addInv (UnivariatePolynomial a) = UnivariatePolynomial (map tneg a)

instance PseudoRing a => PseudoRing (UnivariatePolynomial a) where
    mul (UnivariatePolynomial a) (UnivariatePolynomial b) =
        UnivariatePolynomial $ pnorm [tmul x y | x <- a, y <- b]

instance Ring a => Ring (UnivariatePolynomial a) where
    mulId = pconst mulId

instance Field a => EuclideanDomain (UnivariatePolynomial a) where
    degree (UnivariatePolynomial []) = -1
    degree (UnivariatePolynomial xs) = expo (head xs)
    f `divide` g | n < m     = addId
                 | otherwise = (f `sub` h`mul`g) `divide` g `add` h
        where n = degree f
              m = degree g
              h = psingle (lcoef f `divi` lcoef g) (n - m)

listToPoly :: Monoid a => [a] -> UnivariatePolynomial a
listToPoly x = UnivariatePolynomial $ pnorm (listToPoly' x 0)
    where listToPoly' [] _     = []
          listToPoly' (x:xs) n = (Monomial x n) : (listToPoly' xs (n+1))

p :: Monoid a => [a] -> UnivariatePolynomial a
p = listToPoly

interleave :: a -> [a] -> [a]
interleave x []        = []
interleave x [y]       = [y]
interleave x (y:yy:ys) = y : x : interleave x (yy:ys)

instance Show a => Show (Monomial a) where
    show (Monomial x n) | n == 0    = show x
                        | n == 1    = (show x) ++ "x"
                        | otherwise = (show x) ++ "x^" ++ (show n)

instance Show a => Show (UnivariatePolynomial a) where
    show (UnivariatePolynomial xs) =
        "(" ++ (concat $ interleave " + " $ map show xs) ++ ")"

type Z        = Int
type Zpoly    = UnivariatePolynomial Int
type Qpoly    = UnivariatePolynomial (Quotient Int)
type Q        = Quotient Int
type QofZpoly = Quotient (UnivariatePolynomial Int)
```

I think Haskell looks futuristic. Magic!

Or, just replace `Int` with `Integer` throughout the file; then reload in ghci and:

```
[1 of 1] Compiling Main
Ok, modules loaded: Main.
*Main> p [1,2,3,4,5]
(5x^4 + 4x^3 + 3x^2 + 2x + 1)
it :: UnivariatePolynomial Integer
```

Magic!

You seem to have gotten quite worked up over what is, at heart, a type-defaulting issue. You should look at the numeric-prelude package on Hackage; it is a replacement for the Prelude which breaks compatibility with Haskell98 in exchange for a much finer type-class hierarchy. You'll find that most of the classes in your example (rings, fields, monomials, etc.) are already there. The standard Prelude class hierarchy is showing its age: it predates a lot of advanced type-system features, and also reflects the then-prevailing understanding of what was needed, not current ideas. (Case in point: Monad doesn't depend on Functor, and does include `fail`. If it weren't for compatibility, everyone would want that fixed.)

tl;dr: Give Haskell another chance, and look at the numeric-prelude and the `default` statement.

Thank you for the nice comment.

I don’t reject Haskell so much as a programming language in general. I am just bothered by some of the idiosyncrasies mentioned in this post.

I can get worked up over languages. I have spent time with a lot of them, and we all of course are entitled to our own opinions.

We will see about giving Haskell another chance if I find wonderful reason to. But having implemented an interpreter and compiler of an almost-superset of Haskell’s type system, it is just becoming more and more evident, to me, that it is becoming less and less appropriate for programming.


The problems with certain old number/math-related type classes in Haskell are well known. NumericPrelude, which may eventually replace the current stuff, may address your concerns.

Separate compilation basically requires the open world assumption. It’s either that or you end up with some really strange non-monotonic logic where loading new modules breaks existing modules.

I suppose there could be support for closing type classes when their definitions are only visible in one module, but that seems like a really narrow use case.

Writing `phi [1 :: Int, 2,3,4,5]` doesn't seem so bad. You might even be able to avoid it using the Haskell defaulting mechanism. (You'd want this module to use `default (Int)` instead of the implied `default (Integer, Double)`.) This facility was created specifically for the practical problem you are discussing.
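For reference, a `default` declaration is a single line at the top of a module. A caveat worth flagging: under Haskell 98 rules, defaulting only fires when every class in the ambiguous constraint set is a standard one and at least one is numeric, so whether it can rescue a constraint involving a user-defined class like the post's `Monoid` is doubtful. A minimal standalone sketch of the mechanism itself:

```haskell
-- Module-local defaulting; this replaces the implied
--   default (Integer, Double)
default (Int)

main :: IO ()
main = print (5 ^ 2)  -- this otherwise-ambiguous Num literal resolves to Int
```

With the declaration in place the program prints `25` at type `Int` rather than `Integer`.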

I agree, and I think everyone agrees, that the Num class is a jumble of things that could have usefully been split into about 10 other classes. There are various solutions to this “problem” ranging from ignoring it, to a replacement prelude, to relatively minor language extensions (see the class alias extension proposal for one partial solution with applicability to a number of similar issues), to an extended language definition such as the Haskell’ project.

Another interesting proposal is this one to generate better error messages when typing problems arise from the open world assumption or more complicated issues of context satisfiability in the presence of type classes.

Thanks for this. I didn't know of `default` before.

Lisp still has too many parenthesis.

I think you mean parentheses.

Your complaints seem overblown to me. `Num` is a hack meant to support overloaded numerical literals and operators; it's not supposed to make mathematical sense. How are type annotations problematic here?

Anyway, do check out existing Hackage packages, such as http://hackage.haskell.org/package/constructive-algebra and http://hackage.haskell.org/package/HaskellForMaths

Annotations aren’t a problem. I just don’t see them as the solution (in this case).

Haskell does indeed have a bad *interpreter* for the things you want to do. GHCi doesn't determine instances of type classes, and even when it does, it does not work well. (The proposals would let you fix this with a default instance or two.)

But when you use GHC, the *compiler*, you don't notice this bug: either a function is polymorphic in the type class, and GHC adds it to the type signature, or else you've explicitly specified the type, and GHC knows which instance to use.

Relative to the main "crowd" of users, you're in a dark, dusty corner of the language; it's sad that most of them don't care enough to fix it. But at least there's a proposal; consider how someone writing "C for Mathematicians" or "Python for Mathematicians" would be received when they pointed out a problem.

Anyways, sorry to see that it’s canceled; I liked the first post.

If it was truly enjoyed, I’d look into just sucking it up and going on with it. I didn’t really receive much response from the first post, so it didn’t seem worthwhile continuing anyway.

That you didn’t receive much response to your first post might be a reason not to continue, but it is not a reason to bash conal and mauke.

Think about it: you were actually being tutored in Haskell by Conal Elliott — and then show gratitude by accusing him and others of ‘cargo cult mathematics’ . All of this just because you were tripping up over tedious details about defaulting. It is so immoral, the mind is stunned.

It is you who are guilty of cargo cult mathematics, and what is more you know this; the accusation can only be understood as case of projection.

I was not accusing Conal Elliott or *anyone else in particular* of *anything*. If that wasn't clear, it should be now. *(Edit: added emphasis, since applicative seemed to not understand the first time.)*

What is supposed to clear this up? It is plain in the text of your blog. You quote your discussion with mauke and conal, who make excellent clarifying points, then say the air they and others breathe "seems to reek of the foul stench of cargo cult mathematics". This is a wrong that you do to Haskell programmers in general (by the way, no one can stop windbags from studying Haskell; it is a language, like French), but it is a wrong that you are doing to mauke and conal in particular, when you ought to be grateful to them in particular. I am confident that you still haven't studied what Conal said about "learning new information", else you wouldn't have written the next sentence.

Edit: You might as well suggest that it's not murder when the murderer says "It wasn't murder". The way you will cease to be accusing conal and mauke of cargo cult mathematics is to alter the post, and to apologize, not to assert elsewhere that you were not wronging them personally. You *are* wronging them, unusually talented people who were trying to teach you, at no cost, because of their love of the topic, and you will continue to wrong them as long as the text of this post stands unedited; the injustice will be the greater the longer you delay. These are plain moral facts.

The reason for the open world assumption is to obtain sane behaviour in the face of separate compilation.

While compiling a module, the compiler shouldn’t make any instance selection based on the fact that another instance doesn’t exist.

This is because any future module might come along, import this one, and add some other instance that the compiler had assumed wouldn’t exist when it compiled the first module. Potentially confusing and inconsistent behaviour results as definitions are combined between the two.

Like many of the concessions made in order to achieve separate compilation, it’s a bit of a drag, but if you want short compile times *and* reasonably sane behaviour, well, there you go.

It seems the snippet that the rest of the Internet has picked up on from this article is the “foul stench of cargo cult mathematics” bit. I was wondering if the author could clarify or elaborate in what way this was meant? I think it would help the discussion both here and elsewhere.

Sure. I’ll elaborate right here.

In mathematics, we strive to rid our expositions of ambiguity, and we deem the expositions with the least ambiguity "rigorous." One, perhaps unfortunate, side effect of the loss of ambiguity is that words may not exist which sufficiently describe the relationship or structure we're trying to relay, so we make them up. Often these words have Greek or Latin underpinnings, and since they are very precise in their meaning, many of them are often used in conjunction with one another. This has the somewhat pernicious effect of making mathematics less approachable to laymen, and so many assume mathematicians are "geniuses" who not only don't choke on these hyper-polysyllabic words, but can understand them and reason about them. One may contrast the famous "you can't comb the hair of a coconut flat" with "on any tangent vector field to an even-dimensional $n$-sphere, there exists at least one zero vector," to see the problem in motion.

There are individuals, however, who thrive on the "mysteriousness" afforded by this unapproachability. They revel in, rather than find distaste for, the "genius" image pinned to those who can speak in this technical tongue. These individuals assimilate the concepts, and then pervert them, giving them names which are not appropriate — a kind of pseudo-mathematics. Haskell's `Monad` is a prime and very uncontrived example. It is impossible to ensure that a monad within Haskell conforms to the monadic axioms (in other words, it is impossible to ensure a monad within Haskell is actually a monad). Furthermore, though it is often regarded as a historical mistake, monads in Haskell are not instances of the `Functor` type class. This is particularly strange, because monads in category theory are functors by definition. By considering the use of this pseudo-mathematics as reasoning in the decision process for the Haskell language, we arrive at the conundrum described in the post. Claiming mathematics as the reasoning for such errors contradicts the flagrant disregard for mathematical correctness with `Monad` or `Num`, for example. It is this contradiction with which I take severe offense.

Haskell's relationship to category theory, which has been so thoroughly emphasized, has had a profound effect on the majority of the community. I have personally spoken with countless Haskell programmers who, either implicitly or explicitly, assert they are expert users of the language. They are keen to emancipate themselves from their previous ways of reasoning about programs in favor of The Haskell Way™, which happens to be strongly founded on mathematical ideas. I feel it is clear most Haskell programmers are not mathematicians, professional or hobbyist, and their knowledge of mathematics is generally below what is required to properly understand category theory and its implications from a global perspective. I would suspect their understanding of categorical objects has its roots in one of the many "monad tutorials," which usually reduce the idea of a monad to something more concrete.

As a result, the community becomes inundated with programmers who make a sort of claim to understand Haskell (and, erroneously, therefore category theory). In order to keep the reputation in which they wrapped themselves, they feel required to unnecessarily (a) use the aforementioned technical terms in programming contexts, and (b) write code which is overly generalized or complex, usually under otherwise baseless pretenses (something is probably wrong when you're using `Functor f => Algebra f b -> GAlgebra f (Cofree f) a -> (f :~> f) -> FixF f -> a` as a type declaration). The definition of cargo cult programming and "mathematics" resonates with this sort of behavior. I hope the reason why Haskell's community nurtures such behavior, directly or indirectly, should be obvious.

This makes a lot of sense, and is a fair criticism. It's one that I hope a lot of the Haskell community takes to heart. I could quibble with some of it… (for example, Monad not being related to Functor happened for purely historical reasons, and everyone would be delighted if there were a clean way to make the two related properly). But there is a very good point buried deeply here. At some point, one must ask the question "okay, perhaps you can prove that your code satisfies the axioms of that mathematical structure, but why? How does that relate to the problems you're solving?" Evaluated in this light, I think we'd find that a good bit of the math formalism in Haskell is easily justifiable… but a lot of it would have a hard time weighing positively against the complexity added by tossing it in. (Incidentally, it's my opinion that the numeric-prelude package you're being pointed to would fall into the latter category: perhaps we *can* split Num into 10 different type classes, but that would make a fundamental part of the language 10 times harder for average programmers to understand, and it's clear that nearly all numeric types used by Haskell programmers come equipped with all of the structure in Num; when they don't, one can either invent new operators, or else fall back to runtime errors.)

I think most of the negative reaction you see here is that the example you chose is a really poor one. The type ambiguity you've identified is *not* related to any of the mathematical underpinnings of Haskell; rather, it's about providing reasonably predictable behavior of a changing program over time, and giving a programmer a reasonable shot at knowing if their program will compile or not. (Cale's point about separate compilation is also a good one.) And the underlying feature that made the type ambiguity necessary — namely, that integer literals are polymorphic in their type — arises from Haskell's decision about whether to have subtyping or implicit type coercions; another completely reasonable language choice that's made in various ways in any language with a static type system. Whether you like Haskell's choice here — which is to not have subtypes or implicit conversions, and therefore to not fix the type of an integer literal, and therefore to give an ambiguity error when the proper type can't be inferred from context — is unrelated to this point about "cargo cult mathematics". The two should be discussed separately, because they are basically orthogonal to each other.

The idea that the Monad type class was introduced into Haskell without awareness that every monad on a category is also a functor, and that some mistake was being made on this point, is another slanderous falsification, which you could have avoided with about 2 minutes' research. The role of Moggi's papers, especially http://www.disi.unige.it/person/MoggiE/ftp/ic91.pdf, is quite familiar; you will certainly have seen references to it, but presumably never bothered to look. Note the definition he gives five pages in, which of course begins "A monad over a category C is a triple (T, eta, mu), where T : C -> C is a functor…" citing Mac Lane. That a Monad type class was declared *inside* Haskell without a Functor type class *inside* Haskell is in no way a mistake, mathematical or otherwise; in general, a policy of declaring every conceivable superordinate type class before declaring any type class would obviously be moronic, yet this is your objection.

Some of what you state here applies to programming just as much as it does mathematics. Most programmers show some dislike for Greek and Latinate words, but they make up for it in spades with names such as `AbstractBaseClassFactoryFactory`, `DependencyInjectionOfTheThirdKind` and `ObserverStrategyPattern`.

Haskell is one of very few languages which was actually designed with its formal semantics in mind. All language implementations are "rigorous", since they do have a formal semantics, but, as it turns out, understanding the formal semantics of *ad hoc* languages such as C and Java is a huge undertaking.

Moreover, the semantics of Haskell clearly has a mathematical character; for instance, my guess is that Errett Bishop's *Foundations of Constructive Analysis* could be translated to Haskell with little effort.

The Haskell community uses mathematical terms whenever these are appropriate to describe features of Haskell or its libraries. Haskellers are unique among computer scientists and software engineers for having remarkably little tolerance for buzzwords.

Haskell’s usage of category theory is somewhat idiosyncratic from a math perspective: how many mathematicians can be said to “properly understand” the Curry-Howard-Lambek correspondence or William Lawvere’s categorical logic? Most Haskell programmers need not care at all about the implications of category theory on math.

I keep reading this claim that the `Monad` name is inappropriate because Haskell does not enforce the monadic axioms. Haskell is a programming language, not a general-purpose proof system, so its compiler is not designed to do this. Such proof systems do exist, including Agda (which is written in Haskell and has a Haskell-inspired syntax) and Coq (which can verify constructive proofs and extract their computational content to Haskell).

As for the `Num` example, to the best of my knowledge, no-one in the Haskell community holds it up as an example of great mathematics. As other commenters have stated, it's a hack meant to support convenient syntax, and better solutions are available in non-standard Preludes.

Something is probably wrong when you take a random example from category-extras or a related library and refer to it as real-world code, or as something that all Haskell programmers are expected to understand.

You might find Coq more to your liking. It's a lot like Haskell but is designed to support proving; e.g., it allows one to prove that the monad laws hold of a given instance, has its own corresponding category-extras library, and has been used to give a completely formal proof of the 4-color theorem.

> It is impossible to ensure that a monad within Haskell conforms to the monadic axioms (in other words, it is impossible to ensure a monad within Haskell is actually a monad).

Well, no, it's not impossible. You just can't, within the language definition, ask the compiler to prove it for you. You can ensure that a monad is actually a monad the same way that you could if I wrote down "This is a monad" and followed it with a triple of operations. That is to say, you could look at it, and then think about it. That seems like a legitimate design choice for a general-purpose programming language, especially since you can, where necessary, extract verified code from something stronger.

And just to add:

> something is probably wrong when you're using `Functor f => Algebra f b -> GAlgebra f (Cofree f) a -> (f :~> f) -> FixF f -> a` as a type declaration

Something is probably wrong when you take the type signature for a zygohistomorphic prepromorphism as serious business!
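"Look at it, and then think about it" can even be made semi-mechanical. A sketch (sample values only, not a proof; `f` and `g` are arbitrary hypothetical Kleisli arrows) spot-checking the three monad laws for `Maybe`:

```haskell
-- Two arbitrary functions into Maybe, for exercising the laws.
f, g :: Int -> Maybe Int
f = Just . (+ 1)
g = Just . (* 2)

-- return x >>= f  ==  f x
leftIdentity :: Int -> Bool
leftIdentity x = (return x >>= f) == f x

-- m >>= return  ==  m
rightIdentity :: Maybe Int -> Bool
rightIdentity m = (m >>= return) == m

-- (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
associativity :: Maybe Int -> Bool
associativity m = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))

main :: IO ()
main = print (all leftIdentity [0, 1, 2]
           && all rightIdentity [Nothing, Just 3]
           && all associativity [Nothing, Just 3])
```

This is evidence rather than the proof the type system cannot demand; a tool like QuickCheck automates exactly this kind of law-testing.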

I don’t know why you think Haskell has any great relationship to category theory.

There was no category theory influence when Haskell was originally designed.

I’d say the influences came from lambda calculus and somewhat from domain theory.

Later on, Haskell borrowed some names (Monad and Functor) from category theory, which seems appropriate, since those type classes capture part of what it means to be a monad and a functor.

Regarding your complaint. I feel your pain, because I’ve had similar complaints now and then.

But I think the choice of having classes be open and assume nothing about future instances is the right one given various practical concerns. It’s not a problem that occurs that much in practice.

BTW, I think you want to use the type Integer instead of Int in your instance declarations. Integer (with 0 and (+)) forms a monoid, which is not something you know for sure about Int.

tru


Haskell chooses to overload the syntax for integers. This enables a LOT of very nice applications, including using them to provide you with polynomials directly from the numeric literal.
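A toy sketch of what "polynomials directly from the numeric literal" can mean (a hypothetical `Poly` type, not the post's `UnivariatePolynomial`): by giving `fromInteger` a polynomial-valued definition, bare literals become constant polynomials, so ordinary arithmetic syntax builds polynomials.

```haskell
-- Coefficients stored lowest degree first: Poly [1,2,3] is 1 + 2x + 3x^2.
newtype Poly = Poly [Integer]
  deriving (Eq, Show)

-- The indeterminate.
x :: Poly
x = Poly [0, 1]

instance Num Poly where
  fromInteger n = Poly [n]                -- the literal 5 becomes 5 + 0x
  Poly a + Poly b = Poly (zipLong a b)
    where zipLong (p:ps) (q:qs) = p + q : zipLong ps qs
          zipLong ps     []     = ps
          zipLong []     qs     = qs
  Poly a * Poly b =                       -- naive convolution
    Poly [ sum [ a !! i * b !! (k - i)
               | i <- [0 .. k], i < length a, k - i < length b ]
         | k <- [0 .. length a + length b - 2] ]
  negate (Poly a) = Poly (map negate a)
  abs    = id                             -- stubs; fine for the sketch
  signum = id

main :: IO ()
main = print (3 * x * x + 2 * x + 1)      -- prints Poly [1,2,3]
```

The literals `3`, `2`, and `1` are silently lifted through `fromInteger`, which is exactly the overloading being described.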

It occasionally requires a type annotation. The alternative would occasionally require an explicit conversion.

It is a perfectly reasonable architectural design decision, and one that Haskell is uniquely suited to explore due to the existence of type classes.

There is no pseudo-mathematical cargo-cult nonsense involved. Merely a violation of your expectations.

Now, the Num typeclass _is_ crap. That is a legitimate complaint. However, those of us who have built more anal-retentive numerical towers have not gained much traction, due to the fact that ~20 years of software have been built on top of it, and it's a convenient fiction.

I feel your pain.

This is not the only case of ambiguity in Haskell.

And I agree with Chris Smith that cargo cult is a separate issue; we should not mix the two together.

Ten minutes of category theory, twenty minutes with a computer system, and already far too high an opinion of your likely contribution to either. I suggest a few years' toil, a big helping of humility, and radio silence until you have something to report that is not already well known. How about a contribution that is not already covered elsewhere? You don't like it? Well, fix it, or do something better… Please grow up and get with the programme, bro'.

This post was from a long time ago in internet time. But in any case, can you show “prior art”?