
IPNS over pubsub loses resolve after 24 hours #996


Closed
markg85 opened this issue Aug 23, 2020 · 8 comments
Labels
kind/bug A bug in existing code (including security flaws)

Comments


markg85 commented Aug 23, 2020

Hi,

I'm using IPNS over pubsub, which works wonderfully!
But only for 24 hours, or so it seems.

This is confirmed by ipfs name publish --help, where the -t option states:

  -t, --lifetime   string - Time duration that the record will be valid for.
                            Default: 24h.

(I don't get why there's also a --ttl option.)

In this case I had just a site that nobody else knew about. I had published it using my local node, which is hardly ever online, only when I'm developing something. I also have a node on Vultr somewhere that resolves anything I publish locally. I did this in hopes that this little trick would keep the IPNS record alive in pubsub, since another host would then have it besides my local (often turned off) node.

But this appears not to be the case. If my local node is offline for 24+ hours then the remote node can't resolve the IPNS name anymore.

Is there a way to keep the IPNS record alive without periodically firing up the node that did the initial ipfs name publish?

Cheers,
Mark

@markg85 markg85 added the kind/bug A bug in existing code (including security flaws) label Aug 23, 2020
aschmahmann (Collaborator) commented Aug 24, 2020

This is really a go-ipfs issue and not a go-libp2p one (IPNS is not a libp2p component; it lives in IPFS land), so I'm going to close this issue (unfortunately, AFAIK GitHub won't let you transfer issues between orgs). If it's still relevant, feel free to open a new one in go-ipfs.

i don't get why there's also a --ttl option

As described in the spec https://github.com/ipfs/specs/blob/master/IPNS.md, the TTL recommends how long the record should be considered "fresh" (i.e. no need to re-query the DHT for updates), while the lifetime is about how long the record should be considered "valid" (e.g. if a malicious party prevented you from getting IPNS updates, how long could a user go without being prompted that something fishy was happening).
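To make the distinction concrete, here is a hypothetical invocation showing both knobs side by side (the CID placeholder and the chosen durations are mine, not from this thread):

```shell
# --lifetime: how long the signed record stays *valid*
#             (the publisher must re-sign before it expires).
# --ttl:      how long resolvers may *cache* the answer
#             before re-querying for a fresher record.
ipfs name publish --lifetime=48h --ttl=10m /ipfs/<your-cid>
```

So lifetime bounds the record's validity window, while TTL only tunes caching; a short TTL with a long lifetime means frequent re-checks against a record that remains verifiable for a long time.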

Is there a way to keep the IPNS record alive without the need of periodically firing up the node that did the initial ipfs name publish

Yes and no. Yes, you could just set the Lifetime to be very long. No, this won't really work until ipfs/kubo#7537 is fixed (or you set the IPNS republish times on your node to be super long).

See ipfs/kubo#7572 for links tracking the IPNS todo list (as well as the issues they stemmed from)

markg85 (Author) commented Aug 24, 2020

Hi @aschmahmann,

Thank you for your reply!
I did want to re-open it on ipfs, but looking over ipfs/kubo#7572, it does seem to cover this just fine.

One question though.

Yes and no. Yes, you could just set the Lifetime to be very long. No, this won't really work until ipfs/kubo#7537 is fixed (or you set the IPNS republish times on your node to be super long).

Would adding a new ipfs key that is shared between my local node and my always-online node work too?
So if I make a new key with ipfs key gen --type rsa --size 2048 shared-key,
add that key to both nodes (as I now know how, hehe),
then publish using:
ipfs name publish <hash> --key=shared-key

Does that mean that both nodes get to be the "owner", and only one of the two needs to be online for the IPNS renewal after 24 hours?
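For anyone trying this, a sketch of the key-sharing setup being described, under some assumptions: newer go-ipfs releases have ipfs key export / ipfs key import (on older versions you would instead copy the key file out of $IPFS_PATH/keystore by hand), and the file name and CID here are placeholders of mine:

```shell
# On node A: generate the key and export it to a file.
ipfs key gen --type rsa --size 2048 shared-key
ipfs key export shared-key            # writes shared-key.key locally

# Copy shared-key.key to node B (e.g. via scp), then import it there:
ipfs key import shared-key shared-key.key

# Either node can now publish under the same IPNS name:
ipfs name publish --key=shared-key /ipfs/<your-cid>
```

As noted below, having two nodes publish with the same key is unsupported territory; at most one node should publish at a time.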

aschmahmann (Collaborator) commented

@markg85 probably, if you ensure that only one of them is publishing at a time and they're in sync, but no guarantees. IPNS can get unhappy when there are multiple publishers using the same key.

So my answer is "maybe, but it's unsupported behavior if it happens to work".

aschmahmann (Collaborator) commented

@markg85 I misspoke about the ramifications of ipfs/kubo#7537.

(or you set the IPNS republish times on your node to be super long).

This is actually about setting the IPNS Lifetimes (https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#ipnsrecordlifetime) to be super long, which may actually work out ok for your use case.
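The config key being referenced can be set from the CLI; a minimal sketch, assuming the Ipns.RecordLifetime key from the linked config docs (the one-year value is the example from this thread, not a recommendation):

```shell
# Raise the default record lifetime to one year
# (the value is a Go duration string, 8760h = 365 days).
ipfs config Ipns.RecordLifetime 8760h

# Inspect the resulting Ipns section of the config:
ipfs config show | grep -A 3 '"Ipns"'
```

After changing this, records published without an explicit --lifetime flag pick up the new default.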

markg85 (Author) commented Aug 24, 2020

@aschmahmann That's great! Pun intended :P
I'll give that a shot as soon as it expires again, to see whether the shared-key method works out fine.

Am I right in assuming that IPNS currently only works properly if the author of the ipfs name publish is alive (in other words, the node that holds its private key is reachable)? If that's the case, it would make sense! But it is very... centralized, I would say, as then only "getting the record to others" is decentralized. The single point of truth that all nodes would (directly or indirectly) get their update from would be the single node that did the initial name publish.

markg85 (Author) commented Aug 26, 2020

@aschmahmann to confirm your suspicion:
Using a shared key doesn't work.

It's now been (much) more than 24 hours since my local node was online, and resolving the IPNS name is now indeed impossible.
Next up is setting a super long lifetime. I wonder if I can set it to 8760 hours (1 year). You won't hear from me for quite some time if that works :P
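For reference, the one-year experiment described here would look something like the following (the CID is a placeholder of mine):

```shell
# 8760h = 365 days; the record then remains *valid* for a year,
# so any node still holding it can serve it even while the
# publishing node is offline.
ipfs name publish --lifetime=8760h /ipfs/<your-cid>
```

Note this stretches validity, not freshness: readers may still re-query per the TTL, but the record they find won't expire for a year.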

aschmahmann (Collaborator) commented

@markg85 having a long lifetime is actually the correct approach here; it's just that the mechanics of using it are off (e.g. there should be an easy way to track keys that you're following and publish updates to them as long as the records are allowed to be alive).

By the way go-ipfs v0.7.0-rc1 is out and fixes ipfs/kubo#7537

markg85 (Author) commented Aug 27, 2020

@markg85 having a long lifetime is actually the correct approach here; it's just that the mechanics of using it are off (e.g. there should be an easy way to track keys that you're following and publish updates to them as long as the records are allowed to be alive).

By the way go-ipfs v0.7.0-rc1 is out and fixes ipfs/go-ipfs#7537

Well, if you explain it that way, it sure does make more sense to me.

But let's go into detail a little to make it crystal clear!
It might even be something you'd want to document in the --help output of the IPNS command.

So, I should see this as:
IPNS is comparable to "registering a domain", where I say how long it's registered for. If I map that to IPNS naming, that would be the --lifetime argument, correct?

Continuing that reasoning: if I "register" a name with ipfs name publish --lifetime=8760h, then the world (all ipfs nodes) should consider that IPNS record valid for that amount of time. Only I, as "the registrar" if you will, can update the lifetime. Or more specifically, only the private key that created it can update its lifetime, correct?
If this is correct, then the default IPNS lifetime really should be much longer than 1 day...

Now the interesting bit comes with the --ttl. As you said, it tells how fresh the record is. If I'm correct, this is where following nodes can re-broadcast the freshness. Which only means that those followers "don't know either", only that it's still the freshest record they've seen. So if none of the followers has anything newer, then what was known is still accurate, correct?

In this whole setup it's apparently the intention to have one party (or rather, one private key) be able to update the lifetime, and to have any number of nodes tell you how fresh an IPNS record is (those obviously know the lifetime too). An IPNS name is therefore online/accessible as long as there is any node alive that can still serve a valid record. The TTL would be neat too, but it doesn't decide whether the record stays alive.

I'm having difficulty explaining the TTL stuff, but if I mentally compare IPNS with registering a domain, and consider the TTL setting like the TTL on the DNS entries within a domain, then I think I get it.

I think the best short-term improvement for IPNS would be the shared-key publishing case, as that currently seems broken. If that worked, it would already be much less of an issue to keep an IPNS name online with its default settings, as you could then have a normal developer setup (a local, sometimes-online node) with a key shared with a remote, always-online node.

As for the 0.7.0 RC: awesome! :)
Still, I'm going to wait until the final is out, as I now have a custom IPFS docker image and don't quite like updating it again in a few weeks... It's a bit of a hassle.
