Struct P2PGossipSync
pub struct P2PGossipSync<G, U, L>
where
    G: Deref<Target = NetworkGraph<L>>,
    U: Deref,
    L: Deref,
    <U as Deref>::Target: UtxoLookup,
    <L as Deref>::Target: Logger,
{ /* private fields */ }
Receives and validates network updates from peers, storing authentic and relevant data in a network graph. This network graph is then used for routing payments. Provides an interface to help with initial routing sync by serving historical announcements.
Implementations

impl<G, U, L> P2PGossipSync<G, U, L>
where
    G: Deref<Target = NetworkGraph<L>>,
    U: Deref,
    L: Deref,
    <U as Deref>::Target: UtxoLookup,
    <L as Deref>::Target: Logger,
pub fn new(
    network_graph: G,
    utxo_lookup: Option<U>,
    logger: L,
) -> P2PGossipSync<G, U, L>
Creates a new tracker of the actual state of the network of channels and nodes, assuming an existing NetworkGraph.
UTXO lookup is used to make sure announced channels exist on-chain, channel data is
correct, and the announcement is signed with channel owners’ keys.
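To make the role of the UTXO lookup concrete, here is a minimal, self-contained sketch of the kind of check it enables. All names here (ToyUtxoSource, TxOut, announcement_is_valid) are illustrative stand-ins, not the lightning crate's real UtxoLookup API:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a transaction output; the real types live in
// the bitcoin/lightning crates.
#[derive(Clone, PartialEq, Debug)]
struct TxOut {
    value_sats: u64,
    script_pubkey: Vec<u8>,
}

/// A toy UTXO source keyed by short channel ID, loosely mirroring what a
/// UtxoLookup implementation resolves against the chain.
struct ToyUtxoSource {
    utxos: HashMap<u64, TxOut>,
}

impl ToyUtxoSource {
    /// Returns the funding output for a short channel ID, if it exists on-chain.
    fn get_utxo(&self, short_channel_id: u64) -> Option<&TxOut> {
        self.utxos.get(&short_channel_id)
    }
}

/// Accept a channel announcement only if the claimed funding output exists
/// on-chain and its script matches the one derived from the announced keys.
fn announcement_is_valid(
    source: &ToyUtxoSource,
    short_channel_id: u64,
    expected_script: &[u8],
) -> bool {
    match source.get_utxo(short_channel_id) {
        Some(utxo) => utxo.script_pubkey == expected_script,
        None => false, // channel does not exist on-chain
    }
}
```

Without a lookup (i.e., `utxo_lookup: None` in `new`), announcements are accepted without this on-chain existence check.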
pub fn add_utxo_lookup(&self, utxo_lookup: Option<U>)
Adds a provider used to check new announcements. Does not affect existing announcements unless they are updated. Adding, updating, or removing the provider replaces the current one.
pub fn network_graph(&self) -> &G
Gets a reference to the underlying NetworkGraph which was provided in P2PGossipSync::new.

This is not exported to bindings users as bindings don't support a reference-to-a-reference yet.
Trait Implementations

impl<G, U, L> MessageSendEventsProvider for P2PGossipSync<G, U, L>
where
    G: Deref<Target = NetworkGraph<L>>,
    U: Deref,
    L: Deref,
    <U as Deref>::Target: UtxoLookup,
    <L as Deref>::Target: Logger,
fn get_and_clear_pending_msg_events(&self) -> Vec<MessageSendEvent>
impl<G, U, L> RoutingMessageHandler for P2PGossipSync<G, U, L>
where
    G: Deref<Target = NetworkGraph<L>>,
    U: Deref,
    L: Deref,
    <U as Deref>::Target: UtxoLookup,
    <L as Deref>::Target: Logger,
fn peer_connected(
    &self,
    their_node_id: &PublicKey,
    init_msg: &Init,
    _inbound: bool,
) -> Result<(), ()>
Initiates a stateless sync of routing gossip information with a peer using gossip_queries. The default strategy used by this implementation is to sync the full block range with several peers.

We should expect one or more reply_channel_range messages in response to our query_channel_range. Each reply will enqueue a query_scid message to request gossip messages for each channel. The sync is considered complete when the final reply_scids_end message is received, though we are not tracking this directly.
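The message flow above can be sketched with plain Rust types. GossipMsg and follow_up are hypothetical illustrations of the sequence only; the real message types live in lightning::ln::msgs:

```rust
/// Simplified stand-ins for the gossip_queries messages named above.
#[derive(Debug, Clone, PartialEq)]
enum GossipMsg {
    QueryChannelRange { first_blocknum: u32, number_of_blocks: u32 },
    ReplyChannelRange { short_channel_ids: Vec<u64>, sync_complete: bool },
    QueryShortChannelIds { short_channel_ids: Vec<u64> },
    ReplyShortChannelIdsEnd,
}

/// For each reply_channel_range received, the syncing node enqueues a
/// query_short_channel_ids asking for the gossip of every listed channel.
fn follow_up(reply: &GossipMsg) -> Option<GossipMsg> {
    match reply {
        GossipMsg::ReplyChannelRange { short_channel_ids, .. }
            if !short_channel_ids.is_empty() =>
        {
            Some(GossipMsg::QueryShortChannelIds {
                short_channel_ids: short_channel_ids.clone(),
            })
        }
        _ => None, // other messages trigger no follow-up query here
    }
}
```

The sync is finished when the peer's reply_scids_end for the last query arrives, which in this sketch corresponds to receiving ReplyShortChannelIdsEnd with no further follow-up.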
fn handle_query_channel_range(
    &self,
    their_node_id: &PublicKey,
    msg: QueryChannelRange,
) -> Result<(), LightningError>
Processes a query from a peer by finding announced/public channels whose funding UTXOs are in the specified block range. Due to message size limits, large range queries may result in several reply messages.

This implementation enqueues all reply messages into pending events. Each message will allocate just under 65 KiB. A full sync of the public routing table with 128k channels will generate 16 messages and allocate ~1 MB. This logic can be changed to reduce allocation if/when a full sync of the routing table impacts memory-constrained systems.
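The sizing claim above can be checked with a quick back-of-the-envelope calculation. The 8,000-SCIDs-per-reply figure below is an assumption chosen to match the ~65 KiB / 16-message numbers quoted, not a constant from the implementation:

```rust
/// Approximate number of reply_channel_range messages for a full sync,
/// assuming each ~65 KiB reply carries about 8,000 short channel IDs
/// (8 bytes each, ignoring headers and optional compression).
fn reply_messages_needed(num_channels: u64) -> u64 {
    const SCIDS_PER_REPLY: u64 = 8_000; // illustrative assumption
    num_channels.div_ceil(SCIDS_PER_REPLY)
}
```

With 128,000 channels this yields 16 replies, i.e., roughly 16 × 65 KiB ≈ 1 MB of allocation, matching the figures above.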
fn handle_node_announcement(
    &self,
    msg: &NodeAnnouncement,
) -> Result<bool, LightningError>
Handles an incoming node_announcement message, returning true if it should be forwarded on, false or returning an Err otherwise.

fn handle_channel_announcement(
    &self,
    msg: &ChannelAnnouncement,
) -> Result<bool, LightningError>
Handles an incoming channel_announcement message, returning true if it should be forwarded on, false or returning an Err otherwise.

fn handle_channel_update(
    &self,
    msg: &ChannelUpdate,
) -> Result<bool, LightningError>
Handles an incoming channel_update message, returning true if it should be forwarded on, false or returning an Err otherwise.

fn get_next_channel_announcement(
    &self,
    starting_point: u64,
) -> Option<(ChannelAnnouncement, Option<ChannelUpdate>, Option<ChannelUpdate>)>
Gets channel announcements and updates required to dump our routing table to a remote node, starting at the short_channel_id indicated by starting_point and including announcements for a single channel.

fn get_next_node_announcement(
    &self,
    starting_point: Option<&NodeId>,
) -> Option<NodeAnnouncement>
Returns the announcement for the next node, i.e., the node whose ID is greater (as defined by <PublicKey as Ord>::cmp) than starting_point. If None is provided for starting_point, we start at the first node.

fn handle_reply_channel_range(
    &self,
    _their_node_id: &PublicKey,
    _msg: ReplyChannelRange,
) -> Result<(), LightningError>
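The iteration contract of get_next_node_announcement above (ordering by <PublicKey as Ord>::cmp, i.e., by the key's bytes) can be sketched as follows; 2-byte IDs stand in for 33-byte public keys, and next_node_id is an illustrative helper, not an LDK function:

```rust
/// Returns the first ID strictly greater than `starting_point`, or the
/// first ID overall when `starting_point` is None, assuming `sorted_ids`
/// is ordered by raw bytes (mirroring <PublicKey as Ord>::cmp).
fn next_node_id(sorted_ids: &[[u8; 2]], starting_point: Option<&[u8; 2]>) -> Option<[u8; 2]> {
    match starting_point {
        None => sorted_ids.first().copied(),
        Some(start) => sorted_ids.iter().find(|id| *id > start).copied(),
    }
}
```

A caller dumping the routing table repeatedly feeds the last returned ID back in as starting_point until None is returned.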
fn handle_reply_short_channel_ids_end(
    &self,
    _their_node_id: &PublicKey,
    _msg: ReplyShortChannelIdsEnd,
) -> Result<(), LightningError>
fn handle_query_short_channel_ids(
    &self,
    _their_node_id: &PublicKey,
    _msg: QueryShortChannelIds,
) -> Result<(), LightningError>
Handles a peer's request to send routing gossip messages for a list of short_channel_ids.

fn provided_node_features(&self) -> Features<NodeContext>
Returns the NodeFeatures which are broadcast in our NodeAnnouncement message.

fn provided_init_features(
    &self,
    _their_node_id: &PublicKey,
) -> Features<InitContext>
Returns the InitFeatures which are sent in our Init message.

fn processing_queue_high(&self) -> bool
Indicates that there are a large number of ChannelAnnouncement (or other) messages pending some async action. While there is no guarantee of the rate of future messages, the caller should seek to reduce the rate of new gossip messages handled, especially ChannelAnnouncements.

Auto Trait Implementations
impl<G, U, L> !Freeze for P2PGossipSync<G, U, L>
impl<G, U, L> RefUnwindSafe for P2PGossipSync<G, U, L>
where
    G: RefUnwindSafe,
    L: RefUnwindSafe,
impl<G, U, L> Send for P2PGossipSync<G, U, L>
impl<G, U, L> Sync for P2PGossipSync<G, U, L>
impl<G, U, L> Unpin for P2PGossipSync<G, U, L>
impl<G, U, L> UnwindSafe for P2PGossipSync<G, U, L>
where
    G: UnwindSafe,
    L: UnwindSafe,
Blanket Implementations

impl<'a, T, E> AsTaggedExplicit<'a, E> for T
where
    T: 'a,

impl<'a, T, E> AsTaggedImplicit<'a, E> for T
where
    T: 'a,

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>

Wraps the value of type T in a tonic::Request.